Floating Point Numbers (Part1: Fp vs Fixed) - Computerphile

  156,432 views

Computerphile

5 years ago

How much does a floating point processor improve floating point operations? Dr Bagley installed one to find out - and explains how computers store the bits.
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 259
@K1RTB 5 years ago
Computerphile: my main source of „nought“
@hexerei02021 3 years ago
nought button -> 10:25
@Roxor128 5 years ago
There are other approaches, too. I remember reading an article about an investigation one guy did into "floating-bar numbers", which were a restricted form of rational numbers (fitting into 32 or 64 bits) where the number of bits for the numerator and denominator could vary, though would be limited to a total of 26 bits in the 32-bit implementation (the other 6 bits being used for the sign and bar position). Another approach being a logarithmic system, where numbers are stored as their logarithms. It has the advantage of multiplication, division, powers and roots being fast, but with the penalty of addition and subtraction being slow. The Yamaha OPL2 FM synthesis chip uses one internally, operating on log-transformed sine-wave samples, then uses a lookup table to convert to a linear form for output.
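For a feel for the logarithmic approach, here is a minimal sketch in C (an assumed encoding for illustration, not the OPL2's actual internal format): positive values are stored as their base-2 logs, so multiplication collapses to addition, with a conversion step to get back to linear form.

    #include <math.h>

    /* Toy logarithmic number system: store log2(x) for positive x.
       Multiplication and division become addition and subtraction. */
    double lns_encode(double x)          { return log2(x); }  /* x > 0 assumed */
    double lns_decode(double lx)         { return exp2(lx); } /* back to linear, cf. the OPL2's lookup table */
    double lns_mul(double lx, double ly) { return lx + ly; }  /* represents x * y */
    double lns_div(double lx, double ly) { return lx - ly; }  /* represents x / y */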
@FyberOptic 5 years ago
There's that horrible moment in any programmer's life when they realize that floating point calculations don't work on computers the way they do in real life, and all of their code suddenly has to be built around this fact.
@rwantare1 5 years ago
My method: 1. Try long instead of float. 2. Accept a range for the correct answer and round it. 3. Give up and look up the stackoverflow question explaining how to do it
@nakitumizajashi4047 5 years ago
That's exactly why I use integers to do financial calculations (all amounts are expressed in cents).
@rwantare1 5 years ago
@@nakitumizajashi4047 clearly you never have to divide.
@rcookie5128 5 years ago
Hahaha yeah
@theshermantanker7043 4 years ago
Analog Computers would help a lot
@Rchals 5 years ago
>>> 0.1 + 0.2 == 0.3
False
really was a great moment in my life
@bjornseine2342 5 years ago
Had a similar moment with an assignment last year... Had a calculation that was supposed to output an upper triangular matrix (no idea whether it's called that in English; basically everything below the diagonal was supposed to be zero). Well, it wasn't... Took me 1/2h+ to figure out that I was using floats and the entries were very close to zero, just not precisely. I felt quite stupid afterwards :D
@jhonnatanwalyston6645 5 years ago
python3
>>> 0.1 + 0.2
0.30000000000000004
@jhonnatanwalyston6645 5 years ago
round(0.1+0.2, 1) == 0.3 # quick-fix LOL
@platin2148 5 years ago
Ricard Miras Sadly it could have been already messed up by converting from ascii to float. Inside the lexer.
@EwanMarshall 5 years ago
>>> from decimal import *
>>> getcontext().prec = 28
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.2')
False
@echodeal7725 5 years ago
Floating Point numbers are two small Integers in a trenchcoat pretending to be a Real Number.
@g3i0r 5 years ago
More like they're pretending to be a Rational Number.
@9999rav 5 years ago
@@g3i0r they are rational numbers... but they are pretending to be real instead
@Kezbardo 5 years ago
What's your name? Vincent! Uhh... Vincent... Realnumber!
@g3i0r 4 years ago
@Username they can't represent all rational numbers either, hence my comment.
@proudsnowtiger 5 years ago
I've just been writing about the new RISC-V architecture, which is a modular ISA where integer, single precision and double precision maths instructions are design options. There are a lot of interesting tech, economic and ecosystem aspects to this project, which is an open-source competitor to ARM - would love to see your take on it.
@hrnekbezucha 5 years ago
Many embedded devices get by just fine with fixed point arithmetic, to save on the cost of the MCU. RISC-V and ARM give people the option to include the floating point module. Another factor is speed. So even if you need floating point, you can do it in software and the calculation will take some 20 or however many clock cycles, while the FP module would do it in one cycle. CPU architecture is a really great topic, but probably not friendly for a bite-size video
@foobar879 5 years ago
Yeah, RISC-V is really nice, can't wait for the vector extension to be implemented! Meanwhile I'll keep fiddling with the K210 on Sipeed's boards.
@hrnekbezucha 5 years ago
@@robertrogers2388 Also, if you want to license a chip from ARM, they'll charge you a relatively hefty fee for each chip made. One more reason RISC-V gets so much traction lately. It's becoming more than a proof of concept.
@floriandonhauser2383 5 years ago
I actually developed a RISC-V processor at uni (VHDL, run on an FPGA). The modularity was pretty helpful
@johnfrancisdoe1563 5 years ago
Robert Rogers Talking of RISC-V and doing other number types in software, has anyone built a properly optimized multi-precision integer library for it, without timing side channels? Because the lack of arithmetic flags and conditional execution has me worried this is an anti-security processor, compared to the MIPS, OpenSparc and ARM.
@ZintomV1 5 years ago
This is a really great video by Dr Bagley!
@merseyviking 5 years ago
Love the Illuminatus! Trilogy / Robert Anton Wilson reference in the number 23. It's my go to number after 42.
@davesextraneousinformation9807 5 years ago
Back in the days of TTL logic, I got to implement a design that used logarithmic numbers for a multi-tap Finite Impulse Response (FIR) filter. The number system we came up with to represent the logs was very much like a floating point number with an exponent and a mantissa. We had a radar signal to simulate, so there was a large dynamic range to handle. I think the input was a 12 bit unsigned number and we had something like 64 samples to multiply and accumulate. These were the days just before "large" multipliers were commonly available. That made using the logs an attractive solution.

We used interpolation between 8 of the FIR weights to eliminate 56 multipliers, but still, how to accumulate the multiplication products? Enter the log adder. With some simple algebra, one can effectively add log numbers. Part of that process was linearizing the mantissa, shifting it according to its exponent, and adding that to the other number's linearized mantissa. Then the result was normalized, the mantissa converted to a log, and you have a sum.

That experience piqued my interest in how video signals are handled, beginning in the CCD or CMOS sensor chip and on. In the years since, I have never come across anything other than a chip with a wider and wider integer output. I think some start-ups have promised wider dynamic ranges, but I don't know what has come of it. Does anyone know of chips with anything other than integer digital outputs?
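The log-adder identity mentioned above can be sketched in a few lines of C (a rough illustration only; a TTL design like the one described would replace the log2/exp2 calls with small lookup tables and fixed-point shifts):

    #include <math.h>

    /* Add two numbers given as base-2 logs La = log2(a), Lb = log2(b):
       log2(a + b) = La + log2(1 + 2^(Lb - La)), taking La >= Lb so the
       correction term stays in a small, easily tabulated range. */
    double log_add(double La, double Lb) {
        if (La < Lb) { double t = La; La = Lb; Lb = t; }
        return La + log2(1.0 + exp2(Lb - La));
    }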
@procactus9109 5 years ago
I may not understand, but would an MCU with over a hundred outputs fit that? Surely software can make those outputs mean anything you like?
@davesextraneousinformation9807 5 years ago
@@procactus9109 Well, there are many considerations to get an increased dynamic range. The first one is that the sensor analog performance has to have a higher accuracy and a lower noise floor. Next is the analog to digital converter must also have commensurate performance. No amount of software can generate information that is not in the signal to begin with. What I was musing about was whether the data bus width is increased linearly as better sensors are developed, or are there different number systems, like a floating point number system that is output from the A/D converter. My guess is that the outputs increase in width one bit at a time, since each new bit represents twice the previous dynamic range.
@procactus9109 5 years ago
So you want to input a very high dynamic range with minimal pins, logarithmic-like? I'll keep that in mind. I'm interested in any kind of sensor, though I've not noticed any sensor that does that. But I'm sure something like that must be out there for specific sensors. I've seen some unusually high numbers on some of the new on-chip sensors (MEMS?). Just curious, as far as ADCs go, how many bits do you dream of?
@willynebula6193 5 years ago
I'm a bit lost!
@Soken50 5 years ago
Did you get carried away ?
@anisaitmessaoud6717 3 years ago
Me too, I think it's not well explained for the general public
@VascoCC95 3 years ago
I see what you did there
@yearlyoatmeal 2 years ago
@@anisaitmessaoud6717 r/whoosh
@PaulPaulPaulson 5 years ago
When a third party DLL was running another third party DLL with code that executed in parallel and changed the FPU precision settings at seemingly random points in time, that was the hardest problem I ever had to debug. Looked like magic until I figured out the cause. Before, I didn't even expect those settings to be shared among DLLs.
@johnfrancisdoe1563 5 years ago
Paul Paulson Of course CPU registers are shared with DLLs. But I'm surprised no one told the DLL authors that the floating point settings need to be preserved across the library call boundary, just like (for example) the stack pointer.
@lawrencedoliveiro9104 5 years ago
6:17 What's missing is called "dynamic range". Also note it's not about large versus small numbers, but large versus small *magnitudes* of numbers. Remember that negative numbers are smaller than positive ones (and zero).
@rcookie5128 5 years ago
Thanks for the episode, really appreciate it!
@ABaumstumpf 5 years ago
Minecraft comes to mind - there you can quite easily notice the problem, as the game has a rather large world. Especially in older versions - once you got a few thousand blocks away from the origin, everything started to be a bit funky, because distances were relative to the absolute world origin (instead of player- or chunk-centered). Movement became stuttery, and particles and not-full-block entities became distorted.
@glowingone1774 5 years ago
yeah, on mobile devices it's possible to fall through blocks due to the error in position
@noxabellus 5 years ago
I believe "a few thousand blocks" is an understatement... After checking, yes, it was after 16 *million* blocks from the origin, which still gave it a total unaffected area of 1,024,000,000 sq km - about double the surface area of Earth
@kellerkind6169 5 years ago
Far Lands Or Bust
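The effect the thread above describes is easy to reproduce: near 30,000,000 a 32-bit float can only step in increments of 2, so sub-block movement simply vanishes (a small demo, assuming IEEE 754 single precision):

    #include <stdio.h>

    int main(void) {
        float x = 30000000.0f;                 /* between 2^24 and 2^25, where floats step by 2.0 */
        printf("%.2f\n", (double)(x + 0.5f));  /* 30000000.00: the half-block move is lost */
        printf("%.2f\n", (double)(x + 2.0f));  /* 30000002.00: only whole steps of 2 survive */
        return 0;
    }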
@gordonrichardson2972 5 years ago
At 01:40 he talks about recompiling the program to use the floating point co-processor. When I was programming in Fortran in the 1990s, the compiler had an option to detect this at run-time. If the co-processor was present it would be used, otherwise an emulator software library would be used instead. The performance difference was notable, but it was easier to release a single program that was compatible with both.
@JeffreyLWhitledge 5 years ago
When attempting to execute a floating-point processor instruction without the coprocessor installed, an exception (interrupt) would be raised. The handler for that interrupt would then perform the calculation via software emulation and then return. It was seamless, but the performance difference was huge.
@gordonrichardson2972 5 years ago
Agreed (my memory is rusty). For testing, there was a flag during compilation, so that the emulator would execute the instructions in software as if the co-processor was never installed.
@mrlithium69 5 years ago
some compilers can do this.
@DaveWhoa 5 years ago
cpuid
@vinsonwei1306 2 years ago
Holy Smoke! Didn't realize there're so many holes in the range of float32. Great video!
@angeldude101 7 months ago
There is a number system called the dyadic rationals, which form precisely 0% of all ℝeal numbers. Every single representable float value that isn't infinite or infinitesimal (floats don't actually have 0; they have positive and negative infinitesimals that they wrongly call "0"), even with arbitrary precision, is a dyadic rational, and with only finite memory, you're still missing most dyadic rationals anyways. (You do however get 16 million ways to write "error," which form 0.3% of all float values.) Specifically, the dyadic rationals are the integers "adjoined" with 1/2, so every sum and product formed from the integers with every power of 1/2.
@Smittel 5 years ago
"Its lossy but it doesnt really matter" *Minecraft Beta laughing 30.000.000 blocks away*
@teovinokur9362 4 years ago
minecraft bedrock laughing 5000000 blocks away
@jecelassumpcaojr890 5 years ago
As more and more transistors became available, the improvement in floating point hardware was greater than the improvement in the main processor (as impressive as that was). So the difference on a more modern machine would be a lot more than the 4 times of the late 1980s computer.
@jeffreyblack666 5 years ago
I'm disappointed that this doesn't actually go through how they work and instead just says how they store the bits.
@jeffreyblack666 5 years ago
@ebulating If that was the case then they wouldn't exist. The fact that they do exist means it can be explained.
@louiscloete3307 5 years ago
@Jefferey Black I second!
@visualdragon 5 years ago
@@jeffreyblack666 assume that @ebulating said "...too complicated to explain" in a 15 minute video on YouTube.
@jeffreyblack666 5 years ago
@@visualdragon Except now they have released a part 2 which goes over addition in 8 minutes (although I haven't yet watched it). They have now changed the title to something far more appropriate rather than the clickbait they had before.
@SebastianPerezG 5 years ago
I remember when I tried to run 3D Studio Release 4 on my 386 PC and it told me "you need a numeric coprocessor"; my uncle had one, brought it over and installed it. Old times ...
@todayonthebench 5 years ago
Floating point in short is a trade between resolution and dynamic range. If dynamic range is important, then floating point is a good option. (though, one can do this without floating point, but it gets fiddly...) If resolution is important, then integers are usually a better option. (and if any degree of rounding errors or miscounting leads to legal issues, then integers are usually the safe option. (ie banking software.))
@conkerconk3 2 years ago
In Java, there exists the "BigDecimal" class, which is a much slower but more accurate way to represent decimal numbers, which is what one might use for banking I guess
@brahmcdude685 3 years ago
really really terrific video. this should be taught in school - "over the ocean" [greta]
@damianocaprari6991 5 years ago
He did not really talk about the FPU though
@Milithryus 5 years ago
Yes this video is exclusively about floating point representation. Being able to represent floating points, and computing operations on them are very different problems. Disappointing.
@wallythewall600 5 years ago
Simple enough. Take the numbers and compare the exponents. You fix the smaller exponent to become the larger one and rewrite the mantissa to keep the value the same, then just add mantissas.

Suppose you have a larger (in magnitude) floating point number and a smaller (again, in magnitude) floating point number. Say the exponent portion for the larger number is 5 ("101" in binary; keep in mind I won't be writing all the leading zeroes that would be in the actual floating point representation, for simplicity's sake) and the smaller number's exponent is 4 ("100"). The difference between them is 1. Now, to do the exponent-changing magic, all you need to do is shift the mantissa right by the difference in the exponents. Consider the mantissa of the smaller number to be "1.011", where I included the binary point since I'm considering not just the normalized fractional part of the mantissa but also the unit part. If you want to turn the 4 exponent into a 5, you shift the mantissa right once, and your mantissa becomes "0.1011". Check for yourself: "1.011" with exponent 4 is the same as "0.1011" with exponent 5. You can also check that if the difference were larger, you'd just keep shifting right by the difference in exponents, tacking on leading zeroes to the mantissa as needed.

The problem is we only have a fixed number of bits for the mantissa. If you have to shift right and the last binary digit in the mantissa is a 1 and not just a trailing zero, it simply drops off when you shift (losing information/"precision" in the process). And if we have to shift right 24 times and we only have 23 binary digits for the mantissa... we end up storing just zeroes. I've gotten more unending loops in some of my programs by not keeping this in mind.

Hardware-wise, you need binary integer addition to find the difference in exponent bits, a right bit-shifter for rewriting the mantissa to fit the new exponent, and again binary integer addition to add the mantissas. You also need a few small things like registers to store information (you need to remember the largest exponent and the difference, for example) and some hardware for the case where the mantissa sum carries past the unit bit (you again need to shift the mantissa and bump the exponent to get back to a normalized representation), but it's not complicated to imagine how you could implement this.
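Here's that alignment step as a toy C sketch (hypothetical (mantissa, exponent) pairs with an explicit leading 1 at bit 23; signs, rounding, and special values are all ignored):

    #include <stdint.h>

    /* value = mant * 2^exp, with mant normalized into [2^23, 2^24) */
    typedef struct { uint32_t mant; int exp; } toyfloat;

    toyfloat toy_add(toyfloat a, toyfloat b) {
        if (a.exp < b.exp) { toyfloat t = a; a = b; b = t; }   /* make a the larger exponent */
        int diff = a.exp - b.exp;
        uint32_t aligned = (diff < 24) ? (b.mant >> diff) : 0; /* low bits simply drop off */
        toyfloat r = { a.mant + aligned, a.exp };
        while (r.mant >= (1u << 24)) { r.mant >>= 1; r.exp++; } /* renormalize on carry out */
        return r;
    }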
@johnfrancisdoe1563 5 years ago
Wally the Wall Still too basic. Things get complicated when you want to do maximum-speed double floats on a 16- or 32-bit integer-only CPU. His demo example must have done lots of other stuff for the speedup to only be about 4x.
@wallythewall600 5 years ago
@@johnfrancisdoe1563 Well, I explained it about as deeply as Computerphile would have. It's not like they were going to go into actual architecture design details, which I myself absolutely have no idea about.
@Cygnus0lor 5 years ago
Check out the next part
@SouravTechLabs 5 years ago
Excellent video. But I have a request! Can you add links to the videos in the 15:36 thumbnail previews to the description? That would make life easier!
@TheToric 5 years ago
I'm impressed that GCC still supports that architecture.
@digitalanthony7992 5 years ago
Literally just had a quiz on this stuff yesterday.
@slpk 5 years ago
Wouldn't a GoPro-like camera positioned above the paper, filming straight down, be better for the kinds of zoom you do? I would think the shots wouldn't get skewed like these ones do.
@Debraj1978 2 years ago
For someone used to fixed point: a simple "if" statement like if (a == b) will not work in floating point. Also, in general, an "if" comparison takes longer to calculate in floating point.
@MattExzy 5 years ago
It's one of my personal favourite units.
@marksykes8722 5 years ago
Still have a Weitek 3167 sitting around somewhere.
@treahblade 5 years ago
I actually ran this program on a 486DX, which has a floating point unit on it. Adding 2 does not actually solve the problem, at least on that processor. The last numbers you get are 15, 16, 18, 20, 22. The weird one here is the jump from 15 -> 16; it should go to 17. In hex it's 4b7fffff and 4b800000.
@DarshanSenTheComposer 5 years ago
It's called *QUICK MAFFS* !!!
@billoddy5637 5 years ago
int var;
var = 2 + 2;
printf("2 + 2 is %d ", var);
var = var - 1;
printf("- 1 that's %d ", var);
printf("QUICK MAFFS! ");
@ExplicableCashew 5 years ago
@@billoddy5637
man = Man(kind="everyday", loc="block")
man.smoke("trees")
@lawrencedoliveiro9104 5 years ago
15:34 Ah, but it could make a difference if you are trying to do very long baseline interferometry. For example, setting up a future radio telescope array. Maybe call it the “Square Light-Year Array”.
@Lightn0x 5 years ago
An observation that I don't think was mentioned: the leading digit is not necessarily 1. When the exponent is minimal, the digit is treated as 0 (i.e. it's no longer 1.[x]*2^y, but rather 0.[x]*2^y).
@lostwizard 5 years ago
That only applies to special cases that are not normalized (which, in my not-so-humble opinion, are a misfeature of common floating point representations). In a properly normalized floating point number, the only possible value that doesn't have a leading 1 is zero.
@Lightn0x 5 years ago
@@lostwizard Maybe so, but the IEEE 754 standard (which this video describes and which all modern CPUs use) operates this way. Also, you call it a misfeature, but it does have its advantages (for example, it allows for more precise representations of very small numbers). Trust me, many minds more bright than yours or mine have thought out this standard, and if they thought this special case was worth implementing, they probably had their reasons :)
@lostwizard 5 years ago
@@Lightn0x Sure. I've even read the reasoning for it. I just don't agree that everyone should be saddled with it because five people have a use for it. (Note: hyperbole) I'm sure it doesn't cause much trouble for hardware implementations other than increasing the die real estate but it does make software implementations more "interesting" if they need to handle everything. Any road, I wouldn't throw out IEEE 754 just because I think they done goofed. :)
@tamasdemjen4242 5 years ago
It's to prevent division by 0 due to underflow. Assume `a` and `b` are different numbers, but so close to each other that `a - b` would give a result of 0. That's called an underflow. Then 1 / (a - b) would cause a division by zero, even though `a` is not equal to `b`. Denormal (or subnormal) numbers guarantee that additions and subtractions cannot underflow. So if a != b, then a - b != 0. Yes, it requires extra logic in the hardware. Also, there are two zeros, positive zero, and negative zero. He couldn't possibly mention everything in a short video. There's a document called "What every computer scientist should know about floating-point arithmetic". It's 44 pages and VERY intense.
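That guarantee is easy to check (a small demo; it assumes the platform hasn't enabled flush-to-zero, which would disable subnormals):

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        float a = FLT_MIN;               /* smallest normal float, 2^-126 */
        float b = nextafterf(a, 2.0f);   /* the adjacent representable float */
        printf("a == b: %d\n", a == b);          /* 0: they really differ */
        printf("b - a: %g\n", (double)(b - a));  /* ~1.4e-45, a subnormal, not 0 */
        return 0;
    }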
@johnfrancisdoe1563 5 years ago
Tamas Demjen Seems like a short summary document of the hype variety. IEEE 754 representation became extremely popular due to Intel's hardware implementation, but like any design it has its quirks and implementation variations. Other floating point formats do exist and have different error characteristics. Some don't have negative 0, many don't have the NaN concept (slows down emulation), most use different exponent encoding and binary layout. For example, Turbo Pascal for x86 without a coprocessor had a 6-byte real type. The Texas Instruments 99/4 used a base-100 floating point format to get a 1:1 mapping to decimal notation. Each mainframe and traditional supercomputer brand had its own format too.
@TheJaguar1983 5 years ago
When I started using Pascal, my programs would crash when I enabled floating point. Took me ages to realise that my 386 didn't have an FPU. I was probably about 10 at the time.
@avi12 5 years ago
I love geeky videos like this one!
@nietschecrossout550 5 years ago
IEEE 754, float128: Is there a way to chain together two doubles (float64) in order to emulate a float with a 104-bit mantissa?
@nietschecrossout550 5 years ago
I guess that a double-double would be faster than an [Intel] long double or the GCC __float128 implementation, as there is actual hardware support for 64-bit floats
@peterjohnson9438 5 years ago
There's some hardware with support for 128 bit float, but it isn't a standard feature, and you can't really force a vector unit to treat two 64 bit floats as a single value due to the bit allocation patterns being physically wired into the hardware. You're better off rethinking your algorithm. [edit: standardized -> a standard feature to avoid confusion.]
@nietschecrossout550 5 years ago
@@peterjohnson9438 Even if emulating something close to a 128-bit float would require 4 or more double operations, it would still be faster than all sw implementations, plus it mostly puts load on the FPUs instead of being a generic load. Therefore it seems to me (with my limited knowledge) that a hw-based implementation is more desirable than a pure sw float128 according to IEEE 754. As far as I know, 128-bit FPUs are very very scarce and expensive, and therefore mostly undesirable, because - unless you're running a purpose-built data center - you're not going to use f128 very often. That makes a solution using multiple f64s even more desirable. I don't know what such an implementation would look like, though it will have multiple buffer f64s for sure. edit: replaced >>most>all
@ABaumstumpf 5 years ago
quad precision in hardware is really rare, as it is hardly ever used, and using the compiler-specific implementations is sufficient for most scenarios. And you will not manage to get better general performance with self-made constructs - they are already based on what the hardware can deliver.
@sundhaug92 5 years ago
@@peterjohnson9438 128 bit float is standardized, it's part of IEEE 754, it's just not common in consumer hardware
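For what it's worth, the usual building block for such "double-double" tricks is Knuth's TwoSum, which recovers the exact rounding error of an addition (a sketch; it relies on strict IEEE semantics, so flags like -ffast-math will break it):

    /* Returns s = fl(a + b) in *s and the exact error a + b - s in *e. */
    void two_sum(double a, double b, double *s, double *e) {
        double sum = a + b;
        double bv  = sum - a;       /* the part of b that made it into sum */
        double av  = sum - bv;      /* the part of a that made it into sum */
        *e = (a - av) + (b - bv);   /* what was rounded away */
        *s = sum;
    }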
@Jaksary 5 years ago
PLEASE, can you look at a program like the spinning cube in more detail? On another channel maybe? I'm interested in the details, thanks! :)
@Gengh13 5 years ago
You should start using the YouTube feature to link previous videos (the i in the corner), it's handy.
@johnfrancisdoe1563 5 years ago
Genghisnico13 Links in description are much better. In-video links have been abused to death by VEVO.
@Gengh13 5 years ago
@@johnfrancisdoe1563 either works for me, unfortunately none are present.
@kooky216 5 years ago
2:51 back when the camera would shoot a right-handed writer from the left side, the good old days ;)
@GogiRegion 5 years ago
I don't think I've ever had the problem where an equality test didn't work with floating point numbers, because I've never actually used one in an actual program. I'm curious what kind of program would actually use that.
@visualdragon 5 years ago
Assume you have a vessel of some sort that you know holds x units of something and you have a program for monitoring the level in that vessel. You now start to fill that vessel and every time you add 1 unit you check to see if the current level equals the max level. It is possible when using floats or doubles that the test of maxCapacity == currentLevel will fail even when they are "equal" and then there's gonna be a big mess and somebody is going to get fired. :)
@hymnsfordisco 5 years ago
So does this mean the smallest possible positive number, 1*2^-127, would then have the same representation as 0? That seems like a very nice way to move the problem number to the least important possible value (at least in terms of making it distinct from 0)
1 year ago
By definition, a nonzero normal number can only go down to 2^-126, so 2^-127 is not representable as a normal. On FPUs that allow denormals (x86 does), it would be converted to one (Wikipedia has an article on subnormals); otherwise an underflow exception would be raised and the number would be rounded to zero.
@dipi71 5 years ago
13:01 if you declare your main function to return an int, you should actually return an integer value. Just saying.
@_tonypacheco 5 years ago
Doesn't matter, it'll return 0 by default, which indicates a successful run anyway
@dipi71 5 years ago
@@_tonypacheco It's a bad default from C's early days, it works only for main(), and compiler flags like the highly recommended »-Wall« will warn you. Just return your state properly.
@9999rav 5 years ago
@@dipi71 in C++ return is not needed in main(). And you will get no warnings, as it is defined in the standard
@ExplicableCashew 5 years ago
Today I realized that 42 is "Lololo" in binary
@KnakuanaRka 2 years ago
65 = 5*13 = 101 x D = lol xD
@EllipticGeometry 5 years ago
I wouldn’t say floating point is any more lossy than fixed point or an integer. They all have their own way to lose precision and overflow. If you use arbitrary-precision math, you can get really unwieldy numbers or even be forced to use symbolic representations if you want something like sin(1) to be exact. It’s really about choosing a representation that suits your needs. By the way, floating point is excellent in 3D graphics. Positions need to be more precise the closer they are to the camera, because the projection magnifies them. Floating point is ideal for storing that difference from the camera. I suspect the rasterization hardware in modern GPUs lives on the boundary between fixed and floating point, with things like shared exponents to get the most useful combination of properties.
@deckluck372 5 years ago
At the end "maybe I should have done sixteen bit numbers." 😂
@davesextraneousinformation9807 5 years ago
Oh, I almost forgot! I wanted to ask how computers calculate ridiculously long numbers like Pi to millions of decimal places. How do they do that? Of course I want to know the inverse of that too: how do they calculate all those primes and stuff that are so huge? That sounds like a great Computerphile or Numberphile topic.
@quintrankid8045 5 years ago
Search for Arbitrary Precision Arithmetic and/or bignum.
@johnfrancisdoe1563 5 years ago
Richard Vaughn For special jobs like digits of Pi, there are algorithms that don't need a billion digits type.
@davesextraneousinformation9807 5 years ago
Thanks for the info, guys!
@jakeshomer1990 5 years ago
Can we get your code for this program?? Great video btw!!!
@Architector_4 5 years ago
The whole program except the last closing curly bracket is visible at 12:36, you can just write it out from your screen
@jakeshomer1990 5 years ago
@Icar-us Sorry I meant the code for the spinning cube
@brahmcdude685 3 years ago
Just a thought: why not place a second top camera looking straight down onto the written paper?
@gravity4606 5 years ago
I like 2^4 power notation as well. easier to read.
@egonkirchof 1 month ago
What if they represented it with fractions and only generated a decimal number when needed for printing it?
@g33xzi11a 1 month ago
Yes. This is a tactic we use sometimes, where we store the numerator and denominator separately and only combine them later. The problem is that division is very, very slow in computers relative to multiplication and addition.

It's worth saying that what you're thinking of is not a decimal, it's a decimal fraction. It's already a fraction. The denominator of that fraction is always known given the length of the numerator, so we don't need to write the denominator. In base 10 these fraction denominators are powers of ten. In base 2 these fraction denominators are powers of 2. Binary numbers containing a fractional component are not decimals, even if you separate the whole-number component from the fractional component with your preferred indicator of that separation (like a period/point/full stop). It's still binary. It's just a binary fraction.

The reason you sometimes see decimal fractions called "a decimal" in shorthand is that in English we had a long history of computation on fractions using geometry rather than a positional number system like decimal, and the number symbols we use are older than our use of decimal positional math. So for many English speakers the idea of numbers or fractions was distinct from positional numbering systems like decimal, and one of the most interesting concepts to them would have been the ability to represent a fractional component in-line with the whole-number component, hence conflating decimal with decimal fraction like you are. But no, decimal fractions are just fractions with a couple of conveniences baked in.
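A minimal version of the numerator/denominator tactic looks like this (a sketch only; real code also needs gcd reduction and overflow checks):

    /* Exact rational arithmetic: divide only when you finally need a decimal. */
    typedef struct { long long num, den; } rat;

    rat rat_mul(rat a, rat b) { return (rat){ a.num * b.num, a.den * b.den }; }
    rat rat_add(rat a, rat b) { return (rat){ a.num * b.den + b.num * a.den, a.den * b.den }; }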
@AliAbbas-of2vq 5 years ago
The guy resembles a young Philip Seymour Hoffman.
@sebastianpochert4511 5 years ago
Indeed. I thought the same.
@MarkStead 5 years ago
Yeah, I used fixed point when coding 3D graphics on a Z80.
@ryanbmd7988 5 years ago
Could a 1040ST get an FPU upgrade? What magic does he use to get the Atari video onto an LCD?!?
@DanielDupriest 5 years ago
6:37 I've never seen a video that was corrected for skew before!
@johnfrancisdoe1563 5 years ago
Daniel Dupriest The wobbling is the actual paper being wobbly because someone stored it folded sideways.
@benjaminbrady2385 5 years ago
8:43 when you prove someone wrong
@Concentrum 5 years ago
what is this editing sorcery at 7:19?
@wmd5645 5 years ago
I had to dig back into these topics recently and Java was really being difficult. Digital image processing class using Java... no unsigned int directly supported unless cast, and the values still weren't 0-255 - they kept coming out as 0-127. Tried to use char like in C but it proved to be difficult. The .raw image file had a max pixel value less than 255, I think around 230ish. Anyone know why using Java's toUnsigned conversions on int/short/long still wanted to truncate my pixel values?
@thepi4587 5 years ago
It wouldn't be Java's byte being signed by default tripping you up, would it?
@wmd5645 5 years ago
@@thepi4587 Could be, because that's how I've read in the file. But I've cast the values after. Still the same result.
@thepi4587 5 years ago
I don't really use Java myself, I just remember reading about this same problem before at one point. I think the answer ended up being to just not use bytes and stick with a larger primitive that could handle 0-255 instead.
@willriley9316 2 years ago
It is difficult to follow your explanation because the camera keeps switching around. It would be helpful to maintain a visual perspective throughout your verbal explanation.
@OlafDoschke 5 years ago
2^(-53)+2^(-53)+1 vs 2^(-53)+1+2^(-53)
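Spelled out in C (the hex-float literal 0x1p-53 is C99; both results assume IEEE 754 doubles with round-to-nearest-even):

    #include <stdio.h>

    int main(void) {
        double eps = 0x1p-53;                   /* 2^-53, half an ulp of 1.0 */
        printf("%.17g\n", (eps + eps) + 1.0);   /* 1.0000000000000002 */
        printf("%.17g\n", (eps + 1.0) + eps);   /* 1: each eps alone rounds away */
        return 0;
    }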
@charlescox290 5 years ago
I cannot believe a college professor just assigned an int to a float without a cast. That's a big no-no, and I'm surprised GCC didn't pop a warning. Or do you ignore your warnings?
@pcuser80 5 years ago
My slow 8088 PC with an 8087 was much faster with Lotus 123 than an 80286 AT PC.
@ataksnajpera 5 years ago
than, not then, German.
@sundhaug92 5 years ago
x87 is kinda interesting, because while it supports IEEE 754 doubles, it actually uses a custom 80-bit format internally
@gordonrichardson2972 5 years ago
Yeah, mainly for rounding and transcendental functions, to limit inaccuracy. Studied that way back in the 1980s.
@pcuser80 5 years ago
@@ataksnajpera corrected
@Cashman9111 5 years ago
6:25 wohohohooo!!!... that was... quick
@VADemon 5 years ago
Original video title: "Floating Point Processors - Computerphile"
@aparnabalamurugan4444 3 years ago
Why is 127 added? I don't get it.
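The bias lets a single unsigned 8-bit field hold both negative and positive exponents: what's stored is the true exponent plus 127, and decoding subtracts it back. A quick worked example:

    0.15625 = 1.01 (binary) x 2^-3
    stored exponent = -3 + 127 = 124 = 0111 1100 (binary)
    decoding reverses it: 124 - 127 = -3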
@chaoslab 5 years ago
Thanks! :-)
@dendritedigital2430 2 years ago
I don't know why computers can't sort this out on a basic level. Either leave it in fractional form or use a number system that has the factors you are using. Behind the scenes you could have a base like 16 * 9 * 49 * 11 * 13 = 1009008 and get an integer value that is exactly right. It would be close to the metric system for computers (1024). Example: 200 / 3 = 200 * 1009008 / 3 = 67267200 / 1009008 = 66.666666666666...
@g33xzi11a 1 month ago
Computers can't sort this out on a basic level because they are binary as a physical constraint. Transistors are designed to be powered on or off and nothing else, for a variety of practical reasons related to electrical engineering, manufacturing and fabrication, and an at-this-point deeply entrenched system of programming built on the assumption of binary through every single level of abstraction.

To do what you're suggesting, we would need a large number of transistors, each of which has some prime number of variable states it can be in (exactly one at any given time), able to jump exactly between states by the time they are next observed, without accidentally being detected on the way to the state we hope to see. This would then need to be coordinated with other logic modules that have no idea what's going on and are probably still using base two, because outside of specialized hardware used for this (and maybe some cryptographic processes) these super-specialized, ultra-expensive transistors would be useless for the general purposes of the computer.

Meanwhile, a floating point number in binary can be added and multiplied using general-purpose logical adders and multipliers that also work on binary integers; there's no conversion layer to put it back in a form every other part of the system agrees on. All of this also ignores that division and finding prime factors are notoriously slow algorithms in any number base, compared to addition and multiplication, which are very fast. There's just no reason at all to do what you're suggesting at a fundamental level.

There are specialized code libraries for handling numbers that need to be precise, and these usually end up doing something like storing the explicit numerator and normally-implicit denominator separately, then performing the math only on whole numbers until you need to finalize, at which point they do the slow division we were otherwise trying to avoid. Even these libraries eventually make cutoffs for rounding, though, because they don't have infinite space and would break immediately if given an irrational number like pi to calculate in earnest.
@Jtretta 5 years ago
0:15 And then you have AMD CPU cores that share a single FPU between two execution units and still call the arrangement two full cores. What a silly idea, in my opinion; in addition to their generally subpar IPC, the FPU had to be shared by the two "cores", which reduced performance.
@johnfrancisdoe1563 5 years ago
Jtretta Maybe some of their IPC loss was from not being as reckless with speculative execution as Intel.
@halistinejenkins5289 5 years ago
When I see the English version of Ric Flair in the thumbnails, I click
@RayanMADAO 1 year ago
I don't understand why the float couldn't add 1
@angeldude101 7 months ago
Try doing 1.000 * 10^6 + 1. Expanded, it becomes 1 000 000 + 1 = 1 000 001. Convert to scientific notation: 1.000001 * 10^6. However, we only have finite space to store values, so we have to round to only 4 significant digits like we started with, giving 1.000 * 10^6... Wait, didn't we just add 1? Where'd the 1 go‽ The exact same place as the 1 that got lost when adding to a float that's too big: it got rounded off.
@cmdlp4178 5 years ago
On the topic of floating point numbers, there should be a video on the inverse square root hack in the Quake source code. And I would like to see videos about other bit hacks.
@rallokkcaz 5 years ago
cmdLP The problem with the Quake rsqrt is that it's actually almost unknown who wrote it, and most resources describing how it was designed are also just guessing for the most part. Thank you, miscellaneous SGI developer, for that wonderful solution to one of the most complex problems in computational mathematics.
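For reference, the hack itself is only a few lines (roughly the widely circulated Quake III version, original comments removed; the magic constant produces a good first guess that one Newton-Raphson step then refines):

    float Q_rsqrt(float number) {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y  = number;
        i  = *(long *)&y;             /* reinterpret the float's bits as an integer */
        i  = 0x5f3759df - (i >> 1);   /* the famous magic constant */
        y  = *(float *)&i;
        y  = y * (threehalfs - (x2 * y * y));  /* one Newton-Raphson refinement */
        return y;
    }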
5 years ago
"Popular auction site beginning with the letter e" Which could it be? :D
@grainfrizz 5 years ago
0.1st
@justinjustin7224 5 years ago
Dustin Boyd No, they’re obviously saying they’re 1/16 of the way to the first comment.
@okboing 3 years ago
One way you can find out what number size your computer uses (32-bit, 64-bit) is to type 16777216 + 0.125 into your calculator. If it returns 16777216 without a decimal, your machine uses 32-bit; otherwise the equation will return 16777216.125 and your machine will be 64-bit
@peNdantry 3 years ago
Non sequitur. Your facts are uncoordinated. Sterilise! Sterilise!
@okboing 3 years ago
@@peNdantry huh
@peNdantry 3 years ago
@@okboing No fair! You edited it! Cheater cheater pumpkin eater!
@okboing 3 years ago
@@peNdantry if I wasn't posta edit it there wouldnt be an edit button
@peNdantry 3 years ago
@@okboing I have no clue what you're saying
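For what it's worth, the test a few comments up distinguishes whether the calculator computes in 32-bit floats (24 mantissa bits, so 2^24 = 16777216 swallows the 0.125) or 64-bit doubles - not the machine's word size. In C:

    #include <stdio.h>

    int main(void) {
        float  f = 16777216.0f + 0.125f;   /* float spacing at 2^24 is 2.0 */
        double d = 16777216.0  + 0.125;    /* double has bits to spare here */
        printf("%.3f\n", (double)f);       /* 16777216.000 */
        printf("%.3f\n", d);               /* 16777216.125 */
        return 0;
    }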
@danieljensen2626 5 years ago
Probably worth mentioning that even in modern systems fixed point is still faster if you can use it. Real time digital signal processing often still uses fixed point if it's operating at really high sample rates.
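A typical flavour of that, sketched in C (Q15 is one common DSP convention, where a 16-bit integer holds the value times 32768):

    #include <stdint.h>

    typedef int16_t q15_t;   /* value = raw / 32768, range [-1, 1) */

    /* Multiply two Q15 values: widen to 32 bits, then shift the
       radix point back down -- pure integer ops, no FPU required. */
    static inline q15_t q15_mul(q15_t a, q15_t b) {
        return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
    }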
@shifter65 5 years ago
Is the fixed point done in software or is it supported by hardware?
@danieljensen2626 5 years ago
@@shifter65 Hardware. Doing it just with software wouldn't be any faster, but with hardware support you save several steps with each operation because you don't need to worry about exponents, just straight binary addition. Many digital signal processing boards don't even support floating point.
@shifter65 5 years ago
@@danieljensen2626 I was wondering with regards to CPUs (sorry for not being clear). The previous comment mentions that modern CPUs use fixed point for some processes. Is there a hardware equivalent to the FPU in modern CPUs to do these tasks? For DSP I would imagine since the hardware is custom that the fixed point would be incorporated, but curious about general purpose computers.
@danieljensen2626 5 years ago
@@shifter65 Ah, yeah, I don't actually know, but my guess would be yes.
@MrGencyExit64 5 years ago
@@tripplefives1402 Floating-point also has additional exceptional cases. Division by zero is undefined in integer math, but usually well-defined by anything calling itself floating-point.
@brahmcdude685 3 years ago
Also: please make sure the sharpie has ink :(
@BurnabyAlex 5 years ago
Google says Alpha Centauri is 4.132 × 10^19 mm away
@silkwesir1444 5 years ago
42 + 23
@VincentRiviere 5 years ago
Which GCC version did you use to compile your cube program?
@Goodvvine 5 years ago
ha, the final clip 🤣
@lawrencedoliveiro9104 5 years ago
12:48 C-language trivia question: are the parentheses in “(1
@yesgood1357 3 years ago
You really should do a proper animation of what is going on. I don't really get what he is saying.
@GH-oi2jf 5 years ago
He gets off to a bad start. The alternative to “floating point” is not “integer” but “fixed point.” It would be better if he got that right at the beginning.
@amalirfan 2 years ago
I love binary math, never made me do 14 x 7, decimal math is hard.
@HPD1171 5 years ago
Why did you use *((int*)&y) instead of just using a union with a float and an int type?
@frankynakamoto2308 5 years ago
Why not have memory inside the cores, with floating point conversions already installed in them, so it uses the cache to do fewer conversions? It would read data much better.
@frankynakamoto2308 5 years ago
@@tripplefives1402 "The CPU does not convert floating point numbers as they are already stored in that format" - why do you think I wrote the feedback? The CPU does not convert floating points because it is not designed to convert floating points; that is why they need to make more sophisticated CPUs that can store floating point values and also process floating point values, so they can be much more accurate and not require more software for it. It is just a waste of time to be making software for what the CPU can be designed to do even better. You, not very smart man, don't understand what other people write.
@frankynakamoto2308 5 years ago
@@tripplefives1402 It is more efficient for hardware to perform floating point and additional tasks than to rely more on software. Why make more software that is not needed when they can integrate it into the hardware, just to save a couple of bucks? That makes no sense. Nowadays hardware technology is very inexpensive, to the point where they can integrate more functionality and rely less on making software for what the hardware can do. It doesn't make sense, because even if you have the software handle it, he said it himself that it is far superior for the hardware to handle it.
@lawrencedoliveiro9104 5 years ago
5:22 It’s just easier to say “two to the sixteen”.
@trueriver1950 5 years ago
COBOL (a business language) had fixed-point numbers that were not integers, and used them for money. So a pound or dollar could be stored as a value with a fixed 2 decimals of fractional part. Unlike spreadsheets nowadays, you could do calculations on money values and get exact answers without rounding errors. More exactly, by specifying the number type you knew exactly how rounding would be applied at each step in the calculation. We lost that somewhere along the way....
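The same discipline is easy to impose in any language with integers (a sketch in C; real money code also has to pin down the rounding rule for division):

    #include <stdio.h>

    int main(void) {
        long long price = 1999;          /* $19.99 kept as integer cents */
        long long total = 3 * price;     /* exactly 5997, no drift */
        printf("$%lld.%02lld\n", total / 100, total % 100);  /* $59.97 */
        return 0;
    }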
@SimGunther 5 years ago
Compilers + Computerphile == comprehensible video >>> False Can we please just have ONE compiler video series that matches the calibre of the rest of the catalogue? PLEASE???
@EliA-mm7ul 5 years ago
This guy is stuck in the 80s
@tsunghan_yu 5 years ago
Can someone explain *((int*)&y) ?
@Simon8162 5 years ago
It casts the address of `y` to an int pointer. Remember y is a float, so by creating an int pointer to it you end up treating it like an int. Presumably int and float are the same size on that machine. Then the int pointer is dereferenced. The value of the int will be the IEEE representation of the float, which is passed to printf.
@tiarkrezar 5 years ago
He wanted to see the actual bit pattern of the float as it's stored in memory, this is a roundabout way to cast the float to an int without doing any type conversion because C doesn't offer an easy way to do that otherwise. Just doing (int) y would give a different result.
@jecelassumpcaojr890 5 years ago
@@tiarkrezar the proper C way to see the bits without conversion is a union, which is like a struct but the different components take up the same space instead of being stored one after the other.
@tiarkrezar 5 years ago
@@jecelassumpcaojr890 Yes, but it's still kind of awkward to define a throwaway union for a one time use like this, that's why the pointer mangling way comes in handy. For this exact reason, I kind of wish printf had a "raw data" format specifier that would just print out the actual contents of the memory without worrying about types.
@Vinxian1 5 years ago
&y gets the memory address (pointer) of the float. (int*) tells the compiler that it should treat &y as a pointer to an integer. The final * says: fetch the value stored in this memory location. So *(int*)&y gives you an integer whose value corresponds to the binary representation of the float. This is helpful if you need to store a float in EEPROM, or, like in this video, if you want to print the hexadecimal representation with "%X"
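Put side by side (a sketch; the union and memcpy forms are the ones usually preferred over the raw pointer cast, which formally violates C's aliasing rules):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float y = 1.0f;

        unsigned int via_cast = *((unsigned int *)&y);   /* the video's approach */

        union { float f; uint32_t u; } pun;              /* the union approach */
        pun.f = y;

        uint32_t via_memcpy;                             /* the well-defined approach */
        memcpy(&via_memcpy, &y, sizeof via_memcpy);

        /* All three print 3F800000 on an IEEE 754 machine. */
        printf("%08X %08X %08X\n", via_cast, (unsigned)pun.u, (unsigned)via_memcpy);
        return 0;
    }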
@elimalinsky7069 5 years ago
But why not use 64-bit floating point numbers? Isn't that how you solve the problem of some precision being lost?
@lawrencedoliveiro9104 5 years ago
We already are using 64 bits.
@angeldude101 7 months ago
Now you're using twice as much memory and extra computation just to push the issue back a little. You don't _solve_ anything. You only made it so you could get away without solving it for longer. You also gained several orders of magnitude more dynamic range than you're probably ever going to use.
@TheDuckofDoom. 5 years ago
Only a 4x speed increase? Certainly significant and worthwhile for a few special tasks, but in the big picture it hardly seems worth the extra hardware. I was expecting something over an order of magnitude, considering the difference between interpreted Python and pre-compiled C++ is almost two orders.
@hymnsfordisco 5 years ago
That's really not a fair comparison if you expect similar results. The way Python implements floating point math relies on whatever the underlying machine implementation is, so the time it takes Python to add 2 floats together is technically the same as any C++ program on the same machine; it's just slowed down by the many other operations Python does to set up the add, because of the way the language is structured to allow more versatility
@frankharr9466 5 years ago
Oh, this reminds me of when I made my app.
@martinbakker7615 5 years ago
What's he saying at 0:08? "bloody point pressures"???
@luckyluckydog123 5 years ago
and one of the things we talked about was "floating point processors"
@nakitumizajashi4047 5 years ago
1.0 != 1
@aopstoar4842 5 years ago
He said "auction site" and it makes me so proud. What a scientist and objectivist! I do not like floating point because it is not exact - it is secretly rounding things off in the background. 0,1 is not 0,1 but something else. There are several talks on this site and elsewhere addressing this obvious fallacy in the computer world. Bad computers, very bad computers.
@lawrencedoliveiro9104 5 years ago
Feel free to come up with a solution.
1 year ago
Floating-point is designed for engineering purposes, and works really well for that. As you know, engineers round all the time. But if you want to track money precisely, you typically use either 1) integers, in units of cents or whatever you need, or 2) BCD (binary coded decimal) numbers. Some languages have builtin support of that kind of thing, for example C# has a type called "decimal", and I think I've heard COBOL also has something. Such numbers are mostly implemented in software and are thus typically slower than FP, but can be useful if you need to track base-10 decimals exactly, without unexpected rounding.
@davidho1258 4 years ago
simple program = 3d spinning cube :/
@kimanih617 5 years ago
Too early, snoozing
@umeng2002 5 years ago
Yeah, but how many RTX ops can it do?