Floating Point Numbers (Part2: Fp Addition) - Computerphile

53,942 views

Computerphile

5 years ago

Continuation of Dr Bagley's explanation of Floating Point Numbers: • Floating Point Numbers...
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 70
@StefanH 5 years ago
I'm quite surprised there is no video on regular expressions yet. Would love one about the history of it and why it is so cryptic
@jamegumb7298 4 years ago
There are different types of regex. Do not forget about that part.
@amkhrjee 1 year ago
The animations in every Computerphile video are one of the most underrated yet most important components. They make the explanations much easier to grasp by visually showing what the speaker wants to convey. On this video especially, the animations were spot on!
@robspiess 5 years ago
I find the easiest way to learn about floating point is via 8-bit floating point. While impractical for actual use, it's helpful to be able to actually see the whole domain. There's a PDF by Dr. William T. Verts which lists a value for each of the 256 combinations.
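A minimal Python sketch of that exercise (the decode_minifloat helper and the 1-4-3 sign/exponent/mantissa split with a bias of 7 are illustrative assumptions; Verts' PDF may use a different layout):

```python
# Enumerate all 256 values of a toy 8-bit float.
# Assumed layout: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits.

def decode_minifloat(byte):
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF   # 4-bit exponent field
    frac = byte & 0x7         # 3-bit mantissa field
    bias = 7
    if exp == 0:              # zero and subnormals: no implied leading 1
        return sign * (frac / 8) * 2.0 ** (1 - bias)
    if exp == 0xF:            # all-ones exponent: infinities and NaNs
        return sign * float('inf') if frac == 0 else float('nan')
    return sign * (1 + frac / 8) * 2.0 ** (exp - bias)

for b in range(256):
    print(f"{b:08b} -> {decode_minifloat(b)}")
```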
@cmscoby 5 years ago
Thank you for explicitly covering this topic. Better than anything else I've found online.
@V1ruzZW2G 5 years ago
Did anyone notice that he wrote the first two 0s on the table at 3:35 :D
@kaustubhmurumkar2670 5 years ago
Computerphile is so underrated!!
@thiswasleft27 5 years ago
Very informative! Thank you for this explanation.
@cacheman 5 years ago
I can't remember if they've done one on radix sorting, but understanding the representational bit-pattern of floats is very helpful for sorting them with that family of algorithms.
@antonf.9278 5 months ago
Radix sort is designed around integers but positive floats have the same ordering as ints and can therefore be treated as such for sorting purposes.
@cacheman 5 months ago
@@antonf.9278 Only one of the two major classes of radix sorts, Least-Significant-Digit/Bit, could correctly be classified as "designed around integers". I'm not sure what to make of the rest of your comment; you're only confirming the usefulness of understanding the bit-representation, because without this knowledge, you would not be able to prove your assertion... except by exhaustive testing I guess.
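A quick Python check of the assertion, without exhaustive testing: round the inputs to float32 so equal bit patterns mean equal values, then compare sorting by value against sorting by bit pattern (a sketch; to_f32 and float_bits are ad-hoc helpers):

```python
import random
import struct

def to_f32(x):
    # Round a Python float to the nearest 32-bit float value.
    return struct.unpack('<f', struct.pack('<f', x))[0]

def float_bits(x):
    # Reinterpret the float's 32-bit pattern as an unsigned integer.
    return struct.unpack('<I', struct.pack('<f', x))[0]

xs = [to_f32(random.uniform(0.0, 1e6)) for _ in range(1000)]
# For positive floats, numeric order and bit-pattern order agree.
assert sorted(xs) == sorted(xs, key=float_bits)
print("bit-pattern order matches numeric order for positive floats")
```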
@valuedhumanoid6574 5 years ago
These guys are so good at explaining things to us not-so-smart people. Well done mate.
@BGBTech 5 years ago
FWIW: I did an FPU for an experimental CPU core I was working on (targeting an FPGA). It normally works with Double, but only has an ~64-bit intermediate mantissa (for FADD), and this was mostly because the FADD unit was also being used for Int64->Float conversion (reusing the same normalizer; otherwise it could have been narrower). The rest of the bits just "fell off the bottom". Similar goes for FMUL, which only produced a 54-bit intermediate result (with a little bit-twiddling mostly to fix up rounding). Similarly: FDIV was done in software; rounding was hard-coded; it used "denormal as zero" behavior; ... Most of this was to make it more affordable (albeit not strictly IEEE conformant; most code wouldn't notice).
@lisamariefan 5 years ago
The explanation is nice and explains why floats are as coarse as they are.
@RossMcgowanMaths 2 years ago
Fascinating subject. I have simulated 32-bit floating point addition, subtraction and multiplication in Excel VBA, then built the 'circuits' in Logisim. Implementing rounding, subnormals and special values, then testing, is quite involved and can really waste a lot of time. I chased 1s and 0s for months. My coding skills are basic, but I got things working well (I think?). Comprehending it mathematically first is the way forward.
@matsv201 5 years ago
There have been quite a few processors historically where the FPU cheated, not having the full 48 bits needed but going for something much smaller, say 36 or 38 bits, and rounding off the last ones. People who wrote software, especially in the 90s, had to be very careful with this and not trust it too much. This was also one reason why 64-bit became very popular: even if you do cheat, it becomes more accurate anyway. Sadly, it is quite common to this day for software developers to use 64 bits when it's really not needed. This is especially problematic with GPU acceleration, since some cards emulate 64-bit and run at much less than half speed. Also worth saying: 16-bit floating point is actually quite a bit more accurate than people think, and twice as fast on most modern CPUs and some modern GPUs. There even exist 8-bit floating points, four times as fast. While they are really inaccurate and have a very slim range, when they can be used the performance gain is huge.
@luelou8464 5 years ago
Surely you'd be better off with fixed point for 8-bit values.
@godarklight 5 years ago
I'm probably wrong, but isn't the hardware FPU for x64 a fixed size bigger than a 64-bit double (like 80 or 96 bits or something)? Someone once tried to tell me that 32-bit floats had to be cast and were slower than native 64-bit float stuff.
@tanveerhasan2382 5 years ago
@@godarklight Regarding the last part, maybe that's why most programming languages treat decimals as 64-bit double precision instead of single precision by default? Because, as you said, using single precision is actually detrimental to performance?
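A small demonstration of float16's slim range and coarseness, assuming NumPy is available (the printed values are standard IEEE 754 half-precision properties):

```python
import numpy as np

print(np.finfo(np.float16).max)          # 65504.0: top of the slim range
print(np.finfo(np.float16).eps)          # ~0.000977: spacing just above 1.0
print(np.float16(2048) + np.float16(1))  # 2048.0: the added 1 is lost entirely
```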
@Adamarla 5 years ago
You did the "Double Dabble" video to explain going from bit representation to a string. Could you do a video explaining how to do it for floating point?
@alen7648 5 years ago
Can you do a video about rounding and rounding errors?
@ryananderson8817 5 years ago
My machine organization class is doing this as an assignment right now. Thank you!
@PoluxYT 5 years ago
Machine organization is a neat name for the subject. Mine is called "Computer organization".
@JaccovanSchaik 5 years ago
Multiplication isn't really simpler for floats though, because multiplying the mantissas for floats is pretty much the same as multiplying two integers. It's just that the extra step (adding the exponents) is almost trivial.
@Para199x 5 years ago
I think the point was that it was (at least conceptually) simpler than addition of floats, not that multiplying floats is easier than multiplying integers.
@RossMcgowanMaths 2 years ago
All floating point operations are conceptually simple for simple cases, but add in subnormal numbers, rounding and special cases, then testing for errors, and you will soon understand the complexities.
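A toy sketch of that conceptual difference, with numbers held as (significand, exponent) pairs and rounding, signs, subnormals and special cases all ignored: multiplication needs no alignment step, while addition must line the operands up first.

```python
# Values mean significand * 2**exponent.

def fmul(a, b):
    # Multiply significands, add exponents -- no alignment needed.
    return (a[0] * b[0], a[1] + b[1])

def fadd(a, b):
    # Align the smaller operand to the larger exponent, then add.
    if a[1] < b[1]:
        a, b = b, a
    shifted = b[0] / 2 ** (a[1] - b[1])
    return (a[0] + shifted, a[1])

print(fmul((1.5, 2), (1.25, -1)))  # (1.875, 1)   == 3.75
print(fadd((1.5, 2), (1.25, -1)))  # (1.65625, 2) == 6.625
```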
@enantiodromia 5 years ago
Amazing... A Computerphile video that uses pen and paper to visualize addition, and not a nice CGI... In the year 2019...
@JakubH 5 years ago
What about infinities and NaNs? Will there be another video?
@kc9scott 5 years ago
Yes, something on that topic would be interesting. While from the standpoint of using FP numbers, Inf and NaN are really nice to have, I imagine that they add a lot of special-case checking to the FP implementation.
@totlyepic 5 years ago
They're just reserved bit patterns. For anything you reserve like that, you just have to build special checks into the hardware.
@EebstertheGreat 5 years ago
Having literal + and - infinities is nice for improper integration in R. At least, I think the infinities there are IEEE 754 infinities.
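The reserved patterns are easy to see by unpacking the fields of a 32-bit float (a quick sketch; show is an ad-hoc helper):

```python
import struct

def show(x):
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    print(f"{x}: sign={sign} exponent={exp:08b} mantissa={frac:023b}")

show(float('inf'))  # exponent all 1s, mantissa all 0s
show(float('nan'))  # exponent all 1s, mantissa nonzero
show(1.0)           # exponent 01111111 (127, the bias), mantissa all 0s
```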
@U014B 5 years ago
2:23 Don't tell Numberphile you said that!
@brahmcdude685 3 years ago
Even more great stuff. How can I thank you????????????
@TheTwick 5 years ago
Noob question: when they measure FLOPS (on a computer) are they performing additions, or subtractions, or multiplications...?
5 years ago
That doesn't really matter, since those operations are often considered as taking one cycle (at least on x86 when considering vector instructions). For example you can do 1 FLOP (addition/multiplication) or 2 FLOPs per cycle (FMA - fused multiply add) - times the width of the vector unit times the number of execution ports times the number of cores etc.
@lotrbuilders5041 5 years ago
I think they expect a normal distribution for some type of program
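A back-of-envelope version of that peak-FLOPS arithmetic; every figure below is a made-up illustrative value, not a measurement of any real chip:

```python
cores = 8
clock_hz = 3.0e9
vector_lanes = 8     # e.g. 8 float32 lanes in a 256-bit vector unit
fma_ports = 2        # execution ports that can issue an FMA each cycle
flops_per_fma = 2    # a fused multiply-add counts as two FLOPs

peak = cores * clock_hz * vector_lanes * fma_ports * flops_per_fma
print(f"{peak / 1e9:.0f} GFLOPS theoretical peak")  # 768 GFLOPS
```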
@Bibibosh 5 years ago
This is mentally stimulating! PART 3!!!! PART 3!!!! PART 3 and i dont pee!
@eLBehmo 2 years ago
Can you please continue this series with decimal floating point math (IEEE 754-2008)? You would be the first, for sure ;)
@YouPlague 5 years ago
Why do you need a 48-bit register for addition? The result is 24 bits anyway and you only preserve the most significant bits, so the lower ones will always get discarded, won't they? So why do the actual addition of those? I would presume you only shift to make the exponents the same, then add two 24-bit registers together, then normalize and you're done with the mantissa.
@zombiedude347 5 years ago
You only need to do a "48-bit" calculation if the addition turns into a subtraction. However, you can't just discard the rest of the bits, as they are required for rounding. You shift, keeping all the bits, then add the first 24 bits as normal (the rest would be unnecessarily added to zero). Re-normalize if needed. Then you keep the first 25 bits, replacing the rest with a 26th bit equal to zero if they are all zero, 1 otherwise. Assuming the most common rounding (ties to even), you then check the 24th, 25th, and 26th bits and round away from zero if they are (0.11/1.10/1.11), rounding towards zero otherwise. If it weren't ties to even but ties always away, it would instead round away if the 3 bits were (0.10/0.11/1.10/1.11). In ceiling "rounding" (positive+positive) or floor "rounding" (negative+negative), you round away for (0.01/0.10/0.11/1.01/1.10/1.11). However, in truncate mode, ceiling mode (negative+negative), or floor mode (positive+positive), you don't round away for any combination of the bits.
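A simplified sketch of the idea: instead of keeping all the shifted-out bits, keep the first discarded bit plus an OR of the rest (the "sticky" bit), which is enough for round-to-nearest, ties-to-even (shift_right_rne is an ad-hoc helper; signs and the other rounding modes are omitted):

```python
def shift_right_rne(sig, n):
    # Shift an integer significand right by n bits, rounding to
    # nearest with ties to even.
    if n == 0:
        return sig
    kept = sig >> n
    round_bit = (sig >> (n - 1)) & 1             # first discarded bit
    sticky = (sig & ((1 << (n - 1)) - 1)) != 0   # OR of the rest
    if round_bit and (sticky or (kept & 1)):     # > half, or a tie to odd
        kept += 1                                # round up (away from zero)
    return kept

print(bin(shift_right_rne(0b10110, 2)))  # 101.10: a tie, rounds to even 0b110
print(bin(shift_right_rne(0b10101, 2)))  # 101.01: below half, stays 0b101
```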
@APaleDot 5 years ago
Why do they use an offset of 127 in the exponent to put 0 in the center of the range, rather than just storing an int8 using two's complement? Isn't the math for addition simpler using two's complement?
@Roxor128 5 years ago
Using the offset approach allows reserving bit patterns for 0, infinity, and not-a-numbers. Without a reserved pattern for 0, you can't store it due to the implied 1 before the radix point. Infinities and NaNs can be useful for figuring out if something went wrong. For IEEE 754 floats, an exponent field with all bits 0 and a mantissa of all bits 0 encodes zero. Note that I didn't mention any restrictions on the sign bit, which gives +0 and -0 values.
@fllthdcrb 5 years ago
Not sure, but as far as I understand, basically, it allows for the bit patterns to be lexicographically compared (as long as they're regular positive numbers). You wouldn't get that if the exponent is in two's-complement, as negative exponents would make the number appear greater under such a comparison method. Another nice thing about this is that zero gets encoded with all zeros: sign bit is 0, which makes it positive (yes, positive!); exponent is all 0s, which is interpreted as -∞; and significand (or "mantissa", as it's informally called) is all 0s, which means 1.000000.... Thus, you get 1.000000... × 2^-∞, which is +0 (strictly speaking, it's infinitesimal). A bit off-topic, but... flip the sign bit, and you get -0. The zeros in floating-point are signed for this reason, and if you ignore exceptions, dividing them into some positive, non-zero value yields an infinity of the same sign. EDIT: Also, the exponent bias doesn't really complicate addition. Remember that to line up the significands, you only need to shift one by the _difference_ between the exponents, which will be the same with or without a bias. Now, multiplication does have a slight complication here, as you need to add their true values together. But it's really a very small price to pay, in the scheme of things.
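A quick check of the monotonicity point: printing the bit patterns of increasing positive floats shows them increasing as unsigned integers, with 0.0 encoded as all zeros (bits is an ad-hoc helper; the second value is subnormal):

```python
import struct

def bits(x):
    return struct.unpack('<I', struct.pack('<f', x))[0]

for x in (0.0, 1e-40, 0.5, 0.99999994, 1.0, 1.0000001, 2.0):
    print(f"{x:<13} -> {bits(x):032b}")
# The bit patterns come out in increasing unsigned-integer order; with a
# two's-complement exponent, the values below 1.0 would compare as larger.
```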
@hrnekbezucha 5 years ago
Long live fixed point arithmetic
@elgalas 5 years ago
It's time for a React/Vue video... These are driving the web today!
@rogerbosman2126 5 years ago
No. This channel covers concepts and technologies, not frameworks. React/Vue is 100% not interesting in this regard; it's just a very popular (and great, tbh) implementation of known stuff.
@kc9scott 5 years ago
3:11 he says "shift this one place to the left", when he's really shifting it right.
@kc9scott 5 years ago
It looks like you could either shift the higher-exponent number left, or shift the lower-exponent number right, whichever is easier to implement.
@Tatiana-jt9hd 5 years ago
so this is numberphile’s sister...
@Bibibosh 5 years ago
please make an entire series about binary mathematics. ..... 010111001001100101010101 hahahaah what number did i write? ... now multiply it by 00101 !!!!!!!!! MIND BLOWN
@alexloktionoff6833 1 year ago
It's not so simple... In IEEE 754 all exponents are biased; moreover, exponents 0 and ~0 are reserved for special meanings. For addition and multiplication, the h/w /*or s/w ;)*/ must use additional bits to make the implicit 1 "explicit", one more for the carry, and then adjust the exponent while handling underflow/overflow corner cases. I can't imagine how this multi-step operation could be done in one cycle.
@damnstupidoldidiot8776 2 years ago
There's still rounding if the last bit that was shifted away is one.
@josepablogil4943 1 year ago
Didn't know Philip Seymour Hoffman was into computers.
@hyf4229 5 years ago
Actually the FADD operation is more complicated than what he says in the video. You must take care of denormalized numbers. Besides, adding two floating point numbers can lead to POSITIVE INF or NEGATIVE INF, and if you subtract +INF from +INF, you should generate a qNaN result. All these factors make the hardware that executes FADD really complicated and slow.
@RossMcgowanMaths 2 years ago
Agreed. Take simple addition and subtraction: the two numbers can each be positive or negative, giving 4 combinations; add or subtract gives you 8; and one being greater or less than the other gives you 16. Write a simple equation that takes care of all 16 combinations, giving the correct absolute value and correct sign. Then add in subnormals, rounding and special cases, then write a test script to check validity, simulate it in code, design and build it in CMOS layout, test... Not as simple as adding two numbers. You have to add and subtract every number while adhering to IEEE 754. And that's just adding and subtracting.
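Those special cases are easy to poke at from Python, whose floats are IEEE 754 doubles:

```python
inf = float('inf')
print(1e308 + 1e308)  # inf: the addition overflows to +infinity
print(inf + inf)      # inf: well-defined
print(inf - inf)      # nan: +INF minus +INF has no meaningful value
```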
@miroslavhoudek7085 5 years ago
Now go and implement it on TIS-100.
@miroslavhoudek7085 5 years ago
@@merrickryman4853 I can't beat like 50% of the content in that game :-o Really got me thinking about my skills gap.
@clonatul1 5 years ago
What about negative exponents?
@rwantare1 5 years ago
Before the exponent is encoded, a 127 offset is added (only for 32-bit floats), so 2^0's exponent gets stored as 127, and 2^-20 would get stored as 107. So you can have negative exponents all the way down to -126 (-127 + 127 = 0, which is reserved for zero).
@adriancruzat2711 5 years ago
In this particular 32-bit example, 127 is added to all exponents in order to allow for negative exponents. So for example, 2^1 would be represented as 128 -> '10000000', and 2^-1 as 127 + (-1) = 126 -> '01111110'. As for adding them together, the process is the same. Say you want to add 1.0 x 2^2 + 1.0 x 2^-1 (4 + 1/2). You would still shift the smaller number to the right by the difference in exponents (3 places), and you would add 1.0 x 2^2 + 0.001 x 2^2 = 1.001 x 2^2 (4.5).
@clonatul1 5 years ago
@@adriancruzat2711 Wouldn't it be easier just to use the 2nd bit as the sign for the exponent and have 7 bits to represent the value? It's basically the same thing without the offset.
@adriancruzat2711 5 years ago
@@clonatul1 In theory, yes, you could represent it using the second digit as a sign digit for the exponent (or even better, as two's complement to avoid having two values for 0 [-0 and +0]). However, as far as I can tell, the comparison between two floating point numbers is much easier when the exponent is encoded using the bias (offset).
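A worked Python version of the two replies above (encode_exp and decode_exp are ad-hoc helpers for the 8-bit biased exponent field):

```python
def encode_exp(e):
    return e + 127    # 2**1 -> 128, 2**-1 -> 126

def decode_exp(field):
    return field - 127

print(f"{encode_exp(1):08b}")   # 10000000
print(f"{encode_exp(-1):08b}")  # 01111110

# The worked example: 1.0 * 2**2 + 1.0 * 2**-1. Shift the smaller
# significand right by the exponent difference (3), then add.
diff = 2 - (-1)
result = (1.0 + 1.0 / 2 ** diff) * 2 ** 2
print(result)  # 4.5
```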
@matsv201 5 years ago
That is pretty much (sorta, not quite) how you do division in floating point.
@GilesBathgate 5 years ago
42!
@skuzzbunny 5 years ago
For all you "zero" aficionados out there.....!!!D
@eatpant1412 5 years ago
Wow I never knew multiplication was more complex than addition for floating-points.
@ais4185 5 years ago
*less complex ?
@eatpant1412 5 years ago
@@ais4185 Yes thanks I had a stroke