I think your implementation of floor is wrong for negative numbers: it always rounds towards 0, whereas floor should round towards -inf. So, for example, floor(-18.2) should be -19, not -18 as your version gives. This is also what happens in Python, and what is shown on the Wikipedia page for IEEE-754.
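A minimal sketch of the difference, assuming a hypothetical Q16.16 format stored in int32_t (these names aren't from the video):

```c
#include <stdint.h>

#define FRAC_BITS 16
typedef int32_t fp_t;  /* hypothetical Q16.16 value */

/* Truncate: drop the fraction, rounding towards zero. */
fp_t fp_trunc(fp_t a) {
    return (a / (1 << FRAC_BITS)) * (1 << FRAC_BITS);
}

/* Floor: round towards -inf. In two's complement, clearing the
   fractional bits does exactly this, for negative values too. */
fp_t fp_floor(fp_t a) {
    return a & ~((fp_t)((1 << FRAC_BITS) - 1));
}

/* With a = -18.2 in Q16.16: fp_trunc(a) gives -18.0, fp_floor(a) gives -19.0. */
```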
@LowByteProductions6 ай бұрын
I looked it up, and you're right. I actually implemented truncate - which ironically is the thing I said I would implement, and then decided to call it floor instead (thinking they were interchangeable). Thanks for setting me straight, and proving that rounding is always more complex than you think :D
@Omnicypher0016 ай бұрын
@@LowByteProductions you don't need fixed point, you can just do all the math with integers and print a '.' wherever you want when you render the number on the screen.
@LowByteProductions6 ай бұрын
@Omnicypher001 you're describing base-10 fixed point. This video talks about base-2 (binary fixed point), which makes better use of the representation space, and is able to perform operations cheaply by taking advantage of the way computers work.
@warvinn6 ай бұрын
@@Omnicypher001 You'd think that would work, but it falls apart as soon as you encounter e.g. multiplication. Let's say you have your number 1000 that you print as 1.000, but now when you do 1.000*1.000 you get 1000*1000=1000000, which you would print as 1000.000. You could use a tuple to keep track of where the period needs to go, but at that point you're probably better off doing it like the video instead.
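A sketch of what that rescale looks like with plain integers, assuming a decimal scale of 1000 (names are made up): after a multiply the result carries the scale twice, so one scale has to be divided back out.

```c
#include <stdint.h>
#include <stdio.h>

#define SCALE 1000  /* three decimal digits of fraction */

/* The naive product of two scaled values is SCALE times too big. */
int64_t scaled_mul(int64_t a, int64_t b) {
    return (a * b) / SCALE;
}

int main(void) {
    int64_t one = 1 * SCALE;             /* 1000, printed as 1.000 */
    int64_t p   = scaled_mul(one, one);  /* 1000 again, not 1000000 */
    printf("%lld.%03lld\n", (long long)(p / SCALE), (long long)(p % SCALE));
    return 0;
}
```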
@amalirfan6 ай бұрын
@@warvinn yeah, it is hard to abstract. It still works for smaller-scale uses, for example getting percentages: you could do (x * p) / 100. You have to do the conversions manually, but it is a nice option.
@typedeaf6 ай бұрын
All your content is top rate. Love the low level stuff that we don't need to know, but can't sleep without knowing.
@Casilios6 ай бұрын
What timing: yesterday I decided to look into fixed point numbers because I was having some problems with my floating point rasterizer. This video is immensely helpful for getting a better understanding of fixed point numbers. I'm looking forward to learning about trig functions for this stuff.
@edgeeffect6 ай бұрын
This video is so good... taking high level concepts that we often think of as a simple, almost atomic, operation and breaking them down to the next lower level. I like to play with assembly language for very similar reasons.
@LowByteProductions6 ай бұрын
Exactly!
@mattymerr7016 ай бұрын
The most annoying thing to me is that IEEE 754 support in languages usually only ever covers the binary case, but decimal floating point, which is also covered by IEEE 754, is so much more useful even if it's slow. Things suck
@LowByteProductions6 ай бұрын
It probably would have been more successful as its own standard
@charlieking76006 ай бұрын
The worst part of floating point computations is that C and C++ don't provide exactly the same result on different hardware. It's crucial for scientific computation to have a consistent error margin. And any error can accumulate.
@mattymerr7016 ай бұрын
@@LowByteProductions I think you're very right
@mattymerr7016 ай бұрын
@@charlieking7600 afaik that is inherent to binary floating point and is governed by the machine epsilon, which isn't consistent. That's why decimal floating points are so useful - they don't have the same issues with error
@angeldude1016 ай бұрын
Decimal is a terrible base to work with and it's a shame that it's what most of the world uses. If you're going to argue that 1/(2*5) + 2/(2*5) ≠ 3/(2*5), then I can just say that 1/3 + 1/3 ≠ 2/3, because 1/3 = 0.3333, and 0.3333 + 0.3333 = 0.6666 ≠ 0.6667 = 2/3. If we hadn't standardized on base ten, we would still care a decent amount about thirds, but no one would care about tenths. Computers can't afford to use anything but the objectively simplest possible base, which is two. Inconsistent results come from IEEE writing the spec too loosely and implementers not bothering to make sure everything was accurate, instead calling what they got "good enough". That has nothing to do with floating point using binary.
@jamesq87443 ай бұрын
Fantastic video! Thanks so much for doing this!
@JobvanderZwan6 ай бұрын
You know what's also a surprisingly useful algorithm when dealing with fractions if all you have is integers? Bresenham's line algorithm! The whole "drawing a line" thing is a bit of a diversion from the true genius kernel of that algorithm: how to do error-free repeated addition of fractions, and only trigger an action every time you "cross" a whole-number boundary (in the canonical case: drawing a pixel). And all you need is three integers (an accumulator, a numerator and a denominator), integer addition, and an if-statement. Even the lowest-power hardware can do that!
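A sketch of that kernel on its own, stripped of the line drawing (hypothetical names, not from the video): add the numerator each step and trigger whenever the accumulator crosses the denominator.

```c
#include <stdio.h>

/* Repeatedly add the fraction num/den (num <= den) with no rounding error,
   firing an action each time the running total crosses a whole-number boundary. */
void stepped_fraction(int num, int den, int count) {
    int acc = 0;
    for (int i = 0; i < count; i++) {
        acc += num;
        if (acc >= den) {                 /* crossed a whole number */
            acc -= den;
            printf("step at i=%d\n", i);  /* e.g. move to the next pixel row */
        }
    }
}

int main(void) {
    stepped_fraction(3, 8, 16);  /* adds 3/8 sixteen times, fires exactly 6 times */
    return 0;
}
```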
@LowByteProductions6 ай бұрын
Ah yes, I've come across it before when building procedural generation for a roguelike!
@ArneChristianRosenfeldt6 ай бұрын
I have a hard time accepting that Bresenham is not just calculating with fractions as we learned in school. Probably because we did not learn to manually calculate with floats.
@Optimus61286 ай бұрын
Also nowadays you can easily do a non-Bresenham style with fixed point adds that performs as well if not slightly better. I was suspicious of those conditional jumps in the Bresenham on modern CPUs relying on branch prediction, and my fixed point implementation was easier to think about, so I used that instead. I would like to do a Bresenham again though, to compare performance between the two at some point.
@ArneChristianRosenfeldt6 ай бұрын
@@Optimus6128 I am stuck in the past. GBA or Jaguar. I don't get why Jaguar hardware uses fixed point for lines, while the later PS1 seems to use Bresenham for edges.
@Optimus61286 ай бұрын
@@ArneChristianRosenfeldt Bresenham could be good for some old hardware. Then there is the earlier thing everyone calls DDA, but there are good and bad implementations that all get called DDA, so I don't know. What I did, even on ARM hardware at the time (the GP32), was something I think people called DDA, but my version did one division at the beginning of the line, which I precalculated with a reciprocal fixed point MUL. The difference was that later, as I traversed each pixel, I was just doing an ADD and a SHIFT and nothing else. So the per-pixel traversal seemed to do less than Bresenham, just not beforehand.
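Not the commenter's actual code, but a hedged sketch of that kind of loop: one divide before the loop to get a fixed-point slope, then only an add and a shift per pixel.

```c
#include <stdint.h>
#include <stdio.h>

#define FRAC 16

/* Shallow line (dx > 0, dx >= dy >= 0): one division up front,
   then just add + shift per pixel. */
void dda_line(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0, dy = y1 - y0;
    int32_t slope = ((int32_t)dy << FRAC) / dx;   /* the single divide */
    int32_t yf    = (int32_t)y0 << FRAC;          /* y in Q16.16 */
    for (int x = x0; x <= x1; x++) {
        printf("pixel %d,%d\n", x, (int)(yf >> FRAC));  /* shift */
        yf += slope;                                    /* add */
    }
}
```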
@rogo73306 ай бұрын
struct timespec is a great example of a fixed-point integer number. You have tv_sec, which is just the time_t signed integer type, and tv_nsec, which is a signed long type whose only purpose is to represent values from 0 to a billion minus 1 (999,999,999) inclusive. With some helper functions you can do very robust and easy math if you treat tv_nsec as an accumulator that adds 1 to tv_sec when it overflows and subtracts 1 from tv_sec when it underflows. Easy, quick, no floats needed. Not all systems even have that kind of precision for timestamps, so nsec precision is good enough.
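A sketch of one such helper, assuming the POSIX struct timespec (the function name here is made up):

```c
#include <time.h>

#define NSEC_PER_SEC 1000000000L

/* Add two normalized timespecs, carrying tv_nsec overflow into tv_sec. */
struct timespec ts_add(struct timespec a, struct timespec b) {
    struct timespec r;
    r.tv_sec  = a.tv_sec  + b.tv_sec;
    r.tv_nsec = a.tv_nsec + b.tv_nsec;
    if (r.tv_nsec >= NSEC_PER_SEC) {   /* fractional part overflowed: carry */
        r.tv_nsec -= NSEC_PER_SEC;
        r.tv_sec  += 1;
    }
    return r;
}
```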
@beyondcatastrophe_6 ай бұрын
I think what would have been nice to mention is that floating point is essentially scientific notation, i.e. 12.34 is 1.234e1, just that floats use 2^n instead of 10^n for the exponent, which is where the scaling you mention comes from
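A tiny way to see that decomposition, using the standard frexpf (which splits a float into a fraction in [0.5, 1) times a power of two):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    int e;
    float m = frexpf(12.34f, &e);          /* 12.34 == m * 2^e */
    printf("12.34 = %f * 2^%d\n", m, e);   /* roughly 0.771250 * 2^4 */
    return 0;
}
```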
@LowByteProductions6 ай бұрын
Certainly - this is probably a lot clearer in the video I made about floating point a few years back. Though of course, part of what makes floats complex is the edge cases where that doesn't apply as smoothly: sub-normals/denormals, infinities, NaNs, etc
@aleksikuikka62716 ай бұрын
That's quite an important intuition. If you said that you calculated something in scientific notation with a fixed number of significant digits, nobody would think there's anything weird or arbitrary about it. There's also probably an argument to be made about the expected error in measurements of natural processes following a normal distribution, where the error is likely proportional to the scale of the mean. If you're measuring a big number, you probably expect the error to be similarly 'big'. The alternative hypothesis would be that the deviation gets smaller the bigger the scales you work with, so you'd expect the distribution to get thinner and shorter-tailed, which doesn't immediately seem like a natural assumption to me.

Software-engineering-wise, if your hardware has a floating-point unit, I don't think there's any universal argument for switching away from using your hardware to the fullest. If you don't know what you're doing with your fixed point numbers, you probably shouldn't be using them - in the best case you're just adding unnecessary complexity (e.g. working with strange engineering units, adding the logic and possible extra variables to do the calculations, etc.), assuming you don't outright lose precision or performance due to the implementation. Whereas if you do know what you're doing, and you have specific requirements to work with where fixed point just satisfies those requirements better, then by definition you probably should be using it.
@JamesPerkins6 ай бұрын
One nice thing is that fixed point arithmetic gives you exactly the same result on every computer architecture, but floating point often does not... because floating point implementations make different choices with the least significant bits of number representation... not so much during simple arithmetic operations but definitely for reciprocal, trig, exponent and log. Sometimes close is not enough and exact identical results are more useful. Also, sometimes the generality of floating point requires more CPU cycles than equivalent fixed point operations....
@ArneChristianRosenfeldt6 ай бұрын
This is not true anymore, because all modern CPUs expect you to use 64-bit float vectors following IEEE 754. Only legacy code on the vintage 8087 uses 80 bits. Even MAC has been defined down to the last bit since 2001 or so. And why would transcendental functions on fixed point not be implemented using Taylor series?
@JamesPerkins6 ай бұрын
@ArneChristianRosenfeldt Just saying, Ingenic X2100 MIPS, ARM Cortex-A53 and Intel Xeon give slightly different floating point behavior for 32-bit floating point. I do SIMD computer vision algorithm acceleration and those floating point units do not compute exactly the same results under all circumstances.
@ArneChristianRosenfeldt6 ай бұрын
@@JamesPerkins and this is not due to the compiler? It should not reorder floating point instructions using algebraic identities, though. Java used to save floats to memory on the 8087 to force compliant rounding. If that does not achieve the result, why does the option even exist? Isn't it generally accepted that source code needs to compile bit-precisely so that tampering can be detected, and that calculations need to run bit-precisely to allow replays in cross-platform games (and client-side calculations which match the server side, unless someone cheated)? Do those processors claim to do IEEE floats? The spec on rounding is already so long. It not only considers reproducibility between CPUs, but even best possible results if someone stores intermediate results as decimals.
@JamesPerkins6 ай бұрын
@ArneChristianRosenfeldt These are all IEEE 754 32-bit floating point implementations. There are two ISAs I write to: the scalar floating point register ISA (traditional) and the vector SIMD. There are small differences in the least significant bits on certain operations. For the scalars, there are also some optional instructions implemented in more exact/slower and less exact/faster forms. Not all rounding modes are available on all architectures (especially in the embedded architectures - replicating everything Intel does is a huge amount of additional gates). As long as I stick to the most exact and slower scalar instructions and common rounding methods, I'm usually within a least significant bit or two of exactly the same results on all architectures. When you go into the SIMD ISAs (SSE2, NEON, MSA), floating point acts generally similar, but the integer-to-float and back conversions, rounding mode limitations, and incomplete (but faster, fewer-gate) implementations creep in and start to make the results diverge more significantly. Which brings me back to my point: if you write code using fixed point arithmetic and standard integer operations, it's quite easy to write code which creates bit-for-bit identical results down to the smallest bit, as the integer operations are more consistently defined across the architectures. But it's also a lot more work, requires more careful optimization, and some operations are significantly slower. SSE is scary fast (clock for clock). Intel must throw a huge amount of gates at that general floating point hardware that MIPS and ARM can't afford. It's quite a luxury.
@ArneChristianRosenfeldt6 ай бұрын
@@JamesPerkins Oh, that long video about rounding. Ah yeah, the argument was about a final conversion to decimal, but the rounding itself had to happen on every division(?) float to float. Ah, no it does not. I guess I have to read up on that. I thought that floating point units do round-to-even on the mantissa. That may be difficult for division, because I think one algorithm goes from the significant bits down to the less significant ones and then back up. But still, we only need one more bit for rounding. For integers we just truncate. Would be nice to have this mode for all float units. I thought that floats give up normalization for small numbers so they don't have to do too many special operations.
@j.r.81766 ай бұрын
Instantly subscribed!
@caruccio6 ай бұрын
Really entertaining video. Thanks!
@0x1EGEN6 ай бұрын
Personally I loved how easy it is to do fixed point maths using integers. Floats are a complicated format and either need a lot of code to emulate in software or a lot of silicon to do in hardware. But for fixed point, all you need is an ALU :)
@edgeeffect6 ай бұрын
Nice that you did this in 32-bit... I've been looking for a "nice" 32-bit fixed-point implementation for a long time... I have this idea of building a synthesizer on a network of PIC32s... and floating point, ain't nobody got time for that! ... I had in mind to do this in Zig... because then I could use `comptime` to turn my human readable constants into my chosen fixed-point format. But this is entirely an armchair theoretical project at the moment.
@LowByteProductions6 ай бұрын
Do it! It sounds like an awesome project. (And I love Zig by the way - I have to find a way to get it into the channel soon)
@edgeeffect6 ай бұрын
@@LowByteProductions I'm thinking, though, that in the end I may have to stick to C++ just so that I can have operator overloading... to be able to write my expressions in a "nicer" format.
@benholroyd5221Ай бұрын
This is outside my wheelhouse, but don't you just need to represent a value between 1 and -1? (Or 0???) So can't you just represent 1 as int max and -1 as int min? I'm just thinking back from the output (PWM or DAC), which isn't going to be floating point.
@Burgo3616 ай бұрын
This was really interesting I might actually try implementing it myself for a bit of fun.
@fresnik6 ай бұрын
Not that there's an error in the code, but at 1:05:00 it looks like you accidentally replaced the fp_ceil function, so the test case for fp_ceil for whole numbers is actually never calling fp_ceil(), just converting a float to fp and back again.
@LowByteProductions6 ай бұрын
🤦♂️
@luczeiler23176 ай бұрын
Awesome. Subscription well earned!
@graydhd8688Күн бұрын
Holy crap, it's just Zeno's paradox lol. Every bit to the right gets halfway closer to the final accurate result, then halfway closer with the next shift, then halfway again, only we are limited by the number of bits available.
@spacedoctor5620Ай бұрын
For mul and div, if we wanted a 64 bit implementation are we out of luck since we would need to upscale to a 128 bit int? For context, I'm running a physics simulation with gravity and collisions, and if you use 64 bit ints you can get down to millimeter resolution with a max distance of nearly a light year (10^15 meters). 32 bit ints would crush this max distance all the way down to a mere million meters instead (not even as big as Mercury).
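Not necessarily out of luck: on 64-bit targets, GCC and Clang expose a 128-bit integer extension (__int128) that's enough for the intermediate product. A hedged sketch assuming a Q32.32 format (names made up; MSVC would need its own 128-bit multiply intrinsics instead):

```c
#include <stdint.h>

#define FRAC_BITS 32
typedef int64_t fp64_t;  /* hypothetical Q32.32 value */

/* Widen to 128 bits so the full product survives before rescaling. */
fp64_t fp64_mul(fp64_t a, fp64_t b) {
    __int128 p = (__int128)a * (__int128)b;
    return (fp64_t)(p >> FRAC_BITS);
}

/* Pre-shift the numerator into 128 bits before dividing. */
fp64_t fp64_div(fp64_t a, fp64_t b) {
    __int128 n = (__int128)a << FRAC_BITS;
    return (fp64_t)(n / b);
}
```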
@kilwo6 ай бұрын
Also, fp_floor for positive numbers is just fp_floor(a+ Half) and negative is Fp_floor(a-Half)
@DMWatchesYoutube6 ай бұрын
Any thoughts on posits?
@argbatargbat86456 ай бұрын
What about a video on tips/tricks on how to avoid the floating point issues when doing calculations?
@LowByteProductions6 ай бұрын
Besides the obvious ones (be careful with things like divisions by zero, passing invalid out-of-range values to functions like asinf, etc), I'd say the main thing is being aware of, and careful with, the fact that the smallest possible step between values changes as you move through the range of floating point numbers. For very large numbers, there are relatively few representable values between each integer. Adding a very tiny number to a very large one can result in no change at all. Edit: just noticed you asked for a video. Maybe one day!
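A tiny illustration of that last point with plain C floats: at 1e8 the gap between neighbouring float values is 8, so adding 1 does nothing.

```c
#include <stdio.h>

int main(void) {
    float big = 100000000.0f;     /* 1e8: neighbouring floats are 8 apart here */
    float sum = big + 1.0f;       /* 1 is smaller than the gap, so it vanishes */
    printf("%d\n", sum == big);   /* prints 1 */
    return 0;
}
```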
@aalawneh914 ай бұрын
Thanks for the video. A question: what if my hardware doesn't support floating point operations - how would I do the conversion between float and fixed point? In your case you multiply the float by the scale to convert.
@aalawneh914 ай бұрын
Do we just rely on the software emulation shipped with the compilers, e.g. gcc?
@davidjohnston42406 ай бұрын
I've implemented plenty of fixed point arithmetic in DSP data paths in wireless communication chips.
@LowByteProductions6 ай бұрын
I'd love to hear more! Was this on custom ASICs?
@davidjohnston42406 ай бұрын
@@LowByteProductions Yes. Usually wireless modems for bluetooth and wifi and arcana like HiperLAN. The modems used fixed point arithmetic for things like MLSE algorithms. Given a range of inputs from the DACs, you can compute the number of bits of precision that is needed to represent all the information to the end of the computation. Make the fixed point integer and fractional parts that big and you can do the computation with no loss. That was in the past. I've moved on to cryptography, which mostly deals with finite field arithmetic, so it doesn't use fixed point. The implementations use integers (representing powers of polynomials in extension fields of GF(2)), but the security analysis uses huge floating point values (e.g. 4096 digits) in order to measure tiny biases in bit probabilities. Fixed point, floating point, GF, rationals or integers - use what the application calls for.
@terohannula306 ай бұрын
Haven't watched the whole video yet, but at 43:30, shouldn't argument "a" be converted to the xl type first, and then shifted? Edit: ah good, it got fixed pretty soon in the video 😄
@ligius36 ай бұрын
You can do sin/cos with your library, but you already know this, just being a bit pedantic. It's the Taylor expansion but it's quite compute-heavy. You can do it without division by using some precomputed polynomials. And there's the preferred way, which you will probably present next. Hopefully it's not lookup tables :)
@LowByteProductions6 ай бұрын
Yep, taylor works well in a lot of cases, though because of the factorial divisors, you end up having to deal with either really big or really small numbers. In a 32 bit integer, to get at least 4 terms, you need to dedicate 19 fractional bits. That's fine in many cases, but if your bit division is more middle of the road, a 1KiB quarter wave lookup table with linear interpolation can get you better results with less computation. The method I'm covering next is CORDIC, which is lesser used in the micro world these days because memory and multiplies are relatively cheap and available, but it works on just adds and shifts and has great precision.
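A hedged sketch of that lookup-table approach: a quarter-wave sine table (roughly 1KiB of Q16 entries) with linear interpolation, taking the angle as a Q16 fraction of a full turn. All names and format choices here are assumptions, not the video's code.

```c
#include <stdint.h>
#include <math.h>

#define LUT_SIZE 256
#define PI 3.14159265358979323846

/* Quarter-wave sine table in Q16, plus a guard entry for interpolation. */
static int32_t sin_lut[LUT_SIZE + 2];

void sin_lut_init(void) {
    for (int i = 0; i <= LUT_SIZE; i++)
        sin_lut[i] = (int32_t)lround(sin((PI / 2) * i / LUT_SIZE) * 65536.0);
    sin_lut[LUT_SIZE + 1] = sin_lut[LUT_SIZE];
}

/* angle: fraction of a full turn in Q16 (0..65535 maps to 0..2*pi).
   Returns sin(angle) in Q16. */
int32_t fp_sin(uint16_t angle) {
    unsigned quadrant = angle >> 14;        /* which quarter of the circle */
    unsigned pos      = angle & 0x3FFF;     /* position within that quarter */
    if (quadrant & 1) pos = 0x4000 - pos;   /* odd quarters run backwards */
    unsigned idx  = pos >> 6;               /* 8-bit table index */
    unsigned frac = pos & 0x3F;             /* 6-bit interpolation fraction */
    int32_t a = sin_lut[idx], b = sin_lut[idx + 1];
    int32_t s = a + (((b - a) * (int32_t)frac) >> 6);   /* linear interpolation */
    return (quadrant & 2) ? -s : s;         /* second half of the turn is negative */
}
```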
@kilwo6 ай бұрын
In fp_ceil, why use the fp_frac function? Wouldn't it be quicker to just AND with the frac mask and check if the value is greater than 0? Given that we don't actually use the value, just the presence of any set bit would be enough to know it's got a fractional part.
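Something like this, as a hedged sketch (FRAC_BITS and FRAC_MASK are stand-ins for whatever the video's macros are actually called):

```c
#include <stdint.h>

#define FRAC_BITS 16
#define FRAC_MASK ((int32_t)((1 << FRAC_BITS) - 1))
typedef int32_t fp_t;

/* Ceil: floor by clearing the fraction, then bump one whole unit if any
   fractional bit was set - the AND alone answers "is there a fraction?". */
fp_t fp_ceil(fp_t a) {
    fp_t floored = a & ~FRAC_MASK;
    return (a & FRAC_MASK) ? floored + (1 << FRAC_BITS) : floored;
}
```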
@Blubb3rbub6 ай бұрын
Would it be worth it to make those functions and macros branch free? Or does the compiler do it already? Is it even possible? Or not worth it?
@LowByteProductions6 ай бұрын
It certainly could be! It depends on the intensity of the workload, and the environment you're running on. Many micros don't have sophisticated branch prediction, so you wouldn't expect to lose too much perf to speculative execution. And of course the branching code is not in vastly different regions, and would likely be in cache either way - so no expected latency there. But the key is always to measure! Intuition is often wrong about these kinds of things.
@skilz80986 ай бұрын
This is a really nice demonstration by example, and it has great utility. However, there is one vital part of any mathematical or arithmetic library, especially one evaluated within the integer domain, and that is integer division with respect to its remainder, as opposed to just the division itself. No such library is complete without the ability to perform the modulus operation. Not all, but many languages use % to represent this operation. It would be nice to see a follow-up video extending this library to include such a common operation. Even though the modulus operator is fairly considered an elementary or basic operation, its implementation is involved enough that it would almost warrant its own separate video.

Why do I mention this? It's quite simple. If one wants to use this as an underlying math library and wants to extend it into other domains - such as evaluating trigonometric functions like sine, cosine and tangent, exponential functions such as e^n, or even logarithmic functions, as well as other number systems such as various vector spaces, particularly but not limited to the complex numbers - then having the modulus operator well defined and operational between two operands is vital for implementing most other complex types. In simple terms, the modulus operator (%) is just as important as the other operators such as +, -, *, /, ^, and root (exp, rad). And this is just the arithmetic half; there is still the logical half of operators. Other than that, great video!
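For what it's worth, as long as both operands share the same fixed-point format, the raw integer remainder already is the fixed-point remainder, so a hedged sketch (hypothetical names) can be very small:

```c
#include <stdint.h>

typedef int32_t fp_t;  /* any QM.N format, shared by both operands */

/* Because a and b carry the same scale factor, the scale cancels out of the
   remainder just as it does for addition and subtraction. C's % truncates
   towards zero, which matches fmod()'s sign convention. */
fp_t fp_mod(fp_t a, fp_t b) {
    return a % b;
}
```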
@faust-cr3jk6 ай бұрын
When you use fixed point, usually your main objective is keeping your resolution (quantization step) as small as possible. Therefore dedicating a large number of bits to the integer part seems wrong to me. What I usually do is dedicate one bit to the sign (if any), one bit to the integer part, and all remaining bits to the fractional part. To do so, you need to normalise all values first. Furthermore, I found that 16 bits for the fractional part is more than enough. This is why fixed point in FPGAs typically uses 18 bits.
@doodocina6 ай бұрын
1:21:26 the compiler does this automatically, lol...
@markrosenthal91086 ай бұрын
Yes, decimal arithmetic is essential for exact arithmetic. But... Instead of the extra code for scaled integers or decimal data types in custom or provided libraries, you can just do this: 01 WS-ORDER-TOTAL PIC 9(4)V99 VALUE 40.50. ADD 1.50 TO WS-ORDER-TOTAL Still used in critical systems today and introduced in 1960. So understandable that even an auditor can check it. :-)
@LowByteProductions6 ай бұрын
Awesome! How do I implement digital signal processing on top of this 😁
@markrosenthal91086 ай бұрын
@@LowByteProductions Assuming that floating point is "good enough" for signal processing: 01 WS-FREQUENCY-AVERAGE-CHANGE VALUE 40.50 COMP-2. 🙂
@johncochran84976 ай бұрын
The issues with floating point vs fixed point are quite simple. Floating point - Why the hell are you looking at those digits, you ought to damn well know that format doesn't support that many significant digits. Fixed point - Why the hell are you looking at those digits, you ought to damn well know that your data doesn't justify that many significant digits. To illustrate, the vast majority of numbers you manipulate on a computer are actually approximations of some other non-representable exact value. Fixed point suffers from what's called "false precision". To illustrate, I'll calculate the circumference of a circle with a diameter of 123. I'll do it twice. Once with a fixed point decimal format with 5 integer digits and 5 fractional digits. Again, with a floating point format with 8 mantissa digits and an exponent from -49 to 49. So we have PI * 123. Let's see what happens:

Fixed point: 123 * 3.14159 = 386.41557
Floating point: 123 * 3.1415927 = 386.41590
Actual value to 10 digits: 386.4158964

The thing to notice is that the fixed point value's last 2 digits are WRONG. They are wrong even though the multiplication occurred with no rounding or overflow. The reason for the error is, as I said earlier, that most numbers manipulated by computers are approximations of some other non-representable exact value. In this case, the approximation for pi only had 6 significant figures and as such, you can't expect more than 6 figures in the result to be correct. For the floating point case, the approximation for pi had 8 significant figures and as such, its result is correct to 8 places. False precision is a definite problem with fixed point math. And it's a rather insidious problem, since the actual mathematical operations are frequently done with no overflow or rounding. But you can't trust your results for any more digits than the smallest number of digits used for your inputs or for any intermediate results. With floating point, the number of significant digits remains relatively constant.
@ashelkby6 ай бұрын
Actually 10011100 is -100 in two's complement representation.
@LowByteProductions6 ай бұрын
Ah you're right, not sure what happened there
@misterkite6 ай бұрын
The quickest way I use to explain fixed point is instead of $4.20, you have 420 cents.. it's obvious those are the same even though 4.2 != 420
@LowByteProductions6 ай бұрын
Yes, base 10 fixed point is really intuitive!
@notnullnotvoid6 ай бұрын
Surprisingly, integer multiplication and division are generally slower than floating point multiplication and division on modern x86/x64 CPUs! I have no idea why as I'm not a hardware guy, I just spend too much time reading instruction tables.
@ethandavis73106 ай бұрын
Fewer bits to multiply in a float; the exponents are just added.
@LowByteProductions6 ай бұрын
Not sure I'd be able to say why either, but it could have something to do with there being quite a lot more floating point arithmetic stages in the CPU pipeline of a modern processor than there are integer ops 🤔
@Optimus61286 ай бұрын
Casey Muratori was recently asked about this in a Q&A. Someone asked why, even if the bit widths are the same between, let's say, a 32-bit integer and a float, there are differences in cycles. Casey replied that he is not a hardware expert so he doesn't know for sure, but he said it could be that different CPUs dedicate more or less wafer space to the integer or floating point units - like it's a business decision where they decide what to cut and where to dedicate more circuitry.
@rolandzfolyfe83606 ай бұрын
1:27:20 been there, done that
@weicco6 ай бұрын
Old trick. Don't use decimals; multiply the value so you get rid of the decimals. Works for weight and money calculations at least.
@LowByteProductions6 ай бұрын
Absolutely - way older than floating point and much more deterministic
@weicco6 ай бұрын
Of course there is a downside to it. In bookkeeping and banking software we want to use at least 6 decimals, so a 32-bit number runs out of range quite fast. Luckily almost everyone has 64-bit machines these days, so this isn't an issue anymore.
@Girugi6 ай бұрын
That trick only works until you do any real math, so it's not really a solid solution for anything but very simple stuff. 0.001 * 0.5 = 0.0005, but with a scale of 10000 the raw values are 10 and 5000, and 10 * 5000 = 50000, which reads back as 5, not 0.0005, unless you divide by the scale after the multiply.
@LowByteProductions6 ай бұрын
All the complex systems built on DSPs or FPGAs would beg to differ(radar, rockets, phased arrays, etc)
@Girugi6 ай бұрын
@@LowByteProductions well, true, you just need to apply the division by the decimal offset of one of the factors after every multiplication. But if you divide by a value like this, you then have to multiply by the decimal offset to keep it in sync... Not sure if that would hold up in all cases, and it would be very easy to run out of bits.
@_bxffour6 ай бұрын
🎉
@flameofthephoenix83956 ай бұрын
0:05 Sometimes? What is that supposed to mean? Fixed point is always better 100% of the time.
@benholroyd5221Ай бұрын
Fixed point is better at best 99.00099999978% of the time, if for no other reason than that for very small or large numbers, like the percentage above, floating point can represent them better.
@flameofthephoenix8395Ай бұрын
@@benholroyd5221 Nah... You made that number up just now, doesn't actually exist. Kind of like that PI conspiracy theory.
@MrMadzina6 ай бұрын
for fp_abs why not just return abs of a? return abs(a); seems to work fine in C#: public FixedPoint Abs() { return new FixedPoint(Math.Abs(Value)); }
@LowByteProductions6 ай бұрын
Nice! The reason I didn't use it in the video is that this implementation allows the user to provide the integer type. The C library absolute value functions are type-dependent, so it would go against that aspect.
@MrMadzina6 ай бұрын
Also in C# Math.Floor(-18.2f) returns -19
@施凱翔-n3y6 ай бұрын
😀
@Matt20106 ай бұрын
For FFT, floating point is way better; be prepared to wait a lot longer with fixed point.
@LowByteProductions6 ай бұрын
What about the FFT algorithm would make floating point intrinsically faster?
@redoktopus30476 ай бұрын
One day we'll get hardware support for posits and then all of this will be solved
@LowByteProductions6 ай бұрын
I'm definitely no expert on posits, but from a hardware point of view, I think they'd be at least as complex as floats. I could be totally off base though
@redoktopus30476 ай бұрын
@@LowByteProductions they would be complicated for sure, but I think they'd be slightly simpler than floats. Their use for programming is where I think their potential is, though. Right now they can only be simulated in software, so they are slow. Floats are something I hope we move past in the next 10 years.
@benholroyd5221Ай бұрын
@@LowByteProductions "off base" - intended pun?
@StarryNightSky5876 ай бұрын
IEEE 754 entered the chat
@panjak3236 ай бұрын
True heroes use fractions and or binary coded decimal 😅
@LowByteProductions6 ай бұрын
🫡
@sjswitzer16 ай бұрын
Slide rules
@LowByteProductions6 ай бұрын
I love playgrounds as much as the next guy, but what does it have to do with fixed point math?
@sjswitzer16 ай бұрын
@@LowByteProductions with slide rules you maintain the decimal point implicitly (in your mind), much as the binary point is implicit in fixed-point math.