LOL! Man I learned that in college and never used it ever again. I never came across any scenarios in my professional career as a software engineer where knowing this was useful at all outside of our labs/homework.
Anyone got any example where this knowledge became useful?
I agree we don't generally need to think about the technical details. It's just good to be aware of the exponent and mantissa parts to better understand where the inaccuracies of floating point numbers come from.
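For instance, here's a quick way to peek at those parts (a rough sketch in Python, using the standard struct module to look at a 32-bit float's bits):

```python
import struct

# Pack 0.1 as a 32-bit IEEE 754 float and pull the sign, exponent and mantissa bits apart.
bits = struct.unpack(">I", struct.pack(">f", 0.1))[0]
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF   # 8 bits, stored with a bias of 127
mantissa = bits & 0x7FFFFF       # 23 bits, with an implicit leading 1

print(f"sign={sign}, exponent={exponent - 127}, mantissa bits={mantissa:023b}")
# 0.1 has no exact binary representation, which is where the usual inaccuracies come from.
```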
In game dev it's pretty common. Lots of stuff is built on floating point, and we're constantly balancing quality against performance, so we can't just switch to double when things start getting janky; we can't afford the cost. Instead we actually have to sit down and work out the limits of 32-bit floats.
So do you have to remember how it's represented in the system, i.e. how the bits are used? Or do you just remember some general rules like "if you do that, it'll fuck up"?
Well, rules like "all integers can be represented exactly up to 2^24" and "around 10^-38 is where denormalisation happens" are helpful, but I often have to work out why I got a dodgy value from first principles; floating point is too complicated to solve every problem with three general rules.
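For example, a quick demo of both rules (using NumPy's float32 here, since plain Python floats are doubles):

```python
import numpy as np

# 2**24 = 16,777,216 is the last point where float32 holds every integer exactly.
limit = np.float32(2**24)
print(limit + np.float32(1))                  # 16777216.0 - the +1 is silently lost
print(np.float32(2**24 - 1) + np.float32(1))  # 16777216.0 - still exact just below the limit
print(np.finfo(np.float32).tiny)              # ~1.1755e-38, below this you're into denormals
```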
I wrote a float-from-string function once, which obviously requires the details (an intentionally low-quality and limited but faster variant, since the standard version was way too slow).
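Not the actual code, but the rough shape of such a limited variant is something like this (a sketch only: plain decimal input, no exponents, no validation, and the real thing would be in a compiled language):

```python
def fast_atof(s: str) -> float:
    """Very limited float-from-string: sign, digits, optional fraction only.
    No exponents, no inf/nan, no error checking - that's where the speed comes from."""
    i, sign = 0, 1.0
    if s and s[0] in "+-":
        sign = -1.0 if s[0] == "-" else 1.0
        i = 1
    value = 0.0
    while i < len(s) and s[i].isdigit():       # integer part
        value = value * 10.0 + (ord(s[i]) - ord("0"))
        i += 1
    if i < len(s) and s[i] == ".":             # optional fractional part
        i += 1
        scale = 0.1
        while i < len(s) and s[i].isdigit():
            value += (ord(s[i]) - ord("0")) * scale
            scale *= 0.1
            i += 1
    return sign * value
```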
Eh, if you use doubles and you add 0.000314 (just over 0.03 cents) to ten billion dollars, you end up with an error of about 1/10000 of a cent, and that's a deliberately perverse transaction. It's not ideal, but it's not the disaster waiting to happen that single precision is.
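Quick sanity check (Python floats are IEEE 754 doubles; the numbers are the ones from this example):

```python
balance = 10_000_000_000.0          # ten billion dollars, exactly representable as a double
deposit = 0.000314                  # just over 0.03 cents

stored = (balance + deposit) - balance
print(stored)                        # 0.0003147125244140625 - nearest value a double can hold there
print(abs(stored - deposit) * 100)   # error in cents: roughly 7e-05, i.e. under 1/10000 of a cent
```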
So if you handle different precisions, do you also need to store the precision/exponent explicitly for every value? Or would you sanitise this at input and throw an exception if someone wants more precision than the program is made for?
It depends on the requirements of your app and what programming language you use. Sometimes you can get away with using a fixed precision that you can assume everywhere, but most common programming languages will have some implementation of a decimal type with variable precision if needed, so you won't need to implement it on your own outside of university exercises.
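For instance, Python ships decimal.Decimal in the standard library (just one illustration; other languages have BigDecimal and similar):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50                 # bump the working precision if a use case needs it
price = Decimal("0.000314")            # built from a string so it stays exact
total = Decimal("10000000000") + price
print(total)                           # 10000000000.000314 - no rounding error
```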
Okay, thank you. I was wondering because for stuff like buying electricity, gas, or certain resources, parts, etc., there are prices with more precision than whole cents, but the precision wouldn't be identical across all use cases in a large company.
No exponent, or at least a common fixed exponent. The technique is called "fixed point", as opposed to "floating point". The rationale is to always have a known level of precision.
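A minimal sketch of the idea in Python, storing money as an integer count of the smallest unit you care about (1/100 of a cent here, which is an arbitrary choice):

```python
# Fixed point: every amount is an integer number of "ticks"; here 1 tick = 1/100 of a cent.
TICKS_PER_DOLLAR = 10_000

balance = 10_000_000_000 * TICKS_PER_DOLLAR   # ten billion dollars, exactly
balance += 3                                  # add 0.03 cents, the smallest amount we track

dollars, rem = divmod(balance, TICKS_PER_DOLLAR)
print(f"{dollars}.{rem:04d} dollars")         # 10000000000.0003 dollars - pure integer maths, no rounding
```

If an input asks for more precision than one tick, you round or reject it up front, which is the "sanitise at input" option mentioned above.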
How long has this career been? What languages? And in what industries? Knowing how floats are represented at the bit level is important for all sorts of things including serialization and math (that isn't accounting).
More than a surface-level understanding is not necessary. The level of detail in the meme is sufficient for 99.9% of jobs.
No, it's not just accounting; it's pretty much everyone who isn't working on very low-level libraries.
What in turn is important for all sorts of things is knowing how irrelevant most things are in most cases. The bit level is not important if you're operating 20 layers above it, just as business-logic details are not important if you're optimizing a general math library.
The vast majority of IT professionals don't work on emulation or even system kernels. Most of us are building simple applications, supporting those applications, or handling their deployment and maintenance.