LOL! Man I learned that in college and never used it ever again. I never came across any scenarios in my professional career as a software engineer where knowing this was useful at all outside of our labs/homework.
Anyone got any example where this knowledge became useful?
In game dev it’s pretty common. Lots of stuff is built on floating point while balancing quality against performance, so we can’t just switch to double when things start getting janky, because we can’t afford the cost. Instead we actually have to think and work out the limits of 32-bit floats.
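To make "the limits of 32-bit floats" concrete, here's a minimal C++ sketch (my illustration, not from the comment above) showing how the gap between adjacent floats grows with magnitude, which is roughly why positions far from the world origin start to jitter:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // The gap between adjacent 32-bit floats (one ulp) grows with magnitude:
    // roughly 1.2e-7 near 1.0, but a full 1.0 near 10,000,000.
    std::printf("ulp near 1.0f: %g\n",
                std::nextafter(1.0f, 2.0f) - 1.0f);
    std::printf("ulp near 1e7f: %g\n",
                std::nextafter(10000000.0f, 2e7f) - 10000000.0f);

    // Consequence: small per-frame deltas simply vanish far from the origin,
    // which is why big game worlds re-center or partition their coordinates.
    float far_position = 10000000.0f;
    float step = 0.25f;  // e.g. movement per frame
    std::printf("1e7f + 0.25f = %.2f\n", far_position + step);  // still 10000000.00
    return 0;
}
```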
So do you have to remember how it's represented in the system, down to how the bits are used? Or do you just have to remember some general rules like "if you do that, it'll fuck up"?
How long has this career been? What languages? And in what industries? Knowing how floats are represented at the bit level is important for all sorts of things including serialization and math (that isn't accounting).
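For what it's worth, here's a small C++ sketch (mine, not the commenter's) of the bit-level view being referred to: pulling the sign, exponent, and mantissa out of an IEEE 754 single-precision float, the same layout you deal with when serializing floats by hand.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    // IEEE 754 single precision: 1 sign bit, 8 exponent bits (bias 127),
    // 23 mantissa bits.
    float value = -6.25f;

    std::uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);  // portable way to reinterpret the bytes

    unsigned sign     = bits >> 31;
    unsigned exponent = (bits >> 23) & 0xFFu;
    unsigned mantissa = bits & 0x7FFFFFu;

    std::printf("bits     = 0x%08X\n", (unsigned)bits);
    std::printf("sign     = %u\n", sign);
    std::printf("exponent = %u (unbiased %d)\n", exponent, (int)exponent - 127);
    std::printf("mantissa = 0x%06X\n", mantissa);
    return 0;
}
```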
More than a surface-level understanding isn't necessary. The level of detail in the meme is sufficient for 99.9% of jobs.
No, that's not just accounting; it's pretty much everyone who isn't working on very low-level libraries.
What is important for all sorts of things, in turn, is knowing how irrelevant most things are in most cases. The bit level doesn't matter if you're operating 20 layers above it, just as business-logic details don't matter if you're optimizing a general math library.
The vast majority of IT professionals don't work on emulation or even system kernels. Most of us are building simple applications, supporting those applications, or handling their deployment and maintenance.