I've wondered why programming languages don't include accurate fractions as part of their standard utils. I don't mind calling dc, but I wish I didn't need to write a bash script to pipe the output of dc into my program.
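Something along these lines is all I'm after, a rough sketch assuming GNU dc and its -e flag are available:

```python
import subprocess

# Ask GNU dc for 1/3 at 20 digits: "20 k" sets the scale, "1 3 / p" divides and prints.
# Assumes the GNU dc binary is on the PATH and supports the -e flag.
result = subprocess.run(
    ["dc", "-e", "20 k 1 3 / p"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. ".33333333333333333333"
```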
Because at the end of the day everything gets simplified to a 1 or a 0. You could store a fraction as an “object” but at some point it needs to be turned into a number to work with. That’s where floating points come into play.
You can only store rational numbers as a ratio of two integers, and there are infinitely many more irrational numbers than rational ones - as soon as you took (almost any) root or did (most) trigonometry, your exact ratio would count for nothing. Hardcore maths libraries get around this by keeping the "value in progress" as a symbolic expression for as long as possible, but working with expressions is exceptionally slow by computer standards - it takes quite a long time to keep them in their simplest form whenever you manipulate them.
You could choose a subset of fractions, though, and round every value to the nearest one. Maybe you could use powers of two as the denominators for an easy hardware implementation. Oh wait, we've just reinvented floats.
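To spell the joke out, here's a toy sketch of "round to the nearest fraction whose denominator is a power of two" (not how IEEE-754 actually packs bits, just the idea):

```python
def to_dyadic(x: float, bits: int = 10) -> tuple[int, int]:
    """Round x to the nearest fraction whose denominator is 2**bits."""
    denom = 1 << bits
    return round(x * denom), denom

num, den = to_dyadic(0.1)
print(num, "/", den)   # 102 / 1024
print(num / den)       # 0.099609375 -- already inexact, just like the float 0.1
```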
A lot of work has gone into making floating point numbers efficient and they cover 99% of use cases. In the rare case you really need perfect fractional accuracy, it's not that difficult to implement as a pair of integers.
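For example, here is a minimal sketch of that pair-of-integers idea (Python already ships a full version of this as fractions.Fraction in its standard library):

```python
from math import gcd

class Frac:
    """An exact rational kept as a reduced pair of integers with a positive denominator."""

    def __init__(self, num: int, den: int):
        if den == 0:
            raise ZeroDivisionError("denominator must be non-zero")
        if den < 0:
            num, den = -num, -den
        g = gcd(num, den)
        self.num, self.den = num // g, den // g

    def __add__(self, other: "Frac") -> "Frac":
        return Frac(self.num * other.den + other.num * self.den, self.den * other.den)

    def __mul__(self, other: "Frac") -> "Frac":
        return Frac(self.num * other.num, self.den * other.den)

    def __repr__(self) -> str:
        return f"{self.num}/{self.den}"

print(Frac(1, 3) + Frac(1, 6))   # 1/2, exactly
```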
I think the reason is that most real-number values are gonna be the result of measurement equipment (for example a camera/brightness sensor, or analog audio input). As such, these values are naturally real (analog) values, but they aren't fractions. Think of the vast amount of data in video, image, and audio files; they typically make up the largest part of broadband internet usage, so handling them efficiently is especially important, or you're gonna waste a lot of processing power.
Since these (and other) values are typically real values, they are represented by IEEE-754 floats instead of fractions.
Performance penalty, I would imagine. You would have to do many more steps at the processor level to calculate with fractions than with floats. The languages more suited toward math do have them, as someone else mentioned, but the others probably can't justify the extra computational expense for the little benefit it would have. Also, I'd bet there are already open-source libraries for all the popular languages if you really need a fraction.
I'd assume it's because implementing comparisons can't be done efficiently.
You'd either have to reduce the fraction every time you perform an operation. That would essentially require computing at least one prime decomposition (and then trying to divide the other number by each prime factor), but that's just fucking expensive.
And even that would just give you a quick equality check. For comparing numbers with < or > you'd then have to actually compute the floating-point value with enough precision, or scale the fractions, which could easily lead to overflows (comparing intmax/1 and 1/intmax would amount to comparing intmax^2/intmax to 1/intmax; the encoding length required to store intmax^2 would be twice that of a normal int... plus you'd have to perform that huge multiplication).
But what do you consider enough? For two numbers which are essentially the same except for a small epsilon, you'd need infinite precision to determine their order. So would the standard then say they are equal even though they aren't exactly? If so, what would be the minimal precision that makes sense for every conceivable case? If not, would you accept the comparison function having an essentially unbounded running time (with respect to a fixed encoding length)? Or would you allow a number to be neither smaller, nor bigger, nor equal to another number?
Why couldn't you compute p/q < r/s by checking ps < rq? If you follow the convention that denominators have to be strictly positive, then you don't even have to take signs into account, and you can check equality the same way. No float conversion necessary. You do still need to eat a big multiplication, though, which kind of sucks. The point you bring up of needing to reduce fractions after adding or multiplying is also a massive problem. Maybe we could solve this by prohibiting the end user from adding or multiplying numbers.
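Something like this sketch, assuming reduced fractions with strictly positive denominators (in a fixed-width language the products would need a wider intermediate type, e.g. 64x64 -> 128 bits; Python's arbitrary-precision integers hide that cost here):

```python
def frac_lt(p: int, q: int, r: int, s: int) -> bool:
    """Is p/q < r/s? Assumes q > 0 and s > 0, so no sign handling is needed."""
    return p * s < r * q

print(frac_lt(1, 3, 1, 2))   # True:  1/3 < 1/2
print(frac_lt(2, 4, 1, 2))   # False: 2/4 == 1/2
```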
It would be pretty easy to make a fraction class if you really wanted to. But I doubt it would result in much difference in the precision of calculations, since the result would still be limited to a float value (edit: I guess I'm probably wrong on that, but reducing a fraction would be less trivial, I think?)
Technically, floating point imitates irrational and whole numbers as well. Not all numbers, though; you'd need a more, uhm... elaborate structure to represent complex numbers, surreal numbers, vectors, matrices, and so on.
Actually, you can consider RGB values to be (triplets of) floats, too.
Typically, one pixel takes up to 32 bits of space, encoding Red, Green, Blue, and sometimes Alpha (opacity) values. That makes approximately 8 bits per color channel.
Since each channel represents a value between 0.0 (color is off) and 1.0 (color is on), every color channel is effectively an 8-bit fixed-point fraction: an integer from 0 to 255 interpreted as that integer divided by 255.
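Concretely, that "effectively" just means dividing the stored 8-bit integer by 255; here's a sketch assuming a hypothetical 0xRRGGBBAA packing:

```python
def unpack_rgba(pixel: int) -> tuple[float, float, float, float]:
    """Split a 32-bit 0xRRGGBBAA pixel into four channels normalized to [0.0, 1.0]."""
    r = (pixel >> 24) & 0xFF
    g = (pixel >> 16) & 0xFF
    b = (pixel >> 8) & 0xFF
    a = pixel & 0xFF
    return r / 255.0, g / 255.0, b / 255.0, a / 255.0

print(unpack_rgba(0xFF8000FF))   # (1.0, 0.50196..., 0.0, 1.0) -- an opaque orange
```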