Uhm, I haven’t programmed in a low level language in years. I use python for my job now, and all I know are floats and ints. I don’t know what this foreign language is you speak of.
The only reason for floating point numbers is to use your laptop as a life buoy
I have been thinking that maybe modern programming languages should move away from supporting all of IEEE 754 within one data type.
Like, we’ve figured out that having a `null` value for everything always is a terrible idea. Instead, we’ve started encoding potential absence into our type system with `Option` or `Result` types, which also encourages dealing with such absence at the edges of our program, where it should be done.
Well, `NaN` is `null` all over again. Instead, we could make the division operator an associated function which returns a `Result<f64>` and disallow `f64` from ever being `NaN`.
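A minimal sketch of what that could look like, assuming a free function instead of the `/` operator (the name `checked_div` and the `String` error type are made up for illustration):

```rust
// Hypothetical: division that surfaces non-representable results as an
// error instead of silently producing NaN or infinity.
fn checked_div(num: f64, den: f64) -> Result<f64, String> {
    let q = num / den;
    if q.is_finite() {
        Ok(q)
    } else {
        Err(format!("{num} / {den} is not representable as a finite f64"))
    }
}

fn main() {
    assert_eq!(checked_div(6.0, 3.0), Ok(2.0));
    assert!(checked_div(1.0, 0.0).is_err()); // would be +inf
    assert!(checked_div(0.0, 0.0).is_err()); // would be NaN
}
```

With this in place, an `f64` that you actually hold is always a finite number, and the caller decides at the call site what a failed division means.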
My main concern is interop with the outside world. So, I guess, there would still need to be an IEEE 754 compliant data type. But we could call it `ieee_754_f64` to really get on the nerves of anyone wanting to use it when it’s not strictly necessary.
Well, and my secondary concern, which is that AI models would still want to just calculate with tons of floats, without error-handling at every intermediate step, even if it sometimes means that the end result is a shitty vector of `NaN`s; that would be supported with that, too.
`NaN` isn’t like `null` at all. It doesn’t mean there isn’t anything. It means the result of the operation is not a number that can be represented.
The only alternative is making every operation that would result in `NaN` an error. Which doesn’t seem like a great solution.
Well, that is what I meant. That `NaN` is effectively an error state. It’s only like `null` in that any float can be in this error state, because you can’t rule out this error state via the type system.

Why do you feel like it’s not a great solution to make `NaN` an explicit error?
There’s plenty of cases where I would like to do some large calculation that can potentially give a NaN at many intermediate steps. I prefer to check for the NaN at the end of the calculation, rather than have a bunch of checks in every intermediate step.
How I handle the failed calculation is rarely dependent on which intermediate step gave a NaN.
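This "check once at the end" style works because NaN propagates through IEEE 754 arithmetic: once any step produces it, every later step keeps it. A sketch (the `pipeline` function is made up for illustration):

```rust
// NaN can appear at either intermediate step, but a single is_nan()
// check at the end catches both cases.
fn pipeline(x: f64) -> f64 {
    let a = x.sqrt();       // NaN if x < 0.0
    let b = (a - 1.0).ln(); // NaN if a < 1.0
    b * 2.0                 // NaN propagates through arithmetic
}

fn main() {
    assert!(pipeline(9.0).is_finite()); // sqrt(9) = 3, ln(2) * 2 is finite
    assert!(pipeline(-4.0).is_nan());   // fails at step 1
    assert!(pipeline(0.25).is_nan());   // fails at step 2, same single check
}
```

Replacing each step with a `Result` would force an error branch at both intermediate lines, even though the handling is identical either way.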
This feels like people want to take away a tool that makes development in the engineering world a whole lot easier because “null bad”, or because they can’t see the use of multiplying 1e27 with 1e-30.
Float processing is at the hardware level. It needs a way to signal when an unrepresentable value would be returned.
I agree with moving away from `float`s but I have a far simpler proposal… just use a struct of two integers - a value and an offset. If you want to make it an IEEE standard where the offset is a four bit signed value and the value is just a 28 or 60 bit regular old integer then sure - but I can count the number of times I used floats on one hand and I can count the number of times I wouldn’t have been better off just using two integers on -0 hands.
Floats specifically solve the issue of how to store an absurdly large range of values in an extremely modest amount of space - that’s not a problem we need to generalize a solution for. In most cases having values up to the million magnitude with three decimals of precision is good enough. Generally speaking when you do float arithmetic your numbers will be within an order of magnitude or two of each other… most people aren’t adding the length of the universe in seconds to the width of an atom in meters… and if they are, floats don’t work anyways.
I think the concept of having a fractionally defined value with a magnitude offset was just deeply flawed from the get-go - we need some way to deal with decimal values on computers but expressing those values as fractions is needlessly imprecise.
While I get your proposal, I’d think this would make dealing with floats hell. Do you really want to `.unwrap()` every time you deal with it? Surely not.
One thing that would be great, is that the `/` operator could work between `Result` and `f64`, as well as between `Result` and `Result`. Would be like doing a `.map(|left| left / right)` operation.
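A sketch of that operator, assuming a newtype wrapper (Rust's orphan rule forbids implementing `Div` directly on std's `Result`; the `Checked` name and the `()` error type are made up):

```rust
use std::ops::Div;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Checked(Result<f64, ()>);

// Checked / f64: behaves like .map(|left| left / right), except a
// non-finite quotient also becomes an Err.
impl Div<f64> for Checked {
    type Output = Checked;
    fn div(self, rhs: f64) -> Checked {
        Checked(self.0.and_then(|left| {
            let q = left / rhs;
            if q.is_finite() { Ok(q) } else { Err(()) }
        }))
    }
}

// Checked / Checked: an Err on either side short-circuits.
impl Div<Checked> for Checked {
    type Output = Checked;
    fn div(self, rhs: Checked) -> Checked {
        match rhs.0 {
            Ok(r) => self / r,
            Err(()) => Checked(Err(())),
        }
    }
}

fn main() {
    let a = Checked(Ok(8.0));
    assert_eq!((a / 2.0).0, Ok(4.0));              // Checked / f64
    assert_eq!((a / Checked(Ok(4.0))).0, Ok(2.0)); // Checked / Checked
    assert_eq!((a / 0.0).0, Err(()));              // would be +inf
}
```

Chained divisions then compose without any intermediate unwrapping, and the error state only has to be inspected once at the end.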
Well, not every time. Only if I do a division or get an `ieee_754_f64` from the outside world. That doesn’t happen terribly often in the applications I’ve worked on.

And if it does go wrong, I do want it to explode right then and there. Worst case would be if it writes random `NaN`s into some database and no one knows where they came from.
As for your suggestion with the slash accepting `Result`s, yeah, that could resolve some pain, but I’ve rarely seen multiple divisions being necessary back-to-back, and I don’t want people passing around a `Result<f64>` in the codebase. Then you can’t see where it went wrong anymore either.

So, personally, I wouldn’t put that division operator into the stdlib, but having it available as a library, if someone needs it, would be cool, yeah.
Serious answer: Posits seem cool, like they do most of what floats do, but better (in a given amount of space). I think supporting them in hardware would be awesome, but of course there’s a chicken and egg problem there with supporting them in programming languages.
I had the great honour of seeing John Gustafson give a presentation about unums shortly after he first proposed posits (type III unums). The benefits over floating point arithmetic seemed incredible, and they seemed largely much simpler.
I also got to chat with him about “Gustafson’s Law”, which kinda flips Amdahl’s Law on its head. Parallel computing has long been a bit of an interest for me. I was also in my last year of computer science studies then, and we were covering similar subjects at the time. I found that timing to be especially amusing.
Based and precision pilled.