5 points

Uhm, I haven’t programmed in a low-level language in years. I use Python for my job now, and all I know are floats and ints. I don’t know what this foreign language is you speak of.

6 points

The only reason for floating point numbers is to use your laptop as a life buoy

17 points

I have been thinking that maybe modern programming languages should move away from supporting all of IEEE 754 within one data type.

Like, we’ve figured out that having a null value for everything is a terrible idea. Instead, we’ve started encoding potential absence into our type system with Option or Result types, which also encourages dealing with such absence at the edges of our program, where it should be done.

Well, NaN is null all over again. Instead, we could make the division operator an associated function which returns a Result<f64> and disallow f64 from ever being NaN.
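
A minimal sketch of what that could look like in Rust; `checked_div` and `FloatError` are hypothetical names, not anything from the standard library:

```rust
#[derive(Debug)]
enum FloatError {
    DivisionByZero,
    NotRepresentable,
}

// Division as a fallible operation: the caller must handle the error
// case, and a plain f64 can never silently carry a NaN onward.
fn checked_div(lhs: f64, rhs: f64) -> Result<f64, FloatError> {
    if rhs == 0.0 {
        return Err(FloatError::DivisionByZero);
    }
    let quotient = lhs / rhs;
    if quotient.is_nan() {
        Err(FloatError::NotRepresentable)
    } else {
        Ok(quotient)
    }
}
```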

My main concern is interop with the outside world. So, I guess, there would still need to be an IEEE 754-compliant data type. But we could call it ieee_754_f64 to really get on the nerves of anyone wanting to use it when it’s not strictly necessary.

Well, and my secondary concern, that AI models would still want to just calculate with tons of floats without error-handling at every intermediate step, even if it sometimes means the end result is a shitty vector of NaNs: that would be supported by that type, too.

6 points

NaN isn’t like null at all. It doesn’t mean there isn’t anything; it means the result of the operation is not a number that can be represented.

The only option is to make operations that would result in NaN into errors, which doesn’t seem like a great solution.

6 points

Well, that is what I meant: that NaN is effectively an error state. It’s only like null in that any float can be in this error state, because you can’t rule out this error state via the type system.

Why do you feel like it’s not a great solution to make NaN an explicit error?

2 points

There are plenty of cases where I would like to do some large calculation that can potentially give a NaN at many intermediate steps. I prefer to check for NaN at the end of the calculation rather than have a bunch of checks at every intermediate step.

How I handle the failed calculation is rarely dependent on which intermediate step gave a NaN.

This feels like people want to take away a tool that makes development in the engineering world a whole lot easier because “null bad”, or because they can’t see the use of multiplying 1e27 by 1e-30.
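
This is what that pattern could look like in Rust; the calculation itself is made up for illustration:

```rust
// NaN propagates through arithmetic, so one check at the end of a long
// calculation covers a NaN produced at any intermediate step.
fn simulate(samples: &[f64]) -> Option<f64> {
    let mut acc = 0.0_f64;
    for &x in samples {
        // (x - 1.0).sqrt() quietly yields NaN whenever x < 1.0.
        acc += (x - 1.0).sqrt() / x;
    }
    if acc.is_nan() {
        None // handle the failed calculation once, here
    } else {
        Some(acc)
    }
}
```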

1 point

Idk if you’ve ever had to actually work with floats, but in statistics you deal with NaNs all the time. Data is absent from the data set. If it were an error every time, you wouldn’t get anything done.
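
For instance, a mean that skips missing values, sketched in Rust with NaN standing in for absent data points (names are illustrative):

```rust
// Compute the mean of the present values, treating NaN as "missing".
fn mean_ignoring_missing(data: &[f64]) -> Option<f64> {
    let present: Vec<f64> = data.iter().copied().filter(|x| !x.is_nan()).collect();
    if present.is_empty() {
        None
    } else {
        Some(present.iter().sum::<f64>() / present.len() as f64)
    }
}

// mean_ignoring_missing(&[1.0, f64::NAN, 3.0]) == Some(2.0)
```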

2 points

It doesn’t have to “error” if the result case is offered and handled.

1 point

Float processing happens at the hardware level. It needs a way to signal when an unrepresentable value would be returned.

10 points

I agree with moving away from floats, but I have a far simpler proposal… just use a struct of two integers: a value and an offset. If you want to make it an IEEE standard where the offset is a four-bit signed value and the value is a 28- or 60-bit regular old integer, then sure. But I can count the number of times I’ve used floats on one hand, and I can count the number of times I wouldn’t have been better off just using two integers on -0 hands.

Floats specifically solve the issue of how to store an absurdly large range of values in an extremely modest amount of space, and that’s not a problem we need to generalize a solution for. In most cases, having values up to the millions in magnitude with three decimals of precision is good enough. Generally speaking, when you do float arithmetic, your numbers will be within an order of magnitude or two of each other… most people aren’t adding the length of the universe in seconds to the width of an atom in meters, and if they are, floats don’t work anyways.

I think the concept of having a fractionally defined value with a magnitude offset was just deeply flawed from the get-go. We need some way to deal with decimal values on computers, but expressing those values as binary fractions is needlessly imprecise.
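
A sketch of the “two integers” idea in Rust, as a value scaled by a decimal offset (names are illustrative):

```rust
// value × 10^offset, e.g. Scaled { value: 3375, offset: -3 } == 3.375
#[derive(Debug, Clone, Copy)]
struct Scaled {
    value: i64, // plain integer significand
    offset: i8, // decimal exponent
}

impl Scaled {
    fn mul(self, other: Scaled) -> Scaled {
        // Integer multiply, exponents add; overflow handling omitted.
        Scaled {
            value: self.value * other.value,
            offset: self.offset + other.offset,
        }
    }
}

// Exact where binary floats are not:
// 1.5 * 2.25  ->  Scaled { value: 15, offset: -1 }.mul(Scaled { value: 225, offset: -2 })
//             ==  Scaled { value: 3375, offset: -3 }  ==  3.375
```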

6 points

While I get your proposal, I’d think this would make dealing with floats hell. Do you really want to .unwrap() every time you deal with one? Surely not.

One thing that would be great is if the / operator could work between Result<f64> and f64, as well as between Result and Result. It would be like doing a .map(|left| left / right) operation.
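
In Rust that could look roughly like this; because of the orphan rule you can’t implement Div directly on Result<f64, E>, so this sketch uses a hypothetical newtype:

```rust
use std::ops::Div;

#[derive(Debug)]
struct DivError;

// Wrapper around a fallible float, so we’re allowed to implement Div on it.
#[derive(Debug)]
struct Checked(Result<f64, DivError>);

impl Div<f64> for Checked {
    type Output = Checked;
    fn div(self, rhs: f64) -> Checked {
        // Effectively a .map over the Ok case, failing on division by zero.
        Checked(self.0.and_then(|left| {
            if rhs == 0.0 {
                Err(DivError)
            } else {
                Ok(left / rhs)
            }
        }))
    }
}

// let x = Checked(Ok(10.0)) / 4.0 / 2.0; // Checked(Ok(1.25))
```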

1 point

Well, not every time. Only if I do a division or get an ieee_754_f64 from the outside world. That doesn’t happen terribly often in the applications I’ve worked on.

And if it does go wrong, I do want it to explode right then and there. The worst case would be it writing random NaNs into some database with no one knowing where they came from.

As for your suggestion with the slash accepting Results, yeah, that could resolve some pain, but I’ve rarely seen multiple divisions needed back-to-back, and I don’t want people passing a Result<f64> around the codebase; then you can’t see where it went wrong anymore either.
So, personally, I wouldn’t put that division operator into the stdlib, but having it available as a library, if someone needs it, would be cool, yeah.

47 points

Serious answer: Posits seem cool; they do most of what floats do, but better (in a given amount of space). I think supporting them in hardware would be awesome, but of course there’s a chicken-and-egg problem there with supporting them in programming languages.

21 points

Posits aside, that page had one of the best, clearest explanations of how floating point works that I’ve ever read. The authors of my college textbooks could have learned a thing or two about clarity from this writer.

3 points

I had the great honour of seeing John Gustafson give a presentation about unums shortly after he first proposed posits (type III unums). The benefits over floating-point arithmetic seemed incredible, and they seemed much simpler overall.

I also got to chat with him about “Gustafson’s Law”, which kinda flips Amdahl’s Law on its head. Parallel computing has long been a bit of an interest for me. I was also in my last year of computer science studies then, and we were covering similar subjects at the time. I found that timing to be especially amusing.
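
For anyone who hasn’t seen them side by side, the usual textbook statements of the two laws, with p the parallelizable fraction of the work and N the number of processors:

$$S_{\text{Amdahl}}(N) = \frac{1}{(1 - p) + p/N} \qquad\qquad S_{\text{Gustafson}}(N) = (1 - p) + pN$$

Amdahl assumes a fixed problem size, so the serial fraction caps the speedup; Gustafson assumes the problem grows with N, which is the sense in which it flips Amdahl on its head.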

25 points

Based and precision pilled.

