zlacker

[parent] [thread] 8 comments
1. firebo+(OP)[view] [source] 2023-11-27 23:42:56
You just need posits! John Gustafson will teach you how. It's an amazing concept, and it would be great to see it in more hardware. It's especially good for AI and numerous other applications, and it puts floats to shame. Of course this guy starts with integers, which are great, but for floats posits shine, especially if you can control the bits dedicated to each 'fraction'.
replies(2): >>chrisw+C6 >>lifthr+nq
2. chrisw+C6[view] [source] 2023-11-28 00:23:51
>>firebo+(OP)
Their range (excluding infinity) is slightly higher than floats, but still abysmally tiny compared to the BLC encodings used in the article.

PS: I still prefer type 1 unums, due to their fallback to intervals :)

replies(1): >>firebo+143
3. lifthr+nq[view] [source] 2023-11-28 03:10:04
>>firebo+(OP)
Posits would have been good if they had been invented back when there was no standard floating-point format. But they are not a tremendous upgrade over IEEE 754, which has lots of issues, but so do posits. A customizable IEEE 754 would fare about as well as customizable posits, and AI workloads (especially for inference) shy away from floats nowadays because they need much finer control than either IEEE 754 or posits can offer.
replies(1): >>firebo+a23
4. firebo+a23[view] [source] [discussion] 2023-11-28 21:55:14
>>lifthr+nq
All you gotta do is look at the graphs to see that posits are more accurate and precise in pretty much any given scenario, assuming the bits are distributed 'wisely'. Unfortunately, that's not easy to do in hardware, and the hardware implementations suffer from rigidity.

I don't really want to get into the nitty-gritty, as John will answer emails regarding this stuff. I've personally done so, and he's very polite and informative. I was using them for fractals, in software, which was unfortunately very slow, but the results were amazing. I've read through his papers on them and it took me a while to really 'get it', but once I did: oh man, even basic unums put floats to shame. While perhaps not a tremendous upgrade, I much prefer the distribution and accuracy, and how there's far less overlap, fewer NaNs, infinities, etc.

replies(1): >>lifthr+eK3
5. firebo+143[view] [source] [discussion] 2023-11-28 22:01:46
>>chrisw+C6
Not really. I don't think you've read his papers. The implementations in hardware lack range, but soft posits can expand to ridiculous ranges, and comparing the math, posits are superior (imo, granted, they're more or less similar) to BLC, depending on how they're implemented, of course.
replies(1): >>chrisw+RL4
6. lifthr+eK3[view] [source] [discussion] 2023-11-29 02:11:25
>>firebo+a23
I don't think you need to explain everything again: I am aware of posits' strengths over IEEE 754, which is why I acknowledged them first. But as an incremental improvement over IEEE 754, posits are just not enough to justify the switch. When people need something better served by posits, they don't use posits---they use non-standard variants of IEEE 754 (e.g. FTZ/DAZ, bfloat16).
replies(1): >>firebo+9B6
7. chrisw+RL4[view] [source] [discussion] 2023-11-29 12:23:06
>>firebo+143
> posits are superior (imo, granted, they're more or less similar) to BLC, depending on how they're implemented, of course

I'm very confused that you say posits are "more or less similar" to Binary Lambda Calculus. Posits are an inert data encoding: to interpret a posit as a number, we plug its parts (sign, regime, exponent and fraction) into a simple formula to get a numerical value. Those parts can have varying size (e.g. for soft posits), but the range of results is fundamentally limited by its use of exponentials.
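For concreteness, here's a minimal sketch of that decoding formula in Python. `decode_posit` is a hypothetical helper name; this assumes es=2 (as in the 2022 posit standard) and ignores rounding, which real libraries such as softposit handle properly:

```python
def decode_posit(pattern, es=2):
    """Decode a posit bit string (e.g. '01101011') to a float.

    Sketch only: sign, regime, exponent and fraction are plugged
    into value = (-1)^sign * useed^r * 2^e * (1 + f),
    where useed = 2^(2^es).
    """
    n = len(pattern)
    if pattern == '0' * n:
        return 0.0
    if pattern == '1' + '0' * (n - 1):
        return float('nan')  # NaR (Not a Real)
    sign = pattern[0] == '1'
    if sign:
        # negative posits are stored in two's complement
        pattern = format((1 << n) - int(pattern, 2), f'0{n}b')
    body = pattern[1:]
    first = body[0]
    run = len(body) - len(body.lstrip(first))   # regime run length
    r = run - 1 if first == '1' else -run
    rest = body[run + 1:]                       # skip regime + terminator
    e = int(rest[:es].ljust(es, '0'), 2) if es else 0  # truncated bits are 0
    f_bits = rest[es:]
    f = int(f_bits, 2) / (1 << len(f_bits)) if f_bits else 0.0
    useed = 2 ** (2 ** es)
    mag = useed ** r * 2 ** e * (1 + f)
    return -mag if sign else mag

decode_posit('01000000')  # → 1.0
decode_posit('01101011')  # → 112.0
```

Note how the regime's run length feeds the `useed ** r` factor: that chained exponential is what gives posits their tapered precision, but it is still "just" an exponential.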

In contrast, BLC is a Turing-complete, general-purpose programming language. To interpret a BLC value as a number, we:

- Parse its binary representation into lambda abstractions (00), function calls (01) and de-Bruijn-indexed variables (encoded in unary)

- Apply the resulting term to the symbol `one-more-than` and the symbol `zero`

- Beta-reduce that expression until it reaches a normal form. There is no way, in general, to figure out if this step will finish or get stuck forever: even if it will eventually finish, that could take trillions of years or longer.

- Read the resulting symbols as a unary number, e.g. `one-more-than (one-more-than zero)` is the number two
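The steps above can be sketched as a toy Python evaluator (this sidesteps the halting caveat by evaluating with host-language closures, and the bit string below is the BLC encoding of the Church numeral two):

```python
def parse(bits, i=0):
    """Parse a BLC bit string into ('lam', body) | ('app', f, a) | ('var', k)."""
    if bits[i:i+2] == '00':                     # 00 = lambda abstraction
        body, j = parse(bits, i + 2)
        return ('lam', body), j
    if bits[i:i+2] == '01':                     # 01 = function call
        f, j = parse(bits, i + 2)
        a, k = parse(bits, j)
        return ('app', f, a), k
    k = 0                                       # 1^k 0 = de Bruijn index k
    while bits[i] == '1':
        k += 1
        i += 1
    return ('var', k), i + 1

def evaluate(term, env=()):
    """Evaluate a parsed term; de Bruijn indices are 1-based."""
    tag = term[0]
    if tag == 'var':
        return env[term[1] - 1]
    if tag == 'lam':
        return lambda arg: evaluate(term[1], (arg,) + env)
    return evaluate(term[1], env)(evaluate(term[2], env))

# λf.λx. f (f x), i.e. Church numeral two, as BLC bits:
two, _ = parse('0000011100111010')
evaluate(two)(lambda n: n + 1)(0)  # → 2
```

Applying the decoded term to a successor function and zero, as in the last two steps, reads the unary result back as an ordinary integer.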

Posit-style numbers can certainly be represented in BLC, by writing a lambda abstraction that implements the posit formula; but BLC can implement any other computable function, which includes many that grow much faster than the exponentials used in the posit formula (e.g. this has a nice explanation of many truly huge numbers https://www.reddit.com/r/math/comments/283298/how_to_compute... )

replies(1): >>firebo+tA6
8. firebo+tA6[view] [source] [discussion] 2023-11-29 21:23:31
>>chrisw+RL4
That's fun. I understand all of that and exactly where the error lies. But you don't. And I'm not gonna tell you. :)
9. firebo+9B6[view] [source] [discussion] 2023-11-29 21:26:53
>>lifthr+eK3
I guess we just disagree. I find the distribution of posits very logical, and I can see your points, but I digress.
[go to top]