Date: Mon, 29 Jun 2020 20:46:02 +0000
From: Pascal Cuoq <cuoq@...st-in-soft.com>
To: "musl@...ts.openwall.com" <musl@...ts.openwall.com>
Subject: Re: Posits support under Musl libc?


> Posits are lightweight, fast

Ah, this is a difference in the interpretation of words. I meant lightweight in the sense that you can add two numbers in a single instruction, without linking in and calling additional emulation code, and fast in the sense that you can do from one to four additions per cycle, depending on what else you have to do at the same time. [1]
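To illustrate what "lightweight" means here, a minimal sketch in C (the posit half assumes the API of the SoftPosit emulation library, with names such as p32_add and convertDoubleToP32; treat these names as assumptions): the double addition is a single hardware instruction, whereas each posit operation is an out-of-line call into emulation code.

    #include <stdio.h>
    #include "softposit.h"  /* assumed: SoftPosit software-emulation library */

    int main(void) {
        /* Hardware path: compiles to one add instruction on any
           modern FPU (e.g. addsd on x86-64). */
        double a = 1.5, b = 2.25;
        double s = a + b;

        /* Software path: each operation is a library call that must
           decode the variable-length regime/exponent fields, compute,
           and re-encode.  The names below are assumed from SoftPosit. */
        posit32_t pa = convertDoubleToP32(1.5);
        posit32_t pb = convertDoubleToP32(2.25);
        posit32_t ps = p32_add(pa, pb);

        printf("%g %g\n", s, convertP32ToDouble(ps));
        return 0;
    }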

> and produce the same results across platforms, something which IEEE 754 doesn't guarantee. To top that, IEEE 754 isn't even a standard but just a set of guidelines which are usually implemented incorrectly due to misinterpretation or lack of expertise. So in that sense, Posits are safer than Floating-point.

This is a remarkable exaggeration. The actual differences of interpretation between IEEE 754 implementations are extremely minor for the basic operations (it would be inappropriate to start discussing them considering the current level of discourse). On the other hand, there are differences between platforms in the implementations of trigonometric and other math functions. Leaving the accuracy of these functions unspecified was a deliberate decision by the IEEE 754 standardization committee, made so as not to stifle research into better implementations of them. Surely you are not referring to such differences? Because I do not see how posits fix these (apart from having no implementations of math functions at all).
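If you want to observe these libm differences yourself, a two-line program suffices. Printing with %a shows the exact bits, so decimal formatting cannot hide a one-ulp discrepancy; sin with a huge argument exercises argument reduction, which is where implementations have historically diverged (the x87 fsin instruction being a notorious offender):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Compare this output across platforms/libms: argument reduction
           for huge inputs is where implementations tend to differ. */
        printf("sin(1e22) = %a\n", sin(1e22));
        return 0;
    }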

If you bother to investigate where differences in floating-point computations come from, as David Monniaux did [2], you end up with the conclusion that all the inconsistencies come from:

A- hardware-design decisions made in the 1980s, when it was not worthwhile to implement more than one format, so that several FPUs implemented the (widest feasible) 80-bit double-extended format and let compilers generate code that emulated single and double precision imperfectly from it (see the first sketch after point B below). Compilers could have done better, but the hardware-design decisions are partially to blame. Modern desktop hardware provides single- and double-precision operations, of course, which IEEE 754 was smart enough to standardize before it was practical to have all three formats in a single FPU.

B- compilers that produce code violating IEEE 754 even when the hardware offers a perfect implementation of it, e.g. contracting a multiplication followed by an addition into a single FMA instruction.
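To make point A concrete, here is a minimal sketch (behavior depends on compiler and flags, e.g. gcc -m32 -mfpmath=387 versus the SSE2 default on x86-64, and on the compiler actually keeping the product in a register). With x87 code generation the product is held in an 80-bit register and does not overflow, so dividing brings it back to about 8.04e307; with SSE2 the product overflows double and the result is infinity:

    #include <stdio.h>

    int main(void) {
        volatile double v = 8.04e307;  /* volatile: force a real load */
        /* v*v exceeds DBL_MAX but fits comfortably in the 80-bit
           double-extended range, so the two code generators disagree. */
        double r = v * v / v;
        printf("%g\n", r);  /* x87: ~8.04e307; SSE2: inf */
        return 0;
    }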
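And a sketch of point B. Both expressions below compute a*b - 1, where a*b = 1 - 2^-54 exactly: two correctly rounded IEEE 754 operations round a*b to 1.0 and yield 0.0, while a fused multiply-add yields the exact -0x1p-54. A compiler that contracts the first line into an FMA (compare gcc -ffp-contract=fast with -ffp-contract=off; link with -lm) silently changes the program's result:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double a = 1.0 + 0x1p-27;
        double b = 1.0 - 0x1p-27;
        /* a*b = 1 - 0x1p-54 exactly, the round-to-even midpoint below 1.0 */
        double two_ops = a * b - 1.0;      /* two rounded ops: 0.0, unless
                                              the compiler contracts this
                                              into an FMA behind your back */
        double fused   = fma(a, b, -1.0);  /* explicit FMA: -0x1p-54 */
        printf("%a\n%a\n", two_ops, fused);
        return 0;
    }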

I reluctantly admit that you have a point: all 0 of the processors that provide posit basic operations implement the exact same semantics, and all 0 of the compilers that support posits avoid the troublesome optimizations that plague some programming languages.

Note that Type I unums have been the greatest floating-point format ever invented (according to their inventor) for longer than posits have existed, so if we did have hardware implementing these ideas, it would probably be 60% Type I unum implementations, 35% Type II unums, and 5% posits. You would not actually get the same results across platforms for the basic operations. In 2023, Gustafson will have yet another better idea. Better to wait for Type IV unums.

[1] https://www.agner.org/optimize/instruction_tables.pdf
[2] https://hal.archives-ouvertes.fr/hal-00128124v5
