Date: Sun, 15 Aug 2021 10:52:13 -0500 (CDT)
From: Ariadne Conill <ariadne@...eferenced.org>
To: musl@...ts.openwall.com
cc: Szabolcs Nagy <nsz@...t70.net>
Subject: Re: [PATCH #2] Properly simplified nextafter()

Hi,

On Sun, 15 Aug 2021, Stefan Kanthak wrote:

> Szabolcs Nagy <nsz@...t70.net> wrote:
>
>> * Stefan Kanthak <stefan.kanthak@...go.de> [2021-08-15 09:04:55 +0200]:
>>> Szabolcs Nagy <nsz@...t70.net> wrote:
>>>> you should benchmark, but the second best is to look
>>>> at the longest dependency chain in the hot path and
>>>> add up the instruction latencies.
>>>
>>> 1 billion calls to nextafter(), with random from, and to either 0 or +INF:
>>> run 1  against glibc                              8.58 ns/call
>>> run 2  against musl original                      3.59 ns/call
>>> run 3  against musl patched                       0.52 ns/call
>>> run 4  the pure floating-point variant from
>>>        my initial post in this thread             0.72 ns/call
>>> run 5  the assembly variant I posted              0.28 ns/call
>>
>> thanks for the numbers. it's not the best measurement
>
> IF YOU DON'T LIKE IT, PERFORM YOUR OWN MEASUREMENT!
>
>> but shows some interesting effects.
>
> It clearly shows that musl's current implementation SUCKS, at least
> on AMD64.

I would rather have an implementation that runs at 3.59 ns/call and is 
maintained by somebody who is actually pleasant to talk to.  In the grand 
scheme of things, 3.59 ns/call, and even 8.58 ns/call, are not a big deal 
for a function like nextafter().
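
(For context on where figures like these come from: you time a tight loop 
over pre-generated inputs and divide by the iteration count.  The sketch 
below is only illustrative; the iteration count, RNG, and input handling 
are my assumptions, not the harness Stefan actually used.)

/* Minimal per-call timing sketch for nextafter().  Illustrative only:
 * iteration count, RNG, and input distribution are assumptions, not the
 * benchmark from earlier in this thread. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 100000000ULL  /* iterations; the thread's numbers used 1e9 */

int main(void)
{
    /* Pre-generate random "from" values so RNG cost stays out of the loop. */
    static double from[1 << 16];
    for (size_t i = 0; i < sizeof from / sizeof from[0]; i++)
        from[i] = (double)rand() / RAND_MAX * 2.0 - 1.0;

    volatile double sink = 0.0;  /* keeps the calls from being optimized away */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long long i = 0; i < N; i++) {
        double to = (i & 1) ? INFINITY : 0.0;   /* "to either 0 or +INF" */
        sink += nextafter(from[i & 0xffff], to);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.2f ns/call (sink=%g)\n", ns / N, (double)sink);
    return 0;
}

Build with something like "gcc -O2 bench.c -lm"; pinning to one core and 
repeating a few runs would be needed before the numbers are worth comparing 
across libcs.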

If musl does wind up merging this, I intend to revert that merge in Alpine 
because I cannot trust the correctness of any code written by somebody 
with this attitude.

Ariadne
