Date: Thu, 9 May 2013 15:21:57 +0200
From: Szabolcs Nagy <>
Subject: Re: Using float_t and double_t in math functions

* Rich Felker <> [2013-05-08 21:43:27 -0400]:
> As far as I can tell, in most of the affected code, keeping excess
> precision does not hurt the accuracy of the result, and it might even
> improve the results. Thus, nsz and I discussed (on IRC) the
> possibility of changing intermediate variables in functions that can
> accept excess precision from float and double to float_t and double_t.
> This would not affect the generated code at all on machines without
> excess precision, but on x86 (without SSE) it eliminates all the
> costly store/load pairs. As an example (on my test machine), it

i.e. it only matters for i386 without SSE
(which is not a trendy platform nowadays),
but there it improves performance and
code size a bit, so it is worth doing

at the same time all the STRICT_ASSIGN macros
can be removed (they are already a noop); they were
there to enforce a store with the right precision
on i386 when musl is compiled without -ffloat-store,
but i don't think that needs to be supported

btw the other ugly macro that remains is
FORCE_EVAL, which forces evaluation of floating-point
expressions for their side effects (exception flags);
eventually it should be just

#define FORCE_EVAL(expr) do { \
	expr; \
} while (0)

but no compiler i know of supports this,
so for now we have volatile hacks with unnecessary
stores

there are a few more 'volatile' uses in the code,
but it should be possible to clean them all up
(fma and fmaf are probably exceptions, for similar
reasons)
