Date: Mon, 27 Feb 2012 17:24:37 -0500
From: Rich Felker <dalias@...ifal.cx>
To: musl@...ts.openwall.com
Subject: Re: libm

On Mon, Feb 27, 2012 at 10:02:53PM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@...ifal.cx> [2012-01-23 12:07:15 -0500]:
> > On Mon, Jan 23, 2012 at 05:41:52PM +0100, Szabolcs Nagy wrote:
> > > i've looked into libm implementations
> > > to figure out what's best for musl
> > >
> > > the extended precision algorithms are reused across
> ....
> >
> > Any ideas how the different ones evolved (separately written or common
> > ancestor code, etc.) and where we should look to pull code from?
> >
>
> meanwhile i looked more into libm design issues
> here are some questions i come up with:
>
> for background issues see
> http://nsz.repo.hu/libm
>
> Code organization:
> (ldX is X bit long double)
>
> Do we want ld128?

Only for systems where long double is 128-bit, and so far we don't
support any such systems. I'd say it's very low priority right now.

> Should we try to use common code for ld80 and ld128?

This would be nice, but I doubt it's feasible, especially without a
lot of divergence from upstream.

> How to do ld64: wrap double functions or alias them?

Wrap. Aliasing is non-conformant; distinct functions visible to the
application (and not in the reserved namespace) must not have the same
address. I don't mind bending this rule for non-standard functions
that exist only as weak aliases for compatibility with legacy systems,
since conformant applications cannot use them anyway, but for standard
functions this is really a conformance issue. Hopefully the wrappers
will compile to a single tail-call jump instruction anyway.

> How to tie the ld* code to the arch in the build system?

Just put all the files in place, and put #if LDBL_MANT_DIG==... etc.
in the files to make them compile to empty .o files on platforms
where they're not needed.

> Make complex optional?

At this point I don't see a good reason to make anything optional.
For static linking it won't get pulled in unnecessarily anyway, and
for dynamic linking, the whole libc.so is still very small. If size
becomes an issue in the future, I'd like a more generic way of
choosing what symbols to link into the .so at link time rather than
adding lots of #ifdef to the source.

> Keep complex in math/ or cmath/?

I'm fairly indifferent to the issue, especially since with the above
comments in mind it shouldn't affect the build system. If math/ is
too cluttered already, cmath/ may be nice.

> Workarounds:
>
> Use STDC pragmas (even though gcc does not support them)?

There may be another -f option we need to make gcc extra-attentive to
standards for floating point...

> Use volatile consistently to avoid evaluation precision and const
> folding issues?

This might be a nice extra precaution, and might be best if we can't
find any other consistently-working fix.

> Use special compiler flags against unwanted optimization
> (-frounding-math, -ffloat-store)?

I think these are what we want. My understanding is that they behave
similarly to the STDC pragmas. If we add both the STDC pragmas (which
gcc will ignore) and -frounding-math and -ffloat-store, then either
GCC or a C99-conformant compiler should generate correct code.

> Do inline/macro optimization for small functions? (isnan, isinf,
> signbit, creal, cimag,..)
> In complex code prefer creal(), cimag() or a union to (un)pack re,im?

My preference would be to use the standard function-like-macro names
(isnan, creal, etc.) internally. They should expand to something just
as efficient as the union unpacking stuff, but actually be portable
and clear to somebody reading the code. Of course if it's a lot of
gratuitous changes from upstream I'd be willing to leave it alone, as
long as the upstream code is not invoking UB.

> Code cleanups:
>
> Keep diffability with freebsd/openbsd code or reformat the code for
> clarity?
> Keep e_, s_, k_ fdlibm source file name prefixes?
I'm fairly indifferent on the matter.

> Should 0x1p0 float format be preferred over decimal format?

I would prefer hex floats, so the exact value is clear and there's no
danger of toolchain bugs causing the constants to be miscompiled.

> Should isnan, signbit,.. be preferred over in-place bithacks?

Yes, but see above.

> Is unpacking a double into 2 int32_t ok (signed int representation)?

Personally I find it really ugly, especially since it introduces an
endian dependency. And there may be UB issues with signed overflow
unless the code is very careful. uint64_t is the ideal unpacking
format for double.

> Is unpacking the mantissa of ld80 into an int64_t ok?

Yes, and this has an unavoidable endian issue... Actually, uint64_t
would be better. A valid, finite, normal ld80 mantissa will always be
negative as a signed integer, by the way (because the leading 1 is
mandatory but not implicit).

Rich
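A minimal sketch of the "wrap, don't alias" approach for the ld64
case, combined with the LDBL_MANT_DIG guard for per-arch file
selection. The name my_fabsl is hypothetical, used here only so the
sketch does not collide with the host libm's real fabsl; a real musl
source file would define fabsl itself.

```c
#include <float.h>
#include <math.h>

/* Hypothetical sketch of a musl-style fabsl.c. On archs where long
 * double is plain double (LDBL_MANT_DIG == 53), the l-suffixed
 * function is a thin wrapper rather than an alias, so the two
 * functions keep distinct addresses as conformance requires; the
 * wrapper typically compiles to a single tail-call jump. */
#if LDBL_MANT_DIG == 53
long double my_fabsl(long double x)
{
    return fabs(x);  /* long double == double here, so this is exact */
}
#else
/* ld80/ld128 arch: the real extended-precision implementation would
 * live here; for this sketch we simply defer to the host's fabsl. */
long double my_fabsl(long double x)
{
    return fabsl(x);
}
#endif
```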
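The volatile workaround discussed above can be sketched as follows
(force_rounding is a hypothetical helper name; the STDC pragma is
shown for completeness, and gcc will warn that it is unsupported,
which is exactly why -frounding-math/-ffloat-store are considered):

```c
#include <float.h>

/* Honored by a conformant C99 compiler; gcc ignores it with a
 * warning, hence the discussion of equivalent -f compiler flags. */
#pragma STDC FENV_ACCESS ON

/* On x87, intermediate results may be kept in 80-bit registers with
 * excess precision; storing through a volatile forces a genuine
 * 64-bit store and also blocks constant folding of the value. */
double force_rounding(double x)
{
    volatile double y = x;
    return y;
}
```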
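The uint64_t unpacking preferred above can be sketched with memcpy,
which avoids both the endian dependency of two int32_t halves and any
signed-overflow UB. The helper names are hypothetical; musl's actual
internals may look different.

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret the bits of a double as one uint64_t. memcpy is the
 * aliasing-safe way to type-pun, and compilers turn it into a plain
 * register move. */
static uint64_t asuint64(double x)
{
    uint64_t u;
    memcpy(&u, &x, sizeof u);
    return u;
}

static int my_signbit(double x)
{
    return (int)(asuint64(x) >> 63);  /* sign is the top bit */
}

static int my_isnan(double x)
{
    /* NaN: exponent all ones and mantissa nonzero, i.e. magnitude
     * bits strictly above those of +infinity (0x7ff0000000000000). */
    return (asuint64(x) & 0x7fffffffffffffffULL) > 0x7ff0000000000000ULL;
}
```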