Date: Tue, 11 Oct 2022 13:46:54 -0400
From: Rich Felker <>
To: Markus Wichmann <>,
Subject: Re: Why thousands of digits in floatscan.c?

On Sun, Oct 09, 2022 at 12:35:50PM +0200, Szabolcs Nagy wrote:
> * Markus Wichmann <> [2022-10-09 10:45:08 +0200]:
> > Hi all,
> > 
> > so I recently read this wonderful paper:
> > 
> > Great read — I recommend it to everyone, even though some of it is not
> > directly applicable to the C application programmer.
> > 
> > Anyway, towards the end of the paper, Goldberg makes the argument that
> > a 9-digit decimal number is always enough to uniquely identify a
> > single-precision number. Generalizing from his argument, I calculated
> > that 36 decimal digits are needed for a quad-precision number (the
> > digit count for a w-bit mantissa is ceil(w * log10 2) + 1, which is 9
> > for single-prec, 17 for double-prec, 21 for double-extended, and 33
> > for double-double).
> > 
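As a quick sanity check on those counts, here is a small Python sketch of
that formula (the helper name is just for illustration):

```python
import math

def unique_decimal_digits(mant_bits):
    # Decimal digits that always suffice to uniquely identify a binary
    # float with a mant_bits-bit significand: ceil(w * log10 2) + 1.
    return math.ceil(mant_bits * math.log10(2)) + 1

print(unique_decimal_digits(24))   # single precision    -> 9
print(unique_decimal_digits(53))   # double precision    -> 17
print(unique_decimal_digits(64))   # x87 double-extended -> 21
print(unique_decimal_digits(113))  # quad precision      -> 36
```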
> > Now, I looked again at floatscan.c. The algorithm in decfloat() mostly
> > consists of reading all the decimal digits into a base 1 billion number,
> > which could have any length at all, then aligning the radix point with a
> > word boundary, and then shifting that number such that exactly one
> > mantissa's worth of bits is left of the radix point. For the most
> > part, the algorithm does not care about the remaining bits beyond two
> > pieces of information: whether the tail is equal to 0, and otherwise
> > how it compares to 0.5. So I am left wondering: if only the first
> > LDBL_MANT_DIG bits of the number, plus two more for the tail, ever
> > count for anything, why does the algorithm make room for thousands of
> > decimal digits? Would it not be enough to save a few dozen? What
> > purpose do the far-back digits have?
> *_DECIMAL_DIG is enough if we know the digits are the decimal
> conversion of an actual fp number of the right type and right
> rounding mode.
> but e.g. it can be the digits of a number halfway between two fp
> numbers exactly. that sequence of digits would never occur when
> an actual fp number is printed but it is still a valid input and
> has to be read to the end to decide which way to round. (assuming
> nearest rounding mode.)
> the worst case is halfway between {FLT,DBL,LDBL}_TRUE_MIN and 0.
> and the digits of that would be
>  2^{-150,-1075,-16495} = 5^{150,..} * 10^{-150,..}
> i.e. 0. followed by many zeros and then the digits of 5^16495.
> depending on the last digit it has to be rounded up or down.
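The double-precision version of that worst case is easy to demonstrate; a
sketch using Python's decimal module for the exact expansion (ties-to-even
makes the exact halfway string round down to zero):

```python
from decimal import Decimal, getcontext

getcontext().prec = 800  # 5**1075 has 752 digits, so this is exact

# Exactly halfway between 0 and DBL_TRUE_MIN = 2**-1074.
half = Decimal(2) ** -1075
s = format(half, "f")  # "0." then ~323 zeros and the digits of 5**1075

assert float(s) == 0.0          # exact tie: round-to-nearest-even picks 0
bumped = s[:-1] + "6"           # the last digit of 5**1075 is 5; nudge it up
assert float(bumped) == 5e-324  # now it must round up to DBL_TRUE_MIN
```

Until that very last digit is seen, there is no way to know which side of
the tie the input falls on.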

Yep, this.

Also, AFAICT it's impossible to tell, before you know the exponent, at
what point you might be able to discard further mantissa digits and
still get the right answer. There may be some shortcuts for this, but
they're non-obvious, and if you need to buffer all the mantissa digits
anyway, the b1b (base-1-billion) representation is about the most
efficient you can make it.
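For anyone unfamiliar with it, the packing is simple: nine decimal digits
per 32-bit limb. A rough Python sketch of the idea (names hypothetical,
not musl's actual code):

```python
def digits_to_b1b(digits):
    # Pack a decimal digit string into base-10**9 limbs, most
    # significant limb first. The final group is zero-padded on the
    # right; a real implementation folds that scaling by a power of
    # ten into the decimal exponent it tracks separately.
    return [int(digits[i:i + 9].ljust(9, "0"))
            for i in range(0, len(digits), 9)]

assert digits_to_b1b("123456789987654321") == [123456789, 987654321]
assert digits_to_b1b("12345678901") == [123456789, 10000000]
```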

Further, musl's floatscan raises FE_INEXACT exactly when the
conversion from a decimal string to binary floating point is inexact.
I don't think this is a requirement but it's a desirable property and
comes nearly free if you do the above right.
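The property can be checked after the fact by re-expanding the rounded
result exactly and comparing it to the input; a Python sketch of that
check (musl sets the flag during conversion, of course):

```python
from decimal import Decimal

def decimal_to_double_inexact(s):
    # Decimal(float_value) expands the rounded double exactly, so the
    # conversion was inexact iff that expansion differs from the input.
    return Decimal(float(s)) != Decimal(s)

assert decimal_to_double_inexact("0.1")      # repeating binary fraction
assert not decimal_to_double_inexact("0.5")  # exact power of two
assert not decimal_to_double_inexact("3")    # small integer, exact
```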

