Date: Wed, 24 Jan 2018 18:32:55 -0500
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: Updating Unicode support

On Wed, Jan 24, 2018 at 02:53:18PM -0800, Eric Pruitt wrote:
> On Wed, Jan 24, 2018 at 02:25:06PM -0800, Eric Pruitt wrote:
> > On Wed, Jan 24, 2018 at 04:48:53PM -0500, Rich Felker wrote:
> > > > I updated my copy of musl to 1.1.18 then recompiled it with and without
> > > > my utf8proc changes using GCC 6.3.0 "-O3" targeting Linux 4.9.0 /
> > > > x86_64:
> > > >
> > > > - Original implementation: 2,762,774B (musl-1.1.18/lib/libc.a)
> > > > - utf8proc implementation: 3,055,954B (musl-1.1.18/lib/libc.a)
> > > > - The utf8proc implementation is ~11% larger. I didn't do any
> > > >   performance comparisons.
> > >
> > > You're comparing the whole library, not character tables. If you
> > > compare against all of ctype, it's a 15x size increase. If you compare
> > > against just wcwidth, it's a 69x increase.
> >
> > That was intentional. I have no clue what the common case is for other
> > people that use musl, but most applications **I** use make use of
> > various parts of musl, so I did the comparison on the library as a
> > whole.
> 
> If the size of utf8proc tables is a problem, I'm not sure how you'd go
> about implementing UCA without them in an efficient manner.

I don't think this is actually a productive discussion because the
metrics we're looking at aren't really meaningful. The libc.a size
doesn't tell you anything about how much of the code actually gets
linked when you use wcwidth, etc. Sorry, I should have noticed that
earlier.
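
A more telling comparison, for what it's worth, would be to statically
link a trivial program that just calls wcwidth against each libc.a and
compare the resulting binaries. Something like this untested sketch,
built e.g. with musl-gcc -static -Os:

#define _XOPEN_SOURCE 700  /* for wcwidth */
#include <stdio.h>
#include <wchar.h>

int main(void)
{
        /* Reference wcwidth so the linker has to pull in its tables. */
        printf("%d\n", wcwidth(L'A'));
        return 0;
}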

> Part of the
> UCA requires normalizing the Unicode strings and also needs character
> property data to determine what sequence of characters in one string is
> compared to a sequence of characters in another string. Perhaps you
> could compromise by simply ignoring certain characters and not doing
> normalization at all.

There's currently nothing in libc that depends on any sort of
normalization, but IDN support (which has a patch pending) is related
and may need normalization to meet user expectations. If so, doing
normalization efficiently is a relevant problem for musl.
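
To make the user-expectation point concrete with a minimal sketch: the
precomposed and decomposed spellings of the same visible string are
different byte sequences, so a naive bytewise comparison calls them
unequal, and that's exactly the gap normalization is meant to close:

#include <stdio.h>
#include <string.h>

int main(void)
{
        /* Same visible string "café", two Unicode encodings:
           NFC uses precomposed U+00E9, NFD uses U+0065 U+0301. */
        const char *nfc = "caf\xc3\xa9";
        const char *nfd = "cafe\xcc\x81";

        /* Prints "no" */
        printf("bytewise equal: %s\n", !strcmp(nfc, nfd) ? "yes" : "no");
        return 0;
}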

I forget if UCA actually needs normalization or not. I seem to
remember working out that you could just expand the collation tables
(mechanically at locale generation time) to account for variations in
composed/decomposed forms so that no normalization phase would be
necessary at runtime, but I may have been mistaken. It's been a long
time since I looked at it.
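
The idea, as a purely hypothetical sketch (nothing like musl's actual
data layout): at locale generation time every known spelling of a
collation element, precomposed and decomposed alike, gets its own
entry mapping to the same weight, so runtime lookup is just a
longest-match over bytes with no normalization pass:

#include <stdio.h>
#include <string.h>

/* Hypothetical expanded collation entries: both spellings of é map
   to the same weight, generated mechanically at locale-gen time. */
struct entry { const char *key; unsigned weight; };

static const struct entry table[] = {
        { "e",         0x0065 },
        { "\xc3\xa9",  0x0066 },  /* U+00E9, precomposed */
        { "e\xcc\x81", 0x0066 },  /* U+0065 U+0301, decomposed */
};

/* Longest match against the expanded table; returns bytes consumed. */
static size_t lookup(const char *s, unsigned *weight)
{
        size_t best = 0;
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
                size_t len = strlen(table[i].key);
                if (len > best && !strncmp(s, table[i].key, len)) {
                        best = len;
                        *weight = table[i].weight;
                }
        }
        return best;
}

int main(void)
{
        unsigned w1, w2;
        lookup("\xc3\xa9", &w1);   /* precomposed input */
        lookup("e\xcc\x81", &w2);  /* decomposed input */
        printf("same weight: %s\n", w1 == w2 ? "yes" : "no");  /* "yes" */
        return 0;
}

Whether that expansion stays acceptably small for the full UCA data is
of course the open question.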

> Since the utf8proc maintainer seems receptive to my proposed change, I'm
> going to implement the collation feature in utf8proc, and if you decide

For sure.

> that utf8proc is worth the bloat, you'll get collation logic for "free."

That's not even the question, because we can't use outside libraries
directly. We could import code, though in the past that's been a
really bad idea (see TRE), or we could import the ideas/data
structures behind the code. It was probably a mistake on my part to
bring up the size/efficiency topic to begin with, since it's not the
core point, but I did want to emphasize that the implementations we
have were designed not just to be simple and fairly small, but to be
really small in the sense that it's hard to make anything smaller.

Rich
