
Date: Wed, 13 Sep 2017 14:10:10 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: Wrong info in libc comparison

On Wed, Sep 13, 2017 at 03:51:54PM +0200, Markus Wichmann wrote:
> Hello,
>
> there's a mistake on the libc comparison page
> http://www.etalabs.net/compare_libcs.html: namely, it states that glibc
> uses introsort as its sorting algorithm. It doesn't. Glibc uses a
> bog-standard merge sort as its main sorting algorithm. A major part of
> the implementation is actually just devoted to optimized copying, and
> for arrays of large objects it uses an interesting way to sort them
> indirectly (i.e., it allocates an array of references, sorts the
> references, then uses a clever algorithm to get from sorted references
> to a sorted array). But it's all just a standard merge sort.
>
> However, merge sort on arrays requires a linear amount of scratch
> space, so this merge sort has to allocate memory. Memory allocation is
> allowed to fail, but sorting isn't, so, as a fallback, in case the
> allocation fails (or would use more than half the physical memory, for
> some reason), it falls back to quicksort. This quicksort is implemented
> with a really funky scheme for an explicit stack (i.e., while I'd use
>
>     push_total_problem();
>     while (stack_not_empty()) {
>         pop_subprob();
>         if (subprob_worth_bothering_with()) {
>             sort_partition();
>             push_larger_subprob();
>             push_smaller_subprob();
>         }
>     }
>
> they do something more like:
>
>     push_pseudo_problem();
>     while (stack_not_empty()) {
>         if (subprob_worth_bothering_with()) {
>             sort_partition();
>             figure_out_next_subproblem();
>             then_maybe_push_or_pop_stuff();
>         }
>     }
>
> ), a median-of-three pivot selection, two-way partitioning (why
> couldn't you be perfect for me?), and a minimum partition size of 4,
> necessitating an insertion sort stage afterwards.
>
> So, yeah, no introsort in sight.
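[The fallback quicksort Markus describes (an explicit stack, median-of-three
pivot selection, two-way partitioning, a minimum partition size of 4, and a
final insertion-sort pass) can be sketched in the first of the two schemes
above. This is an illustrative toy for int arrays under those assumptions,
not glibc's actual code:]

```c
#include <assert.h>
#include <stddef.h>

#define CUTOFF 4  /* leave partitions this small for the final insertion sort */

static void swap_i(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Median-of-three: order v[lo], v[mid], v[hi]; the median ends up at mid. */
static ptrdiff_t median3(int *v, ptrdiff_t lo, ptrdiff_t hi)
{
    ptrdiff_t mid = lo + (hi - lo) / 2;
    if (v[mid] < v[lo]) swap_i(&v[mid], &v[lo]);
    if (v[hi]  < v[lo]) swap_i(&v[hi],  &v[lo]);
    if (v[hi] < v[mid]) swap_i(&v[hi], &v[mid]);
    return mid;
}

void stack_quicksort(int *v, ptrdiff_t n)
{
    struct { ptrdiff_t lo, hi; } stack[64], *sp = stack;

    /* push_total_problem() */
    if (n > 1) { sp->lo = 0; sp->hi = n - 1; sp++; }

    while (sp > stack) {                      /* pop_subprob() */
        ptrdiff_t lo = (--sp)->lo, hi = sp->hi;
        while (hi - lo + 1 > CUTOFF) {        /* subprob_worth_bothering_with() */
            int pivot = v[median3(v, lo, hi)];
            ptrdiff_t i = lo, j = hi;
            while (i <= j) {                  /* two-way partitioning */
                while (v[i] < pivot) i++;
                while (v[j] > pivot) j--;
                if (i <= j) swap_i(&v[i++], &v[j--]);
            }
            /* push_larger_subprob(); keep looping on the smaller one,
             * which bounds the stack depth to O(log n) */
            if (j - lo > hi - i) { sp->lo = lo; sp->hi = j; sp++; lo = i; }
            else                 { sp->lo = i;  sp->hi = hi; sp++; hi = j; }
        }
    }

    /* insertion sort finishes the partitions left below CUTOFF */
    for (ptrdiff_t i = 1; i < n; i++) {
        int key = v[i];
        ptrdiff_t j = i - 1;
        while (j >= 0 && v[j] > key) { v[j + 1] = v[j]; j--; }
        v[j + 1] = key;
    }
}
```

[Pushing the larger subproblem and iterating on the smaller one is the
standard way to keep the explicit stack logarithmically deep, which is
why a fixed 64-entry stack suffices for any realistic array size.]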
> Introsort would be merge sort on large
> arrays, then quicksort on smaller partitions, and finally insertion
> sort for the smallest partitions.

I'm not sure we agree on what introsort means -- normally I take it to
mean doing an O(n²) algorithm with good "typical case" performance to
begin with, but switching to an O(n log n) algorithm with a worse
constant factor as soon as it detects a risk that time will grow
quadratically. Normally this is something like starting with quicksort
and possibly switching to heapsort, and my understanding at the time
was that glibc was doing that or something similar, and AFAIK it still
is in the general case where there's insufficient memory for a merge
sort. Does that sound incorrect?

Rich
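[The notion of introsort Rich describes -- quicksort that bails out to
heapsort once the recursion depth suggests quadratic blowup -- can be
sketched like this. A toy for int arrays with an assumed 2*log2(n) depth
limit, not any libc's actual code:]

```c
#include <assert.h>
#include <stddef.h>

static void swap_i(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sift v[i] down within the max-heap v[0..n-1]. */
static void sift_down(int *v, ptrdiff_t n, ptrdiff_t i)
{
    for (;;) {
        ptrdiff_t l = 2 * i + 1, r = l + 1, big = i;
        if (l < n && v[l] > v[big]) big = l;
        if (r < n && v[r] > v[big]) big = r;
        if (big == i) return;
        swap_i(&v[i], &v[big]);
        i = big;
    }
}

/* O(n log n) fallback with a worse constant factor than quicksort. */
static void heapsort_ints(int *v, ptrdiff_t n)
{
    for (ptrdiff_t i = n / 2 - 1; i >= 0; i--) sift_down(v, n, i);
    for (ptrdiff_t i = n - 1; i > 0; i--) {
        swap_i(&v[0], &v[i]);
        sift_down(v, i, 0);
    }
}

static void introsort_rec(int *v, ptrdiff_t n, int depth)
{
    while (n > 1) {
        /* depth budget exhausted: risk of quadratic time, switch algorithm */
        if (depth-- == 0) { heapsort_ints(v, n); return; }
        int pivot = v[n / 2];
        ptrdiff_t i = 0, j = n - 1;
        while (i <= j) {                      /* two-way partition */
            while (v[i] < pivot) i++;
            while (v[j] > pivot) j--;
            if (i <= j) swap_i(&v[i++], &v[j--]);
        }
        introsort_rec(v, j + 1, depth);       /* left part */
        v += i; n -= i;                       /* tail-iterate on the right */
    }
}

void introsort(int *v, ptrdiff_t n)
{
    int depth = 0;
    for (ptrdiff_t m = n; m > 1; m >>= 1) depth++;  /* ~log2(n) */
    introsort_rec(v, n, 2 * depth);
}
```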