|
|
Message-ID: <alpine.LNX.2.20.13.2103091649540.16269@monopod.intra.ispras.ru>
Date: Tue, 9 Mar 2021 17:13:39 +0300 (MSK)
From: Alexander Monakov <amonakov@...ras.ru>
To: musl@...ts.openwall.com
cc: Érico Nogueira <ericonr@...root.org>
Subject: Re: [PATCH v2] add qsort_r.
> On Tue, Mar 09, 2021 at 12:11:37PM +0300, Alexander Monakov wrote:
> > On Tue, 9 Mar 2021, Érico Nogueira wrote:
> >
> > > since most discussion around the addition of this function has centered
> > > around the possible code duplication it requires or that qsort would
> > > become much slower if implemented as a wrapper around qsort_r
> >
> > How much is "much slower", did anyone provide figures to support this claim?
> > The extra cost that a wrapper brings is either one indirect jump instruction,
> > or one trivially-predictable conditional branch per one comparator invocation.
>
> Quite a bit I'd expect. Each call to cmp would involve an extra level
> of call wrapper. With full IPA/inlining it could be optimized out, but
> only by making a non-_r copy of all the qsort code in the process at
> optimize time.
>
> > Constant factor in musl qsort is quite high, I'd be surprised if the extra
> > overhead from one additional branch is even possible to measure.
>
> I don't think it's just a branch. It's a call layer. qsort_r internals
> with cmp=wrapper_cmp, ctx=real_cmp -> wrapper_cmp(x, y, real_cmp) ->
> real_cmp(x, y). But I'm not opposed to looking at some numbers if you
> think it might not matter. Maybe because it's a tail call it does
> collapse to essentially just a branch in terms of cost..
First of all, it's not necessarily a "call layer".
You could change the cmp call site so that a NULL comparator means the
non-_r version was called and the original comparator address is in ctx:
typedef int (*cmpfun)(const void *, const void *);           /* 2-arg comparator */
typedef int (*cmpfun_r)(const void *, const void *, void *); /* 3-arg comparator */

static inline int call_cmp(void *v1, void *v2, void *ctx, cmpfun_r cmp)
{
	/* NULL cmp: plain qsort was called, ctx holds the 2-arg comparator */
	if (cmp)
		return cmp(v1, v2, ctx);
	return ((cmpfun)ctx)(v1, v2);
}
This is just a conditional branch at the call site after trivial inlining.
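To make that concrete, here is a sketch of how the two entry points could feed
call_cmp above; the __qsort_impl name and prototypes are illustrative only, not
anything the patch defines:

#include <stddef.h>

/* hypothetical shared core: the sorting loops live here and use
   call_cmp(a, b, ctx, cmp) for every comparison */
void __qsort_impl(void *base, size_t nel, size_t width, cmpfun_r cmp, void *ctx);

void qsort_r(void *base, size_t nel, size_t width, cmpfun_r cmp, void *ctx)
{
	__qsort_impl(base, nel, width, cmp, ctx);
}

void qsort(void *base, size_t nel, size_t width, cmpfun cmp)
{
	/* NULL comparator: call_cmp will invoke ctx as the 2-arg comparator */
	__qsort_impl(base, nel, width, 0, (void *)cmp);
}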
Second, if you make a "conventional" wrapper, then on popular architectures
the wrapper compiles to a single tail-call instruction (the powerpc64 ABI
demonstrates its insanity here):
static int wrapper_cmp(const void *v1, const void *v2, void *ctx)
{
	return ((cmpfun)ctx)(v1, v2);
}
Some examples:
amd64:   jmp *%rdx
i386:    jmp *12(%esp)
arm:     bx r2
aarch64: br x2
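With such a wrapper, qsort itself would then reduce to a single call into the
three-argument core. A minimal sketch, reusing wrapper_cmp from above and
assuming the shared implementation is called __qsort_r (an illustrative name,
not something the patch defines):

#include <stddef.h>

/* hypothetical three-argument core that does the actual sorting */
void __qsort_r(void *base, size_t nel, size_t width,
               int (*cmp)(const void *, const void *, void *), void *ctx);

void qsort(void *base, size_t nel, size_t width,
           int (*cmp)(const void *, const void *))
{
	/* the caller's two-argument comparator travels in ctx;
	   wrapper_cmp tail-calls it */
	__qsort_r(base, nel, width, wrapper_cmp, (void *)cmp);
}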
How is this not obvious?
Alexander