Date: Tue, 27 Aug 2013 21:24:33 -0400
From: Rich Felker <>
Subject: Re: Optimized C memset [v2]

On Wed, Aug 28, 2013 at 12:05:43PM +1200, Andre Renaud wrote:
> Hi Rich,
> On 28 August 2013 04:22, Rich Felker <> wrote:
> > Here's version 2 (filename version 6, in honor of glibc ;) of the
> > memset code. I fixed a bug in the logic for coverage of the tail (the
> > part past what's covered by the loop) for some values of n and
> > alignments, and cleaned up the __GNUC__ usage a bit to use less
> > #ifdeffery. The remaining test at the top for the __GNUC__ version is
> > ugly, I admit, and should possibly just be removed and replaced by a
> > configure check to add -D__may_alias__= to the CFLAGS if the compiler
> > defines __GNUC__ but does not recognize __attribute__((__may_alias__))
> > -- opinions on this?
> Can you explain the algorithm a bit - I can't entirely follow the use
> of negation/masking, but it looks like at the end you're doing a loop
> of 64-bit aligned writes, but I don't see how it can work if the tail
> end ends in something that isn't 64-bit aligned? Is this assuming that
> unaligned writes will work ok?

See the version I committed a couple hours ago. It has comments added.
The basic thing you're missing is that the code before the loop fills
from both the beginning and the end, not just the beginning. This
allows for a really effective O(log n) branch strategy to fill n
bytes: essentially, knowing n>=k allows you to fill up to 2*k bytes:
0,1,...,k-1 and n-1,n-2,n-3,...,n-k. If n<2*k, some of these will
overlap, but it doesn't matter.
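The head/tail idea described above can be sketched as follows. This is an illustrative byte-at-a-time version, not musl's committed code (which switches to aligned word stores for the middle region): each pair of writes covers offsets from both the front and the back, so after the check `n >= k` the first k and last k bytes are filled, handling every n up to 2*k with overlapping writes simply landing on the same bytes.

```c
#include <stddef.h>

/* Sketch of the fill-from-both-ends strategy (hypothetical name,
 * not musl's actual memset): knowing n >= k lets us cover up to
 * 2*k bytes by writing bytes 0..k-1 and n-k..n-1. Overlap when
 * n < 2*k is harmless. */
static void *sketch_memset(void *dest, int c, size_t n)
{
	unsigned char *s = dest;
	unsigned char b = (unsigned char)c;

	if (!n) return dest;
	/* n >= 1: first and last byte -> covers all n <= 2 */
	s[0] = s[n-1] = b;
	if (n <= 2) return dest;
	/* n >= 3: two more from each end -> covers all n <= 6 */
	s[1] = s[2] = s[n-2] = s[n-3] = b;
	if (n <= 6) return dest;
	/* n >= 7: four more from each end -> covers all n <= 14 */
	s[3] = s[4] = s[5] = s[6] = b;
	s[n-4] = s[n-5] = s[n-6] = s[n-7] = b;
	if (n <= 14) return dest;
	/* For larger n, bytes 0..6 and n-7..n-1 are already set;
	 * the real code fills the middle with aligned word stores,
	 * here a plain byte loop stands in for it. */
	for (size_t i = 7; i + 7 < n; i++) s[i] = b;
	return dest;
}
```

Note how each branch roughly doubles the size covered, which is where the O(log n) branch count for small n comes from.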
