Date: Thu, 14 Jun 2012 23:43:49 +0200
From: magnum <>
Subject: Re: SHA-1 (was: notes on sharding the incremental search space)

On 2012-06-14 21:38, magnum wrote:
> On 2012-06-14 16:32, Frank Dittrich wrote:
>> On Thu, Jun 14, 2012 at 6:19 PM, Tavis Ormandy<>
>> wrote:
>>> p.s. I also have a sha-1 implementation that's a little faster than the
>>> jumbo version, would this be the right list to send that to? Is there a
>>> jumbo cvs repo I can checkout to patch against?
>> Probably the latest git version is considerably faster than the last
>> jumbo version.
> I was going to say "not much" but I just checked raw-sha1 and apparently
> it's 33% faster. I'm not sure how that happened, from memory the code
> changes only boosted it by like 10-12% (and this CPU does not support
> XOP or other stuff that Solar added optimisations for).

I tracked this down: I was remembering correctly but that was not 
compared to Jumbo-5 but to 80x4 [1] magnum-jumbo. The sha-1 intrinsics 
changes by Jim made about half of those 33%, and my optimisations of 
set_key() in the sha1 formats did the rest. I suppose Tavis improved 
the intrinsics code, so his changes may well be worth looking at and 
comparing with the 16x4 code Jim made. Even if Jim's code turns out to be 
faster, there may be some bits and pieces we can use.

[1] The 80x4 vs 16x4 refers to the SSE-2 key buffer. In the older code 
by Simon, it's 80 bytes per candidate where 16 bytes are just scratch 
space. In Jim's code, it's 64 bytes just like MD4 and MD5, and the 
scratch space is on stack.

