Date: Tue, 19 Jun 2012 15:56:31 +0200
From: Tavis Ormandy <>
Subject: Re: [patch] optional new raw sha1 implementation

On Tue, Jun 19, 2012 at 05:31:39PM +0400, Solar Designer wrote:
> On Tue, Jun 19, 2012 at 03:11:30PM +0200, Tavis Ormandy wrote:
> > I don't think so, the performance impact is indeed very small, 0.1%
> > sounds right. Still, I want that 0.1% :-)
> Understood.
> 0.1% was almost a worst case estimate.  How large is your actual
> max_keys_per_crypt?

At the moment, just 256. But I would like it to be much higher.

> Is your cmp_one() really as heavy as one SHA-1
> computation?

A little heavier, because I optimise for the cmp_all case. For cmp_one,
I have to unpack keys and so on.

>  Are you frequently running this against exactly 1 or 2
> loaded hashes (not 3 or more)?


> I am not sure if there's a way to reclaim that 0.1% without incurring it
> (or more) elsewhere.  For example, you can do full instead of partial
> hash comparisons in cmp_all(), but this might make its code slower
> (through differences in the code and maybe through worse locality of
> reference when full rather than partial binary hashes are in memory).
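The full-versus-partial tradeoff being discussed can be sketched as follows. This is a minimal standalone illustration, not code from the patch: crypt_out, MAX_KEYS and BINARY_SIZE are hypothetical stand-ins for the format's real internals.

```c
#include <stdint.h>
#include <string.h>

#define MAX_KEYS 256          /* hypothetical max_keys_per_crypt */
#define BINARY_SIZE 20        /* full SHA-1 digest size */

/* Computed digests for the current batch (filled by crypt_all()). */
static uint8_t crypt_out[MAX_KEYS][BINARY_SIZE];
static int key_count = MAX_KEYS;

/* Partial comparison: only the first 32 bits of each digest are
 * scanned, so the hot loop reads 4 bytes per candidate instead of
 * 20 -- better locality of reference, at the cost of occasional
 * false positives that cmp_one() must then weed out. */
int cmp_all_partial(const void *binary)
{
    uint32_t want;
    memcpy(&want, binary, sizeof(want));
    for (int i = 0; i < key_count; i++) {
        uint32_t have;
        memcpy(&have, crypt_out[i], sizeof(have));
        if (have == want)
            return 1;
    }
    return 0;
}

/* Full comparison: no false positives, but every iteration touches
 * the whole 20-byte digest. */
int cmp_all_full(const void *binary)
{
    for (int i = 0; i < key_count; i++)
        if (!memcmp(crypt_out[i], binary, BINARY_SIZE))
            return 1;
    return 0;
}
```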

Hmm, not sure I follow. When I return a possible match from cmp_all,
john calls cmp_one on all max_keys_per_crypt. Since these get quite
expensive for me as max_keys_per_crypt gets higher, and get_hash() ==
get_binary() is very cheap, it seems like an easy win to test that
before I really do the cmp_one test.

Does that sound wrong? It's not a big deal, but I don't see how it can
hurt.
I mean, just this in cmp_one():

if (get_hash() != get_binary())
  return 0;

return full_comparison();
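Fleshed out as a self-contained C sketch (hypothetical names throughout: crypt_out, get_hash() and full_comparison() stand in for the format's real internals, and the binary/index parameters mirror the shape of john's cmp_one() interface):

```c
#include <stdint.h>
#include <string.h>

#define BINARY_SIZE 20        /* full SHA-1 digest size */

/* Computed digests for the current batch, in whatever internal
 * layout crypt_all() produced (stand-in for the format's state). */
static uint8_t crypt_out[256][BINARY_SIZE];

/* Cheap check: the low 32 bits of the computed digest, read straight
 * out of crypt_out with no unpacking. */
static uint32_t get_hash(int index)
{
    uint32_t h;
    memcpy(&h, crypt_out[index], sizeof(h));
    return h;
}

/* Expensive check: in the real format this would unpack the keys and
 * digest from the cmp_all-optimised layout; memcmp stands in here. */
static int full_comparison(const void *binary, int index)
{
    return !memcmp(crypt_out[index], binary, BINARY_SIZE);
}

/* cmp_one() with the early-out: reject on a 4-byte mismatch before
 * paying for the full unpack-and-compare. */
int cmp_one(const void *binary, int index)
{
    uint32_t want;
    memcpy(&want, binary, sizeof(want));
    if (get_hash(index) != want)
        return 0;
    return full_comparison(binary, index);
}
```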


------------------------------------- | pgp encrypted mail preferred
