Date: Sat, 9 May 2015 10:26:33 +0300
From: Aleksey Cherepanov <lyosha@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: get_binary_*() and get_hash_*() methods

Solar,

On Thu, May 07, 2015 at 05:39:45PM +0300, Solar Designer wrote:
> > Is the db intended to be checked by crypt_all()?
> 
> For fast hashes and/or with delegation to another device (such as a
> GPU), it may be.
> 
> You can see a dirty hack like this here:
> 
> git show 9a6f4f6f69903763e664f03d2adee97486eca9de DES_bs_b.c
> 
> This patch served to move the bitmap and hash table lookups into the
> same OpenMP parallel region that computes the hashes.

The patch might be improved at this bit of code:

  salt->index(index);

where salt->index is a pointer to one of the get_hash_*() functions.

You could let the compiler inline the function. I guess that would
take a top-level switch on salt->hash_size selecting between 7
variants of the code, though that may put pressure on the code cache.
Oh, it would be easier to just use DES_bs_get_hash(index, count)
there.
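
To make that concrete, here is a rough sketch of the top-level switch
(with made-up stand-in names, not the actual DES_bs_b.c code): hoisting
the switch above the loop gives each copy of the loop a statically
known extractor that the compiler can inline, instead of the indirect
call through salt->index:

  /* stand-ins for get_hash_0() .. get_hash_6(); the real ones read
     the crypt_all() output for the given candidate index */
  static inline int get_hash_0_stub(int index) { return index & 0xF; }
  static inline int get_hash_6_stub(int index) { return index & 0x7FFFFFF; }

  static void lookup_all(int count, int hash_size)
  {
      int index;

      /* the switch is hoisted above the loop, so each loop body calls
         a statically known, inlinable extractor */
      switch (hash_size) {
      case 0:
          for (index = 0; index < count; index++) {
              int h = get_hash_0_stub(index);
              /* ... bitmap and hash table lookup with h ... */
              (void)h;
          }
          break;
      case 6:
          for (index = 0; index < count; index++) {
              int h = get_hash_6_stub(index);
              /* ... same lookup with a wider h ... */
              (void)h;
          }
          break;
      /* ... cases 1..5 analogous, 7 variants in total ... */
      }
  }

Duplicating the loop body for all 7 cases is exactly where the code
cache pressure would come from, which is why DES_bs_get_hash(index,
count) may be the simpler option.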

Putting the code there also makes it possible to "transpose" cmp_all()
and replace the bit masks, i.e. we make a function that checks a result
against the loaded hashes, which we store as bit vectors: there are
packed bit vectors for the hashes, and we traverse them comparing bit
by bit. Pros: it is not very hard to extract one bit of a result from
crypt_all()'s output there. Cons: it appears to have linear complexity
in the number of loaded hashes, so it should be slow when many hashes
are loaded. Still, it may be faster than the bit masks when the number
of hashes is smaller than the number of bits in a vector. That may be
the case for salted hashes (though I guess those should be slow enough
that it is not worth putting the code there at all). Did you try
something like that?
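
To make the idea concrete, here is a rough sketch of how I picture it
(the names, the 26-bit count and the 32-hashes-per-word limit are made
up for illustration, this is not JtR code): the loaded hashes are
stored transposed, one word of bits per hash bit position, and a
single computed result, whose bits are cheap to extract from the
bit-sliced crypt_all() output, is checked against all of them at once:

  #include <stdint.h>

  #define CMP_BITS 26 /* illustrative number of compared hash bits */

  /* hash_planes[b] holds bit b of every loaded hash, one bit per
     hash, so this sketch handles up to 32 loaded hashes; loaded_mask
     has a set bit for each hash that is actually loaded */
  static uint32_t cmp_one_vs_loaded(uint32_t result,
                                    const uint32_t hash_planes[CMP_BITS],
                                    uint32_t loaded_mask)
  {
      uint32_t match = loaded_mask; /* every loaded hash still matches */
      int b;

      for (b = 0; b < CMP_BITS && match; b++) {
          /* keep only the loaded hashes whose bit b equals the
             result's bit b */
          uint32_t bit = (result >> b) & 1;
          match &= bit ? hash_planes[b] : ~hash_planes[b];
      }
      return match; /* nonzero => the result matches a loaded hash */
  }

With more than 32 loaded hashes you would need several such words per
bit position, which is where the linear cost in the number of hashes
comes from.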

Thanks!

-- 
Regards,
Aleksey Cherepanov
