Date: Wed, 25 May 2011 22:40:12 -0500
From: "JFoug" <jfoug@....net>
To: <john-dev@...ts.openwall.com>
Subject: Re: BSDI poor in OMP4/OMP7 when doing real work

Would things be sped up by placing all candidates for a specific salt onto a 
single CPU/thread?  Thus if there are 4000 salts and these are spread over 4 
CPUs, then each CPU would independently be working on 1000 salts.
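
Something along these lines is what I have in mind. This is just a toy
sketch to show the work split, with made-up names and a trivial stand-in
hash, not how the real format code is structured:

#include <stdio.h>
#include <omp.h>

#define NUM_SALTS      16
#define NUM_CANDIDATES  8

/* Trivial stand-in for the real (expensive) BSDI crypt; illustration only. */
static unsigned toy_hash(const char *cand, unsigned salt)
{
    unsigned h = salt;
    while (*cand)
        h = h * 31 + (unsigned char)*cand++;
    return h;
}

int main(void)
{
    const char *candidates[NUM_CANDIDATES] = {
        "password", "123456", "letmein", "qwerty",
        "monkey", "dragon", "abc123", "trustno1" };

    /* Parallelize over salts: each thread owns a subset of the salts and
     * runs the whole candidate list against just those salts,
     * independently of the other threads. */
#pragma omp parallel for schedule(dynamic)
    for (int s = 0; s < NUM_SALTS; s++)
        for (int c = 0; c < NUM_CANDIDATES; c++) {
            unsigned h = toy_hash(candidates[c], (unsigned)s);
            (void)h; /* real code would compare h against loaded hashes here */
        }

    printf("done: %d salts spread over %d threads\n",
           NUM_SALTS, omp_get_max_threads());
    return 0;
}

Builds with gcc -fopenmp; the only point is that salts, rather than
candidates within one salt, become the unit of work handed to each thread.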

Jim.

----- Original Message ----- 
From: "Solar Designer" <solar@...nwall.com>
To: <john-dev@...ts.openwall.com>
Sent: Wednesday, May 25, 2011 3:38 PM
Subject: Re: [john-dev] BSDI poor in OMP4/OMP7 when doing real work


> On Wed, May 25, 2011 at 10:18:38PM +0200, magnum wrote:
>> What I don't understand though is why the reported c/s is so high?
>
> That's because John actually tries lots of candidate passwords - a lot
> more than it does when it is not parallelized, because in that case the
> correct passwords are detected sooner.
>
> Sequential algorithm:
>
> for each candidate {
>     for each salt {
>         hash the candidate
>         compare the hash against loaded hashes
>         remove any matching hashes
>         remove the salt if it no longer has hashes
>     }
> }
>
> Parallel algorithm:
>
> for each bunch of candidates {
>     for each salt {
>         hash the candidates
>         compare the hashes against loaded hashes
>         remove any matching hashes
>         remove the salt if it no longer has hashes
>     }
> }
>
> In your case, every bunch of candidates probably has just one correct
> password for each salt.  Yet the entire bunch is hashed.  So the larger
> the bunch is, the worse your actual speed is.  But not the c/s rate,
> because this many combinations per second are actually tested.
>
> If you don't have that many correct passwords in your candidates stream,
> then the c/s rate will reflect the actual performance much better.
>
> BTW, this is not limited to OpenMP; for many hash types, we're hashing
> more than one candidate password at a time even with single-threaded
> builds - to increase instruction level parallelism, to make use of SIMD,
> to make more efficient use of caches, to save on function call overhead.
> However, it's with OpenMP that the number of candidates to hash at once
> may become huge.  Oh, and even more so with the GPU patches.
>
> Alexander 
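
To put rough, made-up numbers on the bunch-size point above: if the correct
password for some salt happens to be the 10th candidate in the stream, the
sequential loop hashes 10 candidates against that salt and then drops it,
while with a bunch of, say, 4096 candidates the whole bunch is hashed against
the salt before the match is noticed and the salt removed.  Both counts feed
the reported c/s figure, so c/s looks unchanged even though roughly 400 times
more work was done per cracked password in that (admittedly extreme) example.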
