Date: Thu, 12 Jul 2012 13:50:16 +0800
From: myrice <qqlddg@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: Get hash number in cmp_all()

Solar -

On Tue, Jul 10, 2012 at 8:12 PM, Solar Designer <solar@...nwall.com> wrote:
> myrice -
>
>> I have two arrays: ld_hash and ld_salt.
>> Both are sized to the number of hashes.  If two hashes share the same
>> salt, that salt is stored twice.  More specifically, in cmp_all(), for
>> each binary, ld_hash[index] = binary and ld_salt[index] = current_salt,
>> so I can check the salt after crypt_all() to determine which hashes to
>> compare.
>
> Sounds inconvenient and inefficient.  John already groups the hashes by
> salt, and you somehow undo that grouping?
>

John uses linked lists to store the hashes and salts.  However, a
linked list is difficult to copy to the GPU and inefficient to traverse
there, so I un-group the lists into flat arrays.
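To illustrate the un-grouping, here is a minimal sketch of flattening
per-salt hash lists into the two parallel arrays I described.  The
struct names are illustrative stand-ins, not John's real db_salt /
db_password layout, and the unsigned int fields stand in for the actual
binary and salt data:

```c
#include <stdlib.h>

/* Illustrative stand-ins for John's salt-grouped hash lists. */
struct hash_node {
    unsigned int binary;        /* stand-in for the hash binary */
    struct hash_node *next;
};

struct salt_node {
    unsigned int salt;          /* stand-in for the salt value */
    struct hash_node *hashes;   /* hashes grouped under this salt */
    struct salt_node *next;
};

/* Un-group the per-salt lists into two parallel arrays so that
 * ld_hash[i] and ld_salt[i] describe one loaded hash; the arrays can
 * then be copied to the GPU in one transfer.  Returns the number of
 * hashes written. */
static size_t flatten(struct salt_node *salts,
                      unsigned int *ld_hash, unsigned int *ld_salt)
{
    size_t i = 0;
    for (struct salt_node *s = salts; s; s = s->next)
        for (struct hash_node *h = s->hashes; h; h = h->next) {
            ld_hash[i] = h->binary;
            ld_salt[i] = s->salt;   /* salt duplicated per hash */
            i++;
        }
    return i;
}
```

This is exactly the duplication I mentioned: a salt shared by N hashes
is stored N times in ld_salt.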

>> >> 1. Reallocate the memory when hash number exceeds the pre-defined
>> >> size. Is it a good idea to add mem_realloc() to memory.c?
>> >
>> > I am fine with that, but I don't see why you want to be allocating a
>> > copy of the hashes on CPU when you need that copy on GPU.  Is this just
>> > an intermediate step, before you transfer the hashes to GPU?
>>
>> Yes.  I don't want to memcpy to the GPU one small hash at a time.
>
> Why not, given that you only do it once per hash when you've just
> started cracking them?  Well, I understand that it could slow down
> startup significantly when you have millions of hashes loaded, but in
> that case the increase in memory consumption (even if temporary) may be
> problematic (on the other hand, many gigabytes of hashes won't fit in
> GPU memory anyway).  So maybe you'd need to transfer hashes in
> reasonably sized groups, then - e.g., 1000 at a time.
>
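On the mem_realloc() idea quoted above, here is a sketch of what I had
in mind for memory.c, modeled on the abort-on-failure style of the
existing allocation helpers.  This is only an assumption of how it
could look; the actual names and error handling in memory.c may differ:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical mem_realloc() for memory.c: like realloc(), but abort
 * on allocation failure so callers need no error handling. */
void *mem_realloc(void *old_ptr, size_t size)
{
    void *res;

    if (!size) {
        /* Treat a zero-size request as a free, returning NULL. */
        free(old_ptr);
        return NULL;
    }

    if (!(res = realloc(old_ptr, size))) {
        fprintf(stderr, "mem_realloc(): out of memory\n");
        exit(1);
    }

    return res;
}
```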

1000 seems large to me.  What I meant is that I don't want to transfer
hashes to the GPU one at a time; that would be inefficient.
I have a question here: if we have many gigabytes of hashes, how does
john store them?  Or does john first read a reasonable number of
hashes, crack them, and then move on to the next chunk?
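For the chunked transfer you suggested, something like the following is
what I would try.  It is a sketch only: memcpy() stands in for the real
host-to-device transfer (cudaMemcpy() with cudaMemcpyHostToDevice on
CUDA), and the 1000-hash chunk size is just your suggested figure:

```c
#include <stddef.h>
#include <string.h>

#define CHUNK 1000  /* hashes per transfer, per your suggestion */

/* Copy 'count' hashes host->device in CHUNK-sized batches rather than
 * one at a time.  memcpy() is a placeholder for the real transfer call
 * (e.g. cudaMemcpy(..., cudaMemcpyHostToDevice)).  Returns the number
 * of transfers issued. */
static size_t copy_hashes_chunked(unsigned int *device_hash,
                                  const unsigned int *host_hash,
                                  size_t count)
{
    size_t transfers = 0;
    for (size_t off = 0; off < count; off += CHUNK) {
        /* Last batch may be smaller than CHUNK. */
        size_t n = count - off < CHUNK ? count - off : CHUNK;
        memcpy(device_hash + off, host_hash + off,
               n * sizeof(*host_hash));
        transfers++;
    }
    return transfers;
}
```

With this, 2500 loaded hashes would take 3 transfers instead of 2500.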

Thanks
myrice
