Date: Mon, 9 Jul 2012 11:18:00 +0800
From: myrice <>
Subject: Re: About very high memory usage

On Mon, Jul 9, 2012 at 10:14 AM, Solar Designer <> wrote:
> 42K c/s?  Is that like a thousand times slower than normal, and a lot
> slower than a CPU as well?  If so, that's not really usable.  Maybe you
> have some loops that go until your max_keys_per_crypt regardless of the
> actual number of keys?  If so, can you adjust those to go until the
> actual number without incurring much performance impact on supporting
> this variable count?

No, I have already reduced the actual number of keys: I set the count in
crypt_all() to the actual number of keys being crypted. I think a GPU
cannot efficiently handle small batches, especially for fast hashes -
the "faster" the hash, the slower it runs relative to its potential,
because launch overhead and CPU-side work dominate. xsha512-cuda spends
58% of its time on the GPU and the rest on the CPU. Taking
raw-sha256-cuda as an example, I believe it spends even less of its time
on the GPU than xsha512-cuda does, so the relative performance hit is
even larger.
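To illustrate the point about the variable count (names here are hypothetical, not the actual format interface): the inner loop should be bounded by the number of keys actually queued, not by the compile-time maximum, so a small batch does not pay for the full buffer.

```c
#include <assert.h>

#define MAX_KEYS_PER_CRYPT 1024

/* Hypothetical sketch: bound the work in crypt_all() by the actual
 * number of queued keys ("count") instead of always iterating to
 * MAX_KEYS_PER_CRYPT.  hashes_computed stands in for real hashing. */
static int hashes_computed;

static void crypt_all(int count)
{
    hashes_computed = 0;
    /* Only 'count' keys are live; looping to MAX_KEYS_PER_CRYPT
     * would waste (MAX_KEYS_PER_CRYPT - count) hash computations. */
    for (int i = 0; i < count; i++)
        hashes_computed++;  /* stand-in for computing one hash */
}
```

On a GPU, though, a small count still incurs the fixed kernel-launch and transfer overhead, which is why the savings are limited for fast hashes.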

> Also, 200 MB for just a thousand of hashes (and salts) is still a lot,
> considering that it is reasonable to run John on millions of hashes at
> once (this is often being done).
> Maybe we need to declare single crack mode as unusable with many GPU
> formats, but supporting lower/variable keys_per_crypt is desirable
> anyway (e.g., if running with a small wordlist).

Now, one idea is to reduce the minimum keys_per_crypt to 1 and use the
CPU for single mode. Or we could dynamically detect the number of keys:
if it is smaller than some threshold - below which the CPU is more
efficient - we use the CPU instead.

