Date: Mon, 29 Jul 2013 05:42:19 +0400
From: Solar Designer <>
Subject: Re: Next salt for salted hashes ?

Hi Sayantan,

On Mon, Jul 29, 2013 at 01:40:15AM +0530, Sayantan Datta wrote:
> I'm wondering what is the proper way of getting the next salt which will be
> actually loaded next time when crypt_all is called ?

"Officially", your format's code can't know that.  In practice, though,
salt->next will usually be it, so if you really want to, you may
speculatively assume this and have some fallback code for the case when
your guess turns out to be wrong.

> For example: if we have this list say salt0 -> salt1 -> salt2 -> salt3 ->
> Observation I: When the current salt is say salt0 then the next salt(salt1)
> can be obtained by doing salt->next in crypt_all but when crypt all is
> called for actual next time, it is still called again with salt0. This
> happens when the cracking starts.

Are you trying to say that although salt->next is usually what you need,
sometimes the same salt is repeated (even though salt->next is non-NULL)?

> Observation II: After salt3 , salt0 is called but salt->next is NULL. How
> do I move the pointer to beginning of the uncracked salt list which is not
> necessarily salt0 ?

No reliable way to do that, but again you may speculate that it's salt0
and include fallback code.

> Is there any way I can predict what is the actual next salt that will be
> called in next crypt_all ? I can make a dirty hack (within the format
> itself) to get around both these problems but it is better if I have a
> clean procedure for getting the next uncracked salt.

You can make a pretty good guess about the next salt.  Just not a
reliable guess.  So do include fallback code if you do that.

> I need this for async transfer of salt and its associated loaded hashes

My expectation was that format authors would cache the salts and their
associated hashes in the GPU card's (or similar) global memory the first
time these salts are seen, and then reuse that cached data (yes, maybe
updating it to remove the cracked hashes, although this is not
absolutely required).  With that, you would not need async transfers.
You'd reach full speed starting with the second set of candidate
passwords being tested.

> and bitmapas.

This is trickier, since these can get pretty large.  Arguably, though,
the combined size of the bitmaps may still be affordable for you to keep
all of them in the GPU's global memory.  JtR does the same thing on host
now.  With salted hashes, typically either the hash count for a given
salt is low (so you don't need global memory bitmaps), or the number of
salts with large hash counts is low.  While you may have e.g. 100 salts
with 1M hashes each, so that you'd run out of GPU global memory for the
bitmaps, this is a relatively obscure case.
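
A back-of-the-envelope sizing helper, to make the trade-off concrete.  The
bits-per-hash ratio is an assumption here (real formats would tune it for
an acceptable false-positive rate), not a JtR constant:

```c
/* Total bitmap memory across all salts, in bytes.  bits_per_hash is an
 * assumed tuning parameter, not a fixed JtR value. */
static unsigned long long bitmap_bytes(unsigned long long salts,
                                       unsigned long long hashes_per_salt,
                                       unsigned long long bits_per_hash)
{
    return salts * hashes_per_salt * bits_per_hash / 8;
}
```

With the numbers above and an assumed 32 bits per hash, 100 salts times 1M
hashes comes to 100 * 1000000 * 32 / 8 = 400 MB, which already crowds the
global memory of typical cards.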

Also, async transfers might not fully solve the problem.  What if
transferring a 512 MiB bitmap takes longer than executing the kernel
does?  (Besides, you'd still need memory for 2 such bitmaps, since the
previous one may still be in use.)  Sure, you could increase GWS such
that each kernel invocation takes long enough and such that each bitmap
gets enough use before it's discarded and replaced with the next salt's.
But I'm not sure if it's the way to go.
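
To see why hiding the transfer is hard, here's a rough estimate.  The
8 GB/s figure is an assumed practical PCIe bandwidth (on the order of
PCIe 2.0 x16), not a measured number:

```c
/* Time to move a buffer over the host-GPU link, in milliseconds.
 * bytes_per_sec is an assumed effective bandwidth, not a measurement. */
static double transfer_ms(double bytes, double bytes_per_sec)
{
    return bytes / bytes_per_sec * 1000.0;
}
```

A 512 MiB bitmap at an assumed 8 GB/s takes roughly 67 ms, so each kernel
invocation would have to run longer than that per salt just to break even,
before accounting for the second in-flight bitmap's memory.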

A drawback of transferring large amounts of data to the GPU is that we
become dependent on PCIe bandwidth (lane count and such) and that the
system's power consumption and heat dissipation increase (slightly).
With on-GPU hash comparisons, we're trying to avoid such reliance on
host-GPU transfers.

