Date: Mon, 7 Nov 2011 02:12:05 +0400
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: larger SALT_HASH_SIZE

Jim, magnum, all -

JFYI, I am increasing SALT_HASH_SIZE from 0x400 to 0x1000.  I thought of
increasing it much further, but there are some reasons to pick 0x1000
specifically (see below), whereas making this size dynamic is a bit tricky.

So, those reasons are:

1. With the new bitslice DES code (post-1.7.8, currently in CVS), salt
setup is faster when the new salt differs from the old one in as few
bits as possible.  Our list of salts is not deliberately sorted, but our
use of a hash table at load time results in them being grouped by
salt_hash() value.  By picking the salt hash table size of 0x1000 (4096)
and having salt_hash() directly return the salt value (for 12-bit
salts), we effectively sort the salts list, which should result in a
slight speedup while cracking.

2. Anything larger than 0x1000 would waste memory in the fairly common
12-bit salts case.

3. 0x1000 is still likely to fit in modern CPUs' L1 data caches (that's
16 KB or 32 KB depending on pointer size).  When I picked 0x400 many
years ago, L1 data caches of 8 KB were very common, so I tried not to
exceed that.
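To illustrate reason #1, here is a minimal sketch of what a direct-return salt_hash() could look like for traditional DES crypt(3)'s 12-bit salts. The function name and the int representation of the salt are assumptions for illustration, not the exact code in the tree; the point is only that with a 0x1000-entry table the salt value is its own perfect hash, so no mixing is needed and table order matches salt order:

```c
#include <assert.h>

/* Values after the planned bump: 4096 entries, 12 bits. */
#define SALT_HASH_LOG  12
#define SALT_HASH_SIZE (1 << SALT_HASH_LOG)

/* Hypothetical sketch: with 12-bit salts and a SALT_HASH_SIZE-entry
 * table, returning the salt value directly is a perfect hash.  The
 * mask is a no-op for valid 12-bit salts but keeps the result in
 * range regardless of how the salt is stored. */
static int salt_hash(void *salt)
{
	return *(int *)salt & (SALT_HASH_SIZE - 1);
}
```

Since the loader's hash table groups salts by this value, iterating over the table then yields salts in numeric order, which minimizes bit differences between consecutive salt setups.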

So this is what I am going to do for the next release, and we might
introduce support for larger sizes later.  Except for #1 above, which
this size bump fully deals with, salt hash table size only affects load
times - not cracking speed.

In -jumbo, we can slowly proceed to revise salt_hash() functions to
actually make use of the larger salt hash table.  Whenever possible, we
should use the SALT_HASH_SIZE and SALT_HASH_LOG macros there instead of
hardcoding a specific size.
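As a hedged sketch of that revision, a format with arbitrary string salts might fold salt bytes down through the macros rather than masking with a hardcoded 0x3FF, so it automatically benefits from this (and any future) size bump. The function body below is illustrative, not taken from any particular -jumbo format:

```c
/* Values after the planned bump: 4096 entries, 12 bits. */
#define SALT_HASH_LOG  12
#define SALT_HASH_SIZE (1 << SALT_HASH_LOG)

/* Hypothetical example for a string-salt format: mix all salt bytes
 * into SALT_HASH_LOG bits, using the macros instead of a hardcoded
 * table size so the full table is actually used. */
static int salt_hash(void *salt)
{
	unsigned char *p = (unsigned char *)salt;
	unsigned int hash = 0;

	while (*p) {
		hash <<= 5;
		hash += *p++;
		hash ^= hash >> SALT_HASH_LOG;  /* fold overflow bits back in */
		hash &= SALT_HASH_SIZE - 1;
	}

	return (int)hash;
}
```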

Alexander
