Date: Tue, 14 May 2013 01:21:42 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: Incremental mode in 1.7.9.14

On 13 May, 2013, at 23:17 , Solar Designer <solar@...nwall.com> wrote:
> You could also test on fast and saltless hashes, which would
> serve to simulate a longer run against descrypt (or slower).

Yes, I realized that this many salts would need a much longer test - the p/s speed was only about a thousand. OTOH it did test the very early candidates.

Here's a different test: 60 seconds against 75K raw-md5 hashes from the wild, with -nolog and the printing of cracks commented out. There is also a p/g figure added to the output, showing candidates per guess (so lower is better; see the quick cross-check after the table), and this figure should be more "reliable" for short runs (or so I thought):

1e-3:  20055g, 67326p/g 0:00:01:00 0.00% 329.5g/s 22189Kp/s 22189Kc/s 1304GC/s
0.01:  20057g, 67238p/g 0:00:01:00 0.00% 329.1g/s 22133Kp/s 22133Kc/s 1302GC/s
0.1:   20187g, 67455p/g 0:00:01:00 0.00% 331.2g/s 22345Kp/s 22345Kc/s 1314GC/s
0.5:   20199g, 67626p/g 0:00:01:00 0.00% 331.4g/s 22415Kp/s 22415Kc/s 1318GC/s
0.9:   20019g, 65721p/g 0:00:01:00 0.00% 328.5g/s 21589Kp/s 21589Kc/s 1272GC/s
1.0:   20062g, 67417p/g 0:00:01:00 0.00% 329.2g/s 22198Kp/s 22198Kc/s 1306GC/s
powi:  20079g, 67474p/g 0:00:01:00 0.00% 329.5g/s 22235Kp/s 22235Kc/s 1308GC/s
1.7.9: 20286g           0:00:01:00 0.00%                              1016GC/s
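
The p/g column is just candidates divided by guesses which, for a fixed run time, equals p/s divided by g/s; a quick cross-check of the 1e-3 line from its rounded figures:

$ perl -e 'printf("%.0f\n", 22189e3 / 329.5)'
67341

which agrees with the printed 67326 to within display rounding.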

I included the complete status lines because I'm not sure I understand what I am seeing here. For example, comparing 1e-3 to 0.1, the p/g figure increases, which is bad, and yet we got more guesses. How come? Well, the GC/s and g/s speeds are higher... so is throughput somehow increased due to less switching, or something like that? This makes me unsure what to really test or look for. Maybe longer runs would be much better, for several reasons?

Despite 1.7.9 (unstable) running 25% slower, it cracks more hashes here. This is with the exact same training as bleeding.

> And, where's the info on how this changed the position of rare one-char
> passwords in the candidates stream?  You should be looking at this as
> well since making sure those trivial passwords will get cracked early on
> was one of your goals.

Candidates needed to produce "X":
1e-3: 1708341975
0.01:  155848563
0.1:     6755610
0.5:      258490
0.9:       96768
1.0:       89697
powi:      89697
1.7.9:   7454646 (now actually tested with the same training data)

I would like to test two and three characters too, especially with the non-static alternatives, but it takes too long. Is there some way to cheat?


> This is interesting.  Yes, need more and longer tests.  This also gives
> us two parameters to tune: the "1" and "10" in "1 / powi(10, length)".


Obviously with "1 / powi(10, length)" we get this (assuming length "0" in the code means an actual length of one character):

length	floor
1	1.0
2	0.1
3	0.01
4	0.001
5	0.0001
...
10	0.000000001
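
For reference, an equivalent one-liner reproduces that table (with 10**$i standing in for powi(10, length)):

$ perl -e 'foreach $i (0..9) { printf("%2d%15.9f\n", $i + 1, 1/(10**$i))}'
 1    1.000000000
 2    0.100000000
...
10    0.000000001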

For giving more weight to short words, my gut feeling is that this is too steep. A simpler and less steep function would be "1 / (1 << length)":

$ perl -e 'foreach $i (0..9) { printf("%2d%15f\n", $i + 1, 1/(1<<$i))}'
 1       1.000000
 2       0.500000
 3       0.250000
 4       0.125000
 5       0.062500
...
10       0.001953


If you can help establish what to measure (some combination of p/g and C/s? Or should I simply measure g/s over 300 seconds?), I can script tests of variants of these as well as of the various static figures.
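
Something along these lines is what I have in mind - just a rough sketch, assuming one pre-built binary per floor variant, bleeding's --pot and --max-run-time options, and a hypothetical rawmd5-75k.txt test file:

#!/usr/bin/perl
# Rough sketch: run each pre-built variant for a fixed time against the
# same hash file and report how many hashes it cracked (lines in its pot).
use strict;
use warnings;

my @variants = qw(john-1e-3 john-0.01 john-0.1 john-0.5 john-0.9 john-1.0 john-powi);
my $hashes   = "rawmd5-75k.txt";   # hypothetical test file name

foreach my $bin (@variants) {
    my $pot = "$bin.pot";
    unlink $pot;
    system("./$bin --format=raw-md5 --incremental --nolog --session=$bin " .
           "--max-run-time=300 --pot=$pot $hashes >/dev/null 2>&1");
    my $guesses = 0;
    if (open my $fh, '<', $pot) {
        $guesses++ while <$fh>;
        close $fh;
    }
    printf "%-10s %6d guesses in 300 s\n", $bin, $guesses;
}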

magnum
