Date: Tue, 14 May 2013 13:09:12 +0400
From: Solar Designer <>
Subject: Re: Incremental mode in

On Tue, May 14, 2013 at 01:21:42AM +0200, magnum wrote:
> Here's a different test. It is 60s against 75K raw-md5 from the wild, with -nolog and the print of cracks commented out.

I am simply running with "> /dev/null" for such tests.  There's not much
overhead involved in sending the 20k guesses to /dev/null; it's just
that seeing them on a terminal is nasty.

> Also there is a p/g figure added to output, showing cands per guess (so lower is better) and this figure should be more "reliable" for short runs (or so I thought):
> 1e-3:  20055g, 67326p/g 0:00:01:00 0.00% 329.5g/s 22189Kp/s 22189Kc/s 1304GC/s
> 0.01:  20057g, 67238p/g 0:00:01:00 0.00% 329.1g/s 22133Kp/s 22133Kc/s 1302GC/s
> 0.1:   20187g, 67455p/g 0:00:01:00 0.00% 331.2g/s 22345Kp/s 22345Kc/s 1314GC/s
> 0.5:   20199g, 67626p/g 0:00:01:00 0.00% 331.4g/s 22415Kp/s 22415Kc/s 1318GC/s
> 0.9:   20019g, 65721p/g 0:00:01:00 0.00% 328.5g/s 21589Kp/s 21589Kc/s 1272GC/s
> 1.0:   20062g, 67417p/g 0:00:01:00 0.00% 329.2g/s 22198Kp/s 22198Kc/s 1306GC/s
> powi:  20079g, 67474p/g 0:00:01:00 0.00% 329.5g/s 22235Kp/s 22235Kc/s 1308GC/s
> 1.7.9: 20286g           0:00:01:00 0.00%                              1016GC/s

So 0.9 is best in terms of p/g, but it results in lower p/s presumably
because of more frequent length switching during this minute.  It is
puzzling that 1.7.9 performed better than the new code.
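As a quick sanity check on these status lines: the reported p/g should come out close to (p/s * elapsed) / g. A small Python sketch using the 0.9 line above (p/s is rounded to whole Kp/s in the status output, so only approximate agreement is expected):

```python
# Cross-check the reported p/g for the 0.9 run against p/s * elapsed / g.
# Figures are taken from the status line quoted above.
guesses = 20019          # g
cand_per_sec = 21589e3   # 21589Kp/s
elapsed = 60             # 0:00:01:00
derived_pg = cand_per_sec * elapsed / guesses
print(derived_pg)        # ~64.7k, vs the reported 65721 - within rounding
```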

> I included the complete status lines because I'm not sure I understand what I am seeing here. For example, comparing 0.001 to 0.1, the p/g figure increases, which is bad, but despite that we got more guesses. How come? Well, the GC/s and g/s speeds are higher... so is the speed somehow increased due to less switching or something like that?

Yes, I think so.

> This makes me unsure what to really test or look for. Maybe longer runs would be much better for several reasons?

Yes, longer runs.

Actually, we should be looking at results achieved at different points
in time, and also (separately) plot the results with time and with
candidates tested on the X axis.

> Despite 1.7.9 (unstable) running 25% slower, it does crack more hashes here. This is with exact same training as bleeding.

This is unexpected and troubling - we don't want to be making things
worse than what we had before.  Previous testing had demonstrated the
new code working much better - e.g.:

This is not exactly the same version, but what we currently have should
be similar to "contest" in such tests.

> Candidates needed to produce "X":
> 1e-3: 1708341975
> 0.01:  155848563
> 0.1:     6755610
> 0.5:      258490
> 0.9:       96768
> 1.0:       89697
> powi:      89697
> 1.7.9:   7454646 (now actually testing from same training data)

Thanks!  Based on this, I think 0.1 to 1.0 is the range to consider for
length 1.

What about candidates needed to reach the least frequent single letter?
Or rather, to have all single-letter passwords already produced?
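To illustrate what I mean by that measurement, here's a rough sketch that reads a candidate stream on stdin and reports how many candidates are consumed before every single letter has been produced (the script name, the piping, and the choice of [a-z] as the target set are just illustrative):

```python
import sys
import string

def candidates_until_covered(stream, targets):
    """Return how many candidates are consumed before every password in
    `targets` has appeared, or None if the stream runs out first."""
    remaining = set(targets)
    for count, cand in enumerate(stream, 1):
        remaining.discard(cand)
        if not remaining:
            return count
    return None

if __name__ == "__main__":
    # e.g.: ./john --incremental --stdout | python3 count_coverage.py
    lines = (line.rstrip('\n') for line in sys.stdin)
    print(candidates_until_covered(lines, string.ascii_lowercase))
```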

> I would like testing two and three chars too, especially with non-static alternatives, but it takes too long. Is there some way to cheat?

Perhaps, but there's nothing I can readily recommend.

> For giving more weight to short words, my gut feeling is this is too steep. A simple and less steep function would be "1 / (1 << length)":
> $ perl -e 'foreach $i (0..9) { printf("%2d%15f\n", $i + 1, 1/(1<<$i))}'
>  1       1.000000
>  2       0.500000
>  3       0.250000
>  4       0.125000
>  5       0.062500
> ...
> 10       0.001953

Yes, I think this is better - or maybe "1.0 / (2 << length)" (so start
at 0.5 for length = 0, which actually means length 1).
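To put the two curves side by side, here's a Python equivalent of your perl one-liner with the shifted variant added as a second column:

```python
# Compare the two length-weighting functions discussed above:
# magnum's 1 / (1 << length) versus 1.0 / (2 << length), which
# starts at 0.5 for index 0 (i.e., length 1).
for i in range(10):
    print("%2d %15f %15f" % (i + 1, 1 / (1 << i), 1.0 / (2 << i)))
```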

> If you can help establish what to measure (combination of p/g and C/s? Or should I simply just measure g/s over 300 seconds?), I can script tests of variants of these as well as the various static figures.

Perhaps you could use the AutoStatus external mode to have status printed
after e.g. 1000, 1M, 1G, 10G, 100G candidates?  At ~20M raw c/s, it'd
take you about 1.5 hours to get to 100G, but that's the speed on one CPU
core, so you'll run several such tests in parallel (perhaps for all
settings you want to test/compare).
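The 1.5-hour figure is just the candidate count divided by the raw speed:

```python
# Rough time to reach 100G candidates at ~20M raw c/s on one CPU core.
candidates = 100e9
speed = 20e6             # raw candidates per second
hours = candidates / speed / 3600
print(hours)             # ~1.4, i.e. roughly 1.5 hours
```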

Instead of 1.7.9, maybe use bleeding from just prior to introduction of
the new incremental mode.  You'll have the same bitmaps code then.


