Date: Mon, 19 Sep 2011 21:13:08 +0200
From: Pablo Fernandez <pfernandez@....org>
To: john-users@...ts.openwall.com
Subject: What does the dummy bench actually mean?

Dear community,

If you are in a rush, go straight to the last paragraph. Otherwise, this is
where the question comes from:

I am building up some statistics about a filter I recently wrote to
allow John to work on a block of candidates. It's not very fancy: it just skips
candidates until it finds the first boundary (of the block) and makes John
stop when it reaches the second boundary. It is meant to be used together
with incremental mode.
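For illustration, here is a minimal sketch of that block logic in Python. The real filter is a John external mode, not Python; the function name, the index-based boundaries, and the example values below are all my own assumptions, chosen just to show the skip-then-stop behaviour:

```python
def block(candidates, start, end):
    """Yield only the candidates whose index falls inside [start, end).

    Mirrors the filter described above: skip everything before the
    first boundary, stop as soon as the second boundary is reached.
    """
    for i, cand in enumerate(candidates):
        if i < start:   # still before the block: skip this candidate
            continue
        if i >= end:    # reached the second boundary: make John "finish"
            break
        yield cand

# e.g. from a ten-candidate stream, only indices 2, 3 and 4 survive
print(list(block(range(10), 2, 5)))  # → [2, 3, 4]
```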

The numbers I get in reality are quite a bit faster than the ones I calculated
theoretically from the benchmark --test provides, using 1, 10 and 50
different salted passwords (MD5 in this case, but I get the same weird results
for DES). Let me give you an example.

In the benchmark I get:
- MD5: 11,000 c/s
- dummy: 20,065,000 c/s

Theoretically, I calculate the following three cases:
- 50 salts, 1st candidate to try: 6,553,600,000
waiting time until the first candidate hash is computed:
6,553,600,000 / 20,065,000 = 326 seconds (theoretical)
In practice, I get about 310 seconds, within error margins. OK.

- 10 salts, 1st candidate to try: 32,768,000,000
waiting time until the first candidate hash is computed:
32,768,000,000 / 20,065,000 = 1633 seconds
In practice, I get about 1100 seconds. This is quite strange already.

- 1 password, 1st candidate to try: 327,680,000,000
waiting time until the first candidate hash is computed:
327,680,000,000 / 20,065,000 = 16330 seconds
In practice, I get about 10,100 seconds. The difference is huge.
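The three theoretical figures above can be reproduced directly (all numbers are taken from the cases just listed; seconds are truncated to whole values, as in the mail):

```python
DUMMY_CPS = 20_065_000  # dummy benchmark rate, in candidates per second

# (salts, index of the first candidate to try) for the three cases
cases = [
    (50, 6_553_600_000),
    (10, 32_768_000_000),
    (1, 327_680_000_000),
]

for salts, first_candidate in cases:
    # theoretical seconds until the first candidate hash is computed
    wait = first_candidate // DUMMY_CPS
    print(f"{salts} salts: {wait} s")
# → 50 salts: 326 s
# → 10 salts: 1633 s
# → 1 salts: 16330 s
```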

I am quite sure the compute time of the hash is very accurate, and similar to
the one I get in the benchmark, because I ran the tests with much smaller
numbers and it all fits very well. So, the problem must be in the dummy
benchmark. In fact, my tests should be slower, because my filter adds overhead
to the normal "go-to-the-next-candidate" time that incremental mode has, but
they are not.

I actually made some more calculations: the dummy benchmark should be 35 M
c/s instead of 20 M, and then all my tests come out a bit slower than the
theoretical calculation, and everything makes sense.
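One way to see this is to back out the candidate rate implied by each measured time (the observed seconds are the ones reported above; treating that ratio as the effective rate is my own reading of the argument). Every implied rate comes out well above the 20 M c/s the dummy benchmark reports, which is consistent with the suspicion; the 35 M c/s estimate presumably also folds in the filter's own overhead:

```python
# (index of first candidate to try, observed seconds) for the three runs
observed = [
    (6_553_600_000, 310),
    (32_768_000_000, 1_100),
    (327_680_000_000, 10_100),
]

for first_candidate, seconds in observed:
    implied_cps = first_candidate / seconds  # effective candidates per second
    print(f"{implied_cps / 1e6:.1f} M c/s")
# → 21.1 M c/s
# → 29.8 M c/s
# → 32.4 M c/s
```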

How is the dummy benchmark calculated? I have looked at the code; it seems
simple, but I don't really understand it.
Does it actually correspond to the time it takes John to move to the next
candidate? My guess is that it does NOT, and that I am making a mistake
here... so, is there a way to measure the original "go-to-the-next-candidate"
time without filters, so that I can check the efficiency of my filter?
I have tried ./john -i --stdout > /dev/null and the times I get are quite
similar to the dummy benchmark (a bit slower, which makes sense given the
write to /dev/null); maybe that's where my confusion comes from (if any).

Thanks!!
Pablo
