Date: Fri, 18 Jan 2019 13:21:28 +0100
From: "Jeroen" <>
To: <>
Subject: Benchmarking


I'd like to calculate the feasibility of some cracking sessions. To do
that, I'm using the single-core performance of a reference CPU:

bofh@dev:/opt/JohnTheRipper-bleeding-jumbo/run$ export OMP_NUM_THREADS=1;
./john --test --format=raw-md5
Benchmarking: Raw-MD5 [MD5 256/256 AVX2 8x3]... DONE
Raw:    26776K c/s real, 26776K c/s virtual

If I do this for all algorithms, something weird happens: some salted
algorithms, for example, show higher numbers than raw formats. So I
double-checked against cracking speeds on actual hash files, and I see
completely different figures:

bofh@dev:/opt/JohnTheRipper-bleeding-jumbo/run$ ./john --format=raw-md5
Using default input encoding: UTF-8
Loaded 10000 password hashes with no different salts (Raw-MD5 [MD5 256/256
AVX2 8x3])
Proceeding with single, rules:Single
Press 'q' or Ctrl-C to abort, almost any other key for status
Almost done: Processing the remaining buffered candidate passwords, if any
Proceeding with wordlist:./password.lst, rules:Wordlist
Proceeding with incremental:ASCII
0g 0:00:00:26  3/3 0g/s 6783Kp/s 6783Kc/s 67860MC/s serxci..seaeak

So --test reports 26776K c/s, actual cracking shows 6783K c/s, and there's
a C/s (capital C) figure that's orders of magnitude higher.
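For what it's worth, here is my own back-of-the-envelope arithmetic on the
three figures (just the numbers from the output above, nothing from john
itself):

```python
# Ratios between the reported speeds. These are the figures from my
# session above; the interpretation is my guess, not from the docs.
test_cs = 26776e3      # --test benchmark, c/s
crack_cs = 6783e3      # live cracking, c/s
crack_Cs = 67860e6     # live cracking, C/s (capital C)
loaded_hashes = 10000  # from the "Loaded 10000 password hashes" line

print(test_cs / crack_cs)   # benchmark vs. live c/s: roughly 4x
print(crack_Cs / crack_cs)  # suspiciously close to the hash count
```

The second ratio comes out around 10000, i.e. very close to the number of
loaded hashes, which makes me suspect C/s counts candidate-times-hash
combinations, but I'd like that confirmed.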


- Where are the differences coming from?
- What's the best number to use when calculating time predictions?
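To make the second question concrete, my naive prediction currently looks
like this (hypothetical keyspace; the rate is the --test figure above, and
whether that's the right figure to use is exactly what I'm unsure about):

```python
# Naive time-to-exhaust estimate for a brute-force keyspace.
# Hypothetical example: lowercase a-z, length 8, one core, raw MD5.
charset = 26
length = 8
rate = 26776e3  # c/s from --test; maybe the wrong figure to use here

keyspace = charset ** length       # 26^8 candidates
seconds = keyspace / rate
print(f"{seconds / 3600:.1f} hours")  # roughly 2.2 hours at this rate
```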


