Date: Sun, 9 May 2021 20:09:47 -0700
From: David Sontheimer <>
Subject: Re: Cracking stats: p/s, c/s and C/s. Hashing cost factors.

> I suggest that you upgrade to our latest code off GitHub, which will use
> 20k iterations for benchmarking sha1crypt by default.

Thank you - I'll upgrade to the latest source code.

> For arbitrary values, you need test hashes that use those values, and
> you wouldn't use "--test" but would run cracking sessions.

This makes sense, and is an easy solution. Much appreciated.
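For example (a hypothetical session; the file name is a placeholder), one could put self-generated sha1crypt test hashes with the desired cost value into a file and crack it directly instead of benchmarking:

```
# hashes.txt holds self-made sha1crypt hashes using the desired
# rounds value (hypothetical file name)
./john --format=sha1crypt --incremental=Digits hashes.txt
```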

> The number of forks and the number of hashes are unrelated to each
> other, so I don't know why you mention them in the same sentence.
> Anyway, in your example each forked process generates its own batches of
> candidates and mass-compares them against all ten loaded hashes.  Yes,
> new batches of candidates will be generated until there are either no
> more candidates to generate or no more hashes left uncracked.

Ok, from your response, re-reading the Options documentation, and recalling
the stderr output from the forks, I believe I now understand the division of
labor.

Does each fork have a fixed range of candidates to generate and compare
against all hashes, regardless of salt? And if a subset of hashes shares a
salt, is each candidate hashed once and compared against all hashes with
that salt? Apologies if I've asked a version of this previously.

If so - this would make a lot more sense than how I had envisioned it. Glad
I'm not the one designing a cracking tool.
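If that picture is right, a toy model of the bookkeeping might look like the
sketch below (this is not John's actual code, and SHA-1 over salt+candidate
is just a stand-in for a real salted format):

```python
import hashlib
from collections import defaultdict

def toy_crack(candidates, loaded):
    # loaded: list of (salt, hex_digest) pairs.
    # Group hashes by salt: each candidate is hashed once per unique
    # salt (one "crypt", counted in c/s), and that single computation
    # is compared against every hash sharing the salt (each comparison
    # is one "combination", counted in C/s).
    by_salt = defaultdict(set)
    for salt, digest in loaded:
        by_salt[salt].add(digest)

    crypts = 0        # c/s numerator
    combinations = 0  # C/s numerator
    cracked = {}
    for cand in candidates:
        for salt, digests in by_salt.items():
            h = hashlib.sha1((salt + cand).encode()).hexdigest()
            crypts += 1
            combinations += len(digests)
            if h in digests:
                cracked[(salt, h)] = cand
    return cracked, crypts, combinations
```

With two hashes on one salt and a third on another, each candidate costs two
crypts but three combinations, which is why c/s and C/s diverge as soon as
any salt is shared.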

> Also note that you posted benchmarks for OpenMP, but are talking about
> forks here.  These behave differently.  My answer above is about forks
> since that's how your question was worded.

Yes, I'm only running cracking trials with --fork, not OpenMP.  A helpful
reminder that there will be differences in c/s and C/s. Per your
recommendation above, the more straightforward approach is to generate my
own test hashes and crack with --fork to maintain apples-to-apples
comparisons.

> It sounds like you're using "--incremental=Digits".

Apologies. Incremental. I need to sleep more before I hit send.

> There is this in john.conf:
> [Incremental:Digits]
> File = $JOHN/digits.chr
> MinLen = 1
> MaxLen = 20
> CharCount = 10
> Most of the complexity is inside digits.chr, and we're just now having
> another discussion thread in here on what training set to use best, and
> how exactly, when generating those files.

Yes, I've been following along.
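As a rough mental model (ignoring the frequency ordering that digits.chr
imposes), those settings describe a walk over all digit strings from MinLen
to MaxLen:

```python
from itertools import islice, product

def naive_digits(min_len=1, max_len=20):
    # Naive lexical enumeration over CharCount=10 characters.
    # Real incremental mode reorders candidates using character
    # frequencies trained into digits.chr; this sketch only shows
    # the candidate space being covered.
    for length in range(min_len, max_len + 1):
        for combo in product("0123456789", repeat=length):
            yield "".join(combo)

print(list(islice(naive_digits(), 12)))
# ['0', '1', '2', ..., '9', '00', '01']
```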

> > Here are the percentages cracked at
> > 10M, 100M, 1G, 10G, 100G candidates:
> >
> > RockYou with dupes - 4.6%, 10.2%, 20.2%, 33.3%, 48.0%
> > RockYou -1M unique - 4.7%, 11.2%, 21.5%, 35.0%, 48.3%
> > HIBP v7 cracked    - 3.2%,  8.7%, 17.8%, 30.0%, 44.5%

From that thread, I read that you're running experiments/simulations with a
fixed number of candidates - is there an option or line in john.conf to
limit the number of candidates in mask or incremental mode?

> I don't know what a helpful answer would be here.  I suggest that you
> start with doc/MODES and then ask more specific questions.

You're right. A closer read of MODES answered my questions.

