Date: Mon, 9 Apr 2018 22:44:22 +0200
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: Splitting workload on multiple hosts

On Mon, Apr 09, 2018 at 04:13:55PM -0400, Rich Rumble wrote:
> I used save-memory=2; it was going over into swap for the 1G slices.

You'd likely achieve better speed by using --save-memory=1 and running
fewer forks.  The performance difference between 12 and 24 forks is
probably small (the extra 12 are just the second logical CPUs of the
same physical cores).  The performance difference between
--save-memory=1 and --save-memory=2 for large hash counts, when things
do fit in RAM with =1, can be large: several-fold, since there can be a
16x difference in bitmap and hash table size and thus up to that much
difference in lookup speed.  You may well prefer, say, 6 forks with
less memory saving over 24 forks with more memory saving each.  The
same goes for larger chunks and fewer forks.  These are
unsalted hashes, and there's little point in recomputing the same hashes
(of the same candidate passwords) for each chunk when you can avoid that
(even if by using fewer CPU cores at a time).
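
For example, something along these lines (the wordlist, rules, and
hash file names here are placeholders, not taken from your actual
setup) keeps the bitmaps and hash tables at full size while still
using 6 cores:

    ./john --fork=6 --save-memory=1 \
        --wordlist=wordlist.lst --rules hashes.txt

With unsalted hashes loaded as one big chunk, each candidate password
is hashed only once per run rather than once per slice.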

On Mon, Apr 09, 2018 at 04:22:17PM -0400, Stephen John Smoogen wrote:
> Are you able to use taskset to push each one to a CPU? I found that
> sometimes the kernel would shove multiple processes to the same CPU.
> This was done by the kernel rather than the process itself, so taskset
> or similar tools were needed to pin each fork to its own CPU.

This shouldn't be much of a problem with recent kernels, except for
latency-sensitive tasks, which password cracking isn't, and in any case
it would be the least of Rich's worries given what he's doing.
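
If one did want to experiment with pinning as Stephen describes, the
whole run (parent and forks) can be restricted to the six chosen CPUs
with something like the following; the core numbering 0-5 is just an
assumption about how a particular box enumerates its physical cores:

    taskset -c 0-5 ./john --fork=6 --save-memory=1 \
        --wordlist=wordlist.lst --rules hashes.txt

The affinity mask is inherited by the forked children, so the scheduler
will typically place one fork on each of those CPUs and leave the SMT
siblings idle.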

Alexander
