Date: Mon, 11 Sep 2017 11:09:28 +0200
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: Hyperthreading / fork versus mpi / instruction sets?

I'm only answering the Fork/MPI part here.

On 2017-09-10 21:40, spam@...lab.nl wrote:
> Fork vs. MPI:
> I've mentioned that there is a number of hash formats that support MPI
> and that john runs those hash types on MPI by default.

In JtR, a format doesn't [need to] support MPI, node or fork. The 
(cracking) modes do. So e.g. mask mode has node/fork/MPI code paths 
while the formats need no such thing.
MPI is never run by default. You usually run it through a wrapper 
called mpirun or mpiexec.
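
To illustrate the difference, here's a minimal sketch, assuming an 
MPI-enabled build of john, Open MPI's mpirun syntax, and placeholder 
file/host names (hashes.txt, node1, node2):

```
# Fork: john itself spawns 4 local child processes; no wrapper needed.
./john --fork=4 --wordlist=rockyou.txt hashes.txt

# MPI: an external launcher starts the processes, here 8 of them
# spread across two hosts.
mpirun -np 8 -host node1,node2 ./john --wordlist=rockyou.txt hashes.txt
```
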

Formats do need their own code to support multi-threading (OMP), and the 
latter is run by default. Perhaps that's what's confusing you.
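
Since OMP is just standard OpenMP, you control it with the usual 
environment variable rather than a john option (file name below is a 
placeholder):

```
# Cap the default threading at 4 threads:
OMP_NUM_THREADS=4 ./john --wordlist=rockyou.txt hashes.txt

# When combining with --fork, one thread per forked process is
# usually what you want:
OMP_NUM_THREADS=1 ./john --fork=4 --wordlist=rockyou.txt hashes.txt
```
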

> Furthermore I've seen
> that forked parallel processing (--fork=n) is possible for all hash types.
> AFAIK, MPI is typically used in network connected multi-system environments.
> Forking is done on one machine. My assumption is that forking is more
> efficient than MPI because of less overhead (= faster). However MPI might
> allow more granular control, rescheduling during the cracking process to get
> maximum efficiency, but *only* useful if MPI latency is extremely low
> compared to the cracking speed. My questions: (1) is this correct?
> Furthermore: (2) what's the best approach for fast hashes (e.g. raw-md5) and
> (3) what's the best approach for slow hashes (e.g. bcrypt)?

Node/fork/MPI is mostly the same code path. On a single host, fork is 
more efficient because it can share more memory (e.g. a humongous 
wordlist or many loaded hashes) and has some inter-process signalling. 
Other than that, there's no practical difference. You simply use MPI 
where fork cannot be used, and that's it.
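
The shared code path also shows up in --node, which splits the keyspace 
the same way without spawning anything, so you can coordinate 
independent machines by hand (hashes.txt is a placeholder):

```
# This host takes nodes 1-4 of an 8-way split:
./john --node=1-4/8 --incremental hashes.txt

# A second host, started manually, takes the other half:
./john --node=5-8/8 --incremental hashes.txt
```
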

Before we got fork support, people used MPI on single hosts too. 
Nowadays that doesn't make much sense.

Network latency and speed are not important for MPI in JtR. There are no 
barriers or syncing between the processes.

magnum
