Date: Mon, 4 Aug 2014 15:35:42 +0000
From: Pyrex <>
To: "" <>
Subject: RE: John MPI Questions

Thank you for the information, magnum!

What is the proper method to resume while utilizing MPI?

I am presently just sending:

mpirun -hostfile ~/etc/mpi.hosts ~/bin/john --restore

Is this correct?

Thank You

From: magnum
Sent: Sunday, August 3, 2014 10:58 AM
Subject: Re: [john-users] John MPI Questions

On 2014-08-01 17:56, Pyrex wrote:
> First, is it possible to add additional nodes at any time? If so
> how?

No, you can only resume jobs using the same node count.

> Second, I understand that I am to send "pkill -USR2 mpirun" in order
> to force the nodes to sync before killing the session but what would
> be the proper means to kill the session? Send "pkill -USR2 mpirun"
> then Ctrl+C John? Should I wait a while before Ctrl+C? I have had
> nodes continue to run John even after the "pkill -USR2 mpirun" and
> Ctrl+C, so I'd like to know more about the proper method for shutting
> down.

It's USR1, not USR2. USR2 can be used during a run to force all nodes to
re-read the pot file. Normally it shouldn't be needed - it will happen
now and then anyway.

Ideally a plain Ctrl+C should do fine. However, more often than not the
MPI daemons kill the nodes too brutally, so they don't get to save their
session files and flush their logs. To play it safe, send a USR1 and
then, a second later, a TERM or Ctrl+C.
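The USR1-then-TERM sequence above can be sketched as a small shell helper. This is only a hedged sketch: it assumes your launcher process is named "mpirun" (as in this thread) and that pkill is available; the function name graceful_stop is made up for illustration.

```shell
#!/bin/sh
# Hedged sketch of the shutdown sequence described above: USR1 first,
# so every node saves its session (.rec) file and flushes its log,
# then TERM a moment later (Ctrl+C delivers the equivalent interrupt
# from the terminal). The name "mpirun" matches the launcher used in
# this thread; adjust it for your setup (e.g. mpiexec).
graceful_stop() {
    name="${1:-mpirun}"
    pkill -USR1 "$name" 2>/dev/null || true  # ask nodes to save session files
    sleep 1                                  # give them a moment to write
    pkill -TERM "$name" 2>/dev/null || true  # then terminate
}

graceful_stop mpirun
```

The `|| true` guards just keep the helper from failing when no matching process exists, e.g. if the session already finished.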

> Third, what is the proper way to kick off a multi cluster CUDA +
> OpenMP session? I am presently using the following for WPA-PSK
> "mpirun -hostfile ~/etc/mpi.hosts -x
> LD_LIBRARY_PATH=/usr/local/cuda-6.0/lib:$LD_LIBRARY_PATH -n 7
> ~/bin/john-wpa/john -format=wpapsk-cuda ~/wrk/wpa-hash.wrk" I assume
> setting the "-n" value should always be equal to the number of nodes
> I have? Right now I have noticed the following effect, the hashes are
> processing faster but the CPU time is bouncing around from core to
> core. When I ran tests on my desktop it seems to use all the cores
> time more evenly.

If you have 7 nodes in that mpi.hosts file, the -n or -np option should
not be needed unless you want to run 2 nodes per host or something.

The CUDA version of WPAPSK does some post processing on CPU, that's
probably what you see. The OpenCL version does post processing on GPU.
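Putting that together, the launch line from the question can be trimmed. A sketch using the paths quoted in this thread, assuming one john process per host listed in mpi.hosts (so -n is omitted, as suggested above):

```shell
# Launch one process per host listed in mpi.hosts; -n/-np is only
# needed to run more than one process per host. Paths and the
# hostfile location are the ones from this thread; adjust for
# your installation.
mpirun -hostfile ~/etc/mpi.hosts \
       -x LD_LIBRARY_PATH=/usr/local/cuda-6.0/lib:$LD_LIBRARY_PATH \
       ~/bin/john-wpa/john --format=wpapsk-cuda ~/wrk/wpa-hash.wrk
```

The -x flag forwards LD_LIBRARY_PATH to the remote processes so each node can find the CUDA runtime libraries.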


