Date: Wed, 22 Nov 2017 19:28:56 +0100
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: OpenMPI and .rec files?

On 2017-11-14 11:04, Jeroen wrote:
> Hi,
> 
> If I run john with --fork on a N core machine, I'll see N .rec files for
> resuming the task. So I guess it's one .rec file per process.

That is correct. And to a large extent, the MPI code paths are exactly 
the same: It's even supported (by us/me) to resume a --fork session 
using MPI instead, and vice versa.
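As an illustration of that interchangeability, here is a minimal command-line sketch. The session name "demo" and the hash file name are placeholders, not taken from this thread:

```shell
# Start a 4-process session with --fork; each process keeps its own
# recovery file (demo.rec, demo.2.rec, ..., demo.4.rec).
./john --fork=4 --session=demo hashes.txt

# After interrupting with Ctrl-C, the same session can be resumed
# under MPI instead of --fork (or vice versa):
mpirun -np 4 ./john --restore=demo
```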

> I'm now playing with a OpenMPI environment, using e.g. 20 systems with 32
> processes each. When john starts, 640 processes phone home
> 
> ..
> 502 0g 0:00:00:00 ..
> 614 0g 0:00:00:00 ..
> 640 0g 0:00:00:00 ..
> ..
> 
> In total 100 .rec files are generated, where I would expect 640 or perhaps
> 20.
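As a sanity check on the expected count, the recovery-file names can be enumerated in a few lines. This assumes john's usual naming convention - process 1 writes `<session>.rec` and processes 2..N write `<session>.N.rec` - which is an assumption about the convention, not something stated in this thread:

```python
def expected_rec_files(session, processes):
    # Assumed convention: process 1 uses "<session>.rec"; the rest
    # append their 1-based process number, e.g. "<session>.2.rec".
    return [f"{session}.rec"] + [
        f"{session}.{n}.rec" for n in range(2, processes + 1)
    ]

# 20 systems x 32 processes each, as in the report above
files = expected_rec_files("john", 20 * 32)
print(len(files))  # 640 recovery files expected, not 100
```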

This sounds like either a bug or PEBKAC - and it may well be a bug: I'm 
pretty sure I have never tested that many nodes at once.

> Same result for OpenMPI runs with other process counts (more or fewer
> than 640), as long as there are more than 100 subtasks.
> 
> Is all the resume data in 100 recovery files, no matter the number of
> tasks, or is there something going wrong?

You should get one session file per node. What exact command line did 
you use to start the job? Are all nodes running in the same $JOHN 
directory, e.g. using NFS?

What happens if you try to resume such a session? It should fail and 
complain about missing files unless the bug is deeper than I can imagine.

magnum
