Date: Thu, 23 Nov 2017 20:34:47 +0100
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: OpenMPI and .rec files?

On 2017-11-23 20:22, Jeroen wrote:
> magnum wrote:
> <SNAP>
>> So now we have two questions:
>>   1) Why was it already locked? Some half-dead process still running?
>>   2) Why do you not see the error printed to stderr? Something with your
>> OpenMPI wrapper script?
> 
> 1) Messages are sent to a log file per process. The messages were all generated by a few boxes; it looked like hanging / non-terminated processes, so I killed them.
> 2) Logs can be saved on a per-worker basis or sent to the console with a verbose flag.
> 
> New data 1:
> - New john.log.
> - Test with > 100 tasks and incremental mode
> - Everything works as expected: N .rec files where N > 100.
> - Stopped and resumed the task with --restore: ok.
> - No weird log entries.
> 
> New data 2:
> - New john.log.
> - Test with > 100 tasks and --wordlist=<dict>.
> - 100 .rec files.
> - Stopped and resumed the task with --restore: relevant messages:
>     - "Continuing an interrupted session"
>     - "No crash recovery file, terminating": count of this entry is total N - 100.

So you only see the error when using wordlist. What are the effective 
ulimits? Perhaps run "ulimit -a" and see what it reports.
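Since exactly 100 .rec files appear before the failures start, the per-process limit on open file descriptors is a plausible culprit; a quick check might look like this (a sketch; exact limit names vary by shell and OS):

```shell
# Show all effective per-process limits for the current shell.
ulimit -a

# The max-open-files limit is the usual suspect when file creation
# starts failing after a fixed number of descriptors.
ulimit -n
```

Note that limits can differ between an interactive shell and the environment the MPI launcher gives its workers, so it may be worth running the same check from inside a wrapper script.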

Or, taking the opposite approach: What if you run incremental and N is 
larger than, say, 200?
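That counter-test could be sketched roughly as follows; the hash file name and session label are placeholders, and the invocation assumes an MPI-enabled john build:

```shell
N=250  # deliberately well above the 100-file ceiling seen with wordlist mode

# Hypothetical file/session names; --session keeps this run's .rec files
# separate so they are easy to count afterwards.
mpirun -np "$N" ./john --incremental --session=inc-test hashes.txt

# Count the recovery files produced for this session.
ls inc-test*.rec | wc -l
```

If incremental mode also caps out at 100 files here, the problem is not specific to wordlist mode after all.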

Perhaps also look for errors/warnings in the NFS server log...
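Depending on the distribution, NFS server messages typically land in the kernel log or the journal; a sketch of where to look (log sources and the unit name are assumptions, adjust for your system):

```shell
# Kernel-level nfsd/lockd messages -- file locking over NFS goes through
# lockd or the NFSv4 state manager, so locking errors would surface here.
dmesg | grep -iE 'nfs|lockd' | tail -n 20

# On systemd hosts, the NFS server unit's own log for today:
journalctl -u nfs-server --since today 2>/dev/null | tail -n 20
```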

magnum
