Date: Fri, 4 Nov 2016 15:31:27 +0100
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: John does not fork as many times as I want

On Fri, Nov 04, 2016 at 03:15:45PM +0100, matlink wrote:
> I tried with the Raw-SHA1 format, and the issue is still here. I tried
> to fork 30 times, but after 20 forks john tells me that it cannot
> allocate memory. Again about 200GB memory were free.

OK, so explicitly request no more than 20 forks until the number of
remaining hashes lets you run 40, at which point you'll restart.
Unfortunately, there's no easy way to resume a run with a higher number
of forked processes, so you'll have to start over at that point, but the
contents of john.pot will be preserved and will allow the 40 processes
to run.
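On the command line, that restart workflow might look like this (a
sketch only; the hash file and session names below are made up, while
--fork, --format, and --session are standard john options):

```shell
# Phase 1: run with 20 forked processes while per-process memory use is high.
john --fork=20 --format=raw-sha1 --session=li-20 linkedin-hashes.txt

# Later, once enough hashes are cracked that 40 processes fit in memory,
# start over with a new session; hashes already recorded in john.pot are
# loaded as cracked and skipped, so no work is repeated.
john --fork=40 --format=raw-sha1 --session=li-40 linkedin-hashes.txt
```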

Note that you need to leave some (possibly most) memory free to allow
for the growth of memory needs by the processes (as sharing decreases).
Thus, even if your system somehow prevents you from using all memory
right away, this may actually be a good thing.  You could possibly fight
it by adjusting the overcommit settings, but there's no need.
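For reference, the overcommit settings mentioned above are the standard
Linux sysctls (shown as a sketch; as noted, changing them shouldn't be
necessary here):

```shell
# Inspect the current policy: 0 = heuristic, 1 = always allow, 2 = strict
sysctl vm.overcommit_memory
sysctl vm.overcommit_ratio   # only consulted in mode 2

# To let allocations succeed up front (usually unnecessary for this case):
#   sysctl -w vm.overcommit_memory=1
```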

Here's a related thread:

http://www.openwall.com/lists/john-users/2016/09/08/5

Clearly, our approach/hack with "--fork" is not a good fit for cracking
large dumps like this, but that's what we have.

> I'm targeting the large linkedin dump that has about 250 million lines.

In that case, definitely use the simple Raw-SHA1 format.

Alexander
