Date: Tue, 15 Sep 2015 17:50:54 +0300
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: Judy array

On Tue, Sep 15, 2015 at 06:28:17AM -0700, Fred Wang wrote:
> So, given the existing implementation does not lend itself well to large-scale (read: large unsolved lists) processing, and the current leaning away from tightly-coupled multiprocessing, is my approach something the John developers are interested in?

Yes.  And we are not necessarily "leaning away from tightly-coupled
multiprocessing" - it's just that we're making evolutionary changes to
the existing codebase rather than rewriting it, and we also intend to
continue to support and enhance non-tightly-coupled multiprocessing
(especially as needed for distributed processing).

What I am considering as the next step towards optional tightly-coupled
multiprocessing is having the forked processes keep the password hash
database in a shared address space region.  This idea has been on the
back burner since I first introduced --fork, and your testcase has just
reminded me of it.  With so many of the passwords getting cracked, the
memory usage increase from copy-on-write is just too high.
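
The mechanism is roughly this: since each cracked hash gets removed
from the in-memory database, every forked process dirties those pages
and copy-on-write hands it a private copy.  Allocating the database in
a MAP_SHARED mapping before fork() keeps a single copy visible to all
children.  A minimal sketch of that shape (not actual JtR code; the
shared_db struct and the numbers are placeholders):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 4

struct shared_db {
	/* hypothetical stand-in for the real hash database */
	unsigned long nloaded;	/* hashes still uncracked */
	/* ... hash tables, salts, etc. would live here ... */
};

int main(void)
{
	/* One MAP_SHARED mapping created before fork(): writes by any
	 * child (e.g., removing a cracked hash) hit the single shared
	 * copy instead of COW-duplicating pages in every process. */
	struct shared_db *db = mmap(NULL, sizeof(*db),
	    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (db == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	db->nloaded = 1000000;

	for (int i = 0; i < NPROC; i++)
		if (fork() == 0) {
			/* child: crack and remove hashes from *db */
			_exit(0);
		}
	while (wait(NULL) > 0)
		;
	munmap(db, sizeof(*db));
	return 0;
}

Concurrent updates to the shared region would of course need locking
or atomic operations, which the sketch omits.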

> I can certainly pull bits of my code into John, integrate it, and give you something to try.

This would be a welcome contribution.  Thanks!

> I continue to suggest, however, that moving to a threaded mode, rather than fork, would give far better performance overall.

For some usage scenarios, yes.  I think the mixed approach I suggested
above should achieve almost the same performance, except that the forked
processes will continue to terminate at slightly different times, losing
a few seconds near the end of the current test runs.  A further change
may be to have them reallocate portions of work between themselves
(we'll also need this for distributed processing).
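
One simple shape for that reallocation is a shared cursor the
processes pull chunks of the keyspace from, so that none of them is
left with a long fixed tail of work.  Again just a sketch, not a
proposal for the actual interface (TOTAL and CHUNK are made-up
numbers, and it assumes lock-free C11 atomics on a shared mapping):

#define _GNU_SOURCE
#include <stdatomic.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 4
#define TOTAL 1000000UL	/* hypothetical candidate count */
#define CHUNK 10000UL	/* candidates claimed per request */

int main(void)
{
	/* Shared cursor in a MAP_SHARED page; each child atomically
	 * claims the next chunk, so all of them run out of work at
	 * nearly the same time. */
	atomic_ulong *next = mmap(NULL, sizeof(*next),
	    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (next == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	atomic_init(next, 0);

	for (int i = 0; i < NPROC; i++)
		if (fork() == 0) {
			unsigned long start, end;
			while ((start = atomic_fetch_add(next, CHUNK)) <
			    TOTAL) {
				end = start + CHUNK;
				if (end > TOTAL)
					end = TOTAL;
				/* try candidates [start, end) here */
			}
			_exit(0);
		}
	while (wait(NULL) > 0)
		;
	return 0;
}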

If I were writing this from scratch, I would do it differently, but at
this time I am looking into improving what we've got.

Alexander
