Date: Thu, 19 Apr 2012 09:53:41 +0200
From: Simon Marechal <simon@...quise.net>
To: john-users@...ts.openwall.com
Subject: Re: automation equipped working place of hash cracker, proposal

On 19/04/2012 02:03, Aleksey Cherepanov wrote:
> On Wed, Apr 18, 2012 at 11:35:23PM +0200, Frank Dittrich wrote:
>> On 04/18/2012 10:27 PM, Aleksey Cherepanov wrote:
>>> On Mon, Apr 16, 2012 at 10:52:30AM +0200, Simon Marechal wrote:
>>>> If I were to design this, I would do it this way:
>>>> * the server converts high level demands into low level job units
>>>> * the server has at least a network API, and possibly a web interface
>>>> * the server handles dispatching
>>>
>>> I think the easiest way to split a cracking task into parts for
>>> distribution is to split the candidate list, i.e. to granulate it: we run
>>> our underlying attack command with '--stdout', split its output into
>>> packs, and distribute those packs to nodes that just use them as
>>> wordlists. Pros: it is easy to implement, it is flexible and upgradable,
>>> it supports modes that we don't want to run to the end (like incremental
>>> mode), and all attacks can be parallelized this way (if I am not wrong).
>>> Cons: it seems suboptimal, and it does not scale well (candidate
>>> generation could become a bottleneck, though it could be distributed too),
>>
>> I'm afraid network bandwidth will soon become a bottleneck, especially
>> for fast saltless hashes.
> If we take bigger packs of candidates, they can be compressed well, so
> we trade network bandwidth for CPU time.
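The pack-splitting proposal quoted above can be sketched in a few lines. This is a minimal illustration, not a real implementation: the candidate stream stands in for the stdout of `john --stdout`, and the pack size is an invented tuning knob.

```python
import gzip
import itertools

def make_packs(candidates, pack_size):
    """Group a candidate stream into fixed-size packs and gzip each one.

    `candidates` would normally be the stdout of `john --stdout`;
    `pack_size` (candidates per pack) is a made-up tuning knob.
    """
    it = iter(candidates)
    while True:
        pack = list(itertools.islice(it, pack_size))
        if not pack:
            break
        raw = "\n".join(pack).encode() + b"\n"
        yield gzip.compress(raw)

# Toy stand-in for `john --stdout`: a tiny candidate stream.
stream = (f"pass{i}" for i in range(10))
packs = list(make_packs(stream, 4))
# 10 candidates at 4 per pack -> packs of 4, 4 and 2
```

Each compressed pack would then be shipped to a node and used there as an ordinary wordlist.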

With N clients, you will need to generate, compress and send your
candidates N times faster than the average client can crack. During the
contest you might have way more than 500 cores on your hands.

Even for a moderately slow use case this will not work: a 10-minute
single-salt md5-crypt job requires 10,213,800 candidate passwords on my
computer (for a single core, not 8x OMP). Generating those passwords and
compressing them with gzip takes 5.67s and produces a 25MB file. With
lzma it takes 36.05s and produces a 5.7MB file. With 8 cores on the
master, you will be able to serve 850 cores with gzip, but only 133 with
lzma. You will also need to upload at 35MB/s with gzip, or 1.26MB/s with
lzma.
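The back-of-the-envelope numbers above follow directly from the measured timings; a quick sketch of the arithmetic (the measurements are taken from the text, the variable names are mine):

```python
# Reproducing the serving-capacity estimate from the measurements above.
job_seconds = 600                   # one 10-minute job per client core
gzip_secs, gzip_mb = 5.67, 25.0     # time and size to produce one job's pack
lzma_secs, lzma_mb = 36.05, 5.7
master_cores = 8

# How many client cores one 8-core master can keep fed:
served_gzip = master_cores * job_seconds / gzip_secs   # ~846, "850 cores"
served_lzma = master_cores * job_seconds / lzma_secs   # ~133

# Sustained upload bandwidth needed to ship those packs:
up_gzip = served_gzip * gzip_mb / job_seconds          # ~35 MB/s
up_lzma = served_lzma * lzma_mb / job_seconds          # ~1.26 MB/s
```

The trade-off is visible in the numbers: lzma cuts the bandwidth by an order of magnitude but costs so much CPU that the master feeds far fewer cores.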

When using a master/slave setup, I had to rewrite part of it because the
bottleneck was hash list compilation and distribution. Compilation took
a whole second (I used a MySQL database to store everything, which
resulted in a large SELECT every time; that second was spent mostly on
data transfer), and download times varied between clients. I also had to
rewrite all the IO functions so that the code was asynchronous and all
streams were compressed.

IO congestion is a very serious problem. Here I had congestion between
the master and the SQL server, and between clients and the server, with
only about 200 clients. This is why I believe that as much logic as
possible should be on the client side (the side that scales easily) and
that all kinds of caches should be implemented.
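One such cache could be sketched as follows. This is only an illustration of the idea: the dict "server" stands in for a real network API, and the version field and method names are invented.

```python
class HashListCache:
    """Client-side cache: refetch the hash list only when its version changes."""

    def __init__(self):
        self.version = None
        self.hashes = None
        self.transfers = 0  # how many full downloads were actually done

    def get(self, server):
        # A cheap version check replaces a full download on every job.
        if server["version"] != self.version:
            self.version = server["version"]
            self.hashes = list(server["hashes"])  # the expensive transfer
            self.transfers += 1
        return self.hashes

# The "server" is a plain dict standing in for a real network API.
server = {"version": 1, "hashes": ["hash-a", "hash-b"]}
cache = HashListCache()
cache.get(server)
cache.get(server)   # second call is served from the cache, no transfer
```

With this in place, repeated jobs against an unchanged hash list cost one version check instead of one full download each, which is exactly the kind of load the master cannot afford with hundreds of clients.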
