Date: Tue, 27 Feb 2007 17:39:46 +0300
From: Solar Designer <>
Subject: Re: Splitting the Charset into chunks

On Mon, Feb 26, 2007 at 09:32:08PM +0100, Pablo Fernandez wrote:
> I'm trying to find a way to split any charset into a given (very high) number
> of disjoint chunks and then create a new charset file with each chunk.

While there's a way to split a charset file like that, exactly the same
effect is better achieved by having the code in inc.c skip some order[]
entries.  The number of chunks won't be very high, though - there will
be just a few hundred reasonably large ones (with order[] indices of
around 2000 for the default CHARSET_* parameters).

There's no better way to split the charset files, assuming that the code
in JtR remains unchanged; even if you dropped some characters from
some character positions, those characters would be re-added by
inc.c: expand(), so your different nodes would eventually be trying the
same candidate passwords, and they would do so in a less optimal order.

> Maybe the code used to create the default ones may help? Is it available?

Yes, the code is available - it's just the --make-charset option to JtR
and the corresponding code is found in charset.c.

> Also I found it very difficult to understand the insights of a charset file,
> I have seen no documentation about it in the code. Is there any?

It's just the comments in charset.h.

Alexander Peslyak <solar at>
GPG key ID: 5B341F15  fp: B3FB 63F4 D7A3 BCCC 6F6E  FC55 A2FC 027C 5B34 1F15
- bringing security into open computing environments

