Date: Mon, 01 Jun 2015 16:47:20 +0200
From: magnum <>
Subject: Re: Bleeding jumbo now defaults to UTF-8

On 2015-06-01 15:01, Marek Wrzosek wrote:
> Even single-language wordlists have words without any language-specific
> letters, and those would be repeated unchanged if someone runs john
> several times with different --target-encoding. So the other workaround
> is to separate the ASCII-only passwords from those UTF-8 wordlists into
> an ASCII wordlist, and then from the remaining passwords (those with at
> least one non-ASCII character) make wordlists for cracking with
> different --target-encoding. Separating Russian passwords was an easy
> task. Is there a simple way to make such wordlists for e.g. German or
> French, or the "iso-8859-1 part" of all.lst_utf8? What would the grep
> command look like to achieve this?

You can do a try-catch in Perl (the actual construct is 'eval', IIRC). Pseudo-code:

For each UTF-8 line of input {
	skip any pure ASCII
	try encoding to CP1234
	if it worked, print it
}
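The pseudo-code above can be sketched as a small Python filter (the thread discusses a Perl implementation; this is just an illustration of the same idea, and the function name is mine, not part of any john tooling):

```python
# Sketch of the filter described above: read UTF-8 words, skip pure
# ASCII, and keep only words that can be represented in the target
# legacy codepage. Everything else is dropped silently.
def filter_words(lines, target):
    """Yield non-ASCII words that survive encoding to `target`."""
    for line in lines:
        word = line.rstrip("\n")
        if word.isascii():              # skip any pure ASCII
            continue
        try:                            # try encoding to the target codepage
            word.encode(target)
        except UnicodeEncodeError:      # not representable; drop it
            continue
        yield word                      # it worked, keep it
```

Wired up to stdin/stdout and with the target codepage taken from a command-line option, a script like this could sit in front of john exactly as in the command lines further down.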

Unless you need this a lot, you shouldn't create new files (they only add 
a maintenance burden). Just write this as a simple filter where the 
actual encoding is a command-line option, and feed its output to john:

$ ./john -w:all.utf8.lst -rules:whatever hashfile
$ <all.utf8.lst -t cp1234 | ./john -pipe -enc:cp1234 -rules:whatever hashfile
$ <all.utf8.lst -t cp1235 | ./john -pipe -enc:cp1235 -rules:whatever hashfile

Let me see if I can whip up an actual implementation of that filter in 
Perl. I'll be back.

