Date: Wed, 28 Apr 2010 23:09:55 +0200
From: "Magnum, P.I." <>
Subject: Re: Charset problem with special characters

On 04/28/2010 05:08 PM, Pablo Catalina Gómez wrote:
> I compiled john-1.7.5 with the jumbo-3 patch, and I'm trying to crack
> LM or NTLM hashes containing special characters like ñ or ç. Spanish
> uses these characters a lot.
> But when I use a wordlist, or a charset generated from a Spanish
> wordlist, john never finds the passwords with these characters.

NTLM uses UTF-16 for its input plaintexts, but john can currently only 
read 8-bit wordlists. The current NTLM code performs the conversion by 
simply inserting a null byte after every input byte. The upside is that 
this is very fast. The downside is that it is only correct for 
ISO-8859-1 wordlists (IIRC), and for subsets of it such as ASCII.
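To illustrate, the null-insertion trick works because every ISO-8859-1 byte value equals the corresponding UTF-16 code unit, so "conversion" is just interleaving zero bytes. A minimal sketch (function name and layout are illustrative, not john's actual code):

```c
#include <stddef.h>

/* Convert an ISO-8859-1 (Latin-1) byte string to UTF-16LE by
 * interleaving null high bytes. Valid only because Latin-1 code
 * points 0x00-0xFF map 1:1 onto UTF-16 code units 0x0000-0x00FF.
 * Returns the output length in bytes (always 2 * len).
 * Hypothetical sketch, not john's actual implementation. */
static size_t latin1_to_utf16le(const unsigned char *src, size_t len,
                                unsigned char *dst)
{
    size_t i;
    for (i = 0; i < len; i++) {
        dst[2 * i]     = src[i]; /* low byte: the Latin-1 code point */
        dst[2 * i + 1] = 0x00;   /* high byte: always zero */
    }
    return 2 * len;
}
```

For any wordlist byte that is not Latin-1 (e.g. a UTF-8 encoded ñ, which is two bytes, 0xC3 0xB1), this produces two wrong UTF-16 code units instead of one correct one, which is exactly why such candidates never match.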

You simply can't work around this without modifying john.

Such a modification could be done in different ways. One way would be to 
optionally accept UTF-16LE wordlist files, but that would require 
rewriting lots of code that assumes null-terminated strings. Another way 
would be a real conversion from, say, UTF-8 to UTF-16LE. I made some 
experiments with this. Without some tricks and optimizations it incurs a 
HUGE performance hit, but it was easy to optimize it to be almost as 
fast as before, as long as the input contains only ISO-8859-1 
characters (i.e. no multi-byte UTF-8 sequences).
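The optimization hinted at above can be sketched as a converter with a cheap fast path for single-byte (ASCII) input and a slow path only for multi-byte sequences. This is my own hedged sketch, limited to the BMP (no surrogate pairs), and not the code from those experiments:

```c
#include <stddef.h>

/* Convert UTF-8 to UTF-16LE, BMP only (1- to 3-byte sequences).
 * The c < 0x80 branch is the fast path that makes pure-ASCII input
 * nearly free; only real multi-byte sequences pay the decoding cost.
 * Returns the output length in bytes, or -1 on malformed/truncated
 * input. Illustrative sketch, not john's actual code. */
static long utf8_to_utf16le(const unsigned char *src, size_t len,
                            unsigned char *dst)
{
    size_t i = 0, o = 0;
    while (i < len) {
        unsigned int cp;
        unsigned char c = src[i];
        if (c < 0x80) {                                 /* fast path */
            cp = c;
            i += 1;
        } else if ((c & 0xE0) == 0xC0 && i + 1 < len) { /* 2-byte seq */
            cp = ((unsigned int)(c & 0x1F) << 6)
               | (src[i + 1] & 0x3F);
            i += 2;
        } else if ((c & 0xF0) == 0xE0 && i + 2 < len) { /* 3-byte seq */
            cp = ((unsigned int)(c & 0x0F) << 12)
               | ((unsigned int)(src[i + 1] & 0x3F) << 6)
               | (src[i + 2] & 0x3F);
            i += 3;
        } else {
            return -1;  /* malformed, truncated, or beyond the BMP */
        }
        dst[o++] = (unsigned char)(cp & 0xFF); /* UTF-16LE low byte */
        dst[o++] = (unsigned char)(cp >> 8);   /* UTF-16LE high byte */
    }
    return (long)o;
}
```

With this, the UTF-8 bytes 0xC3 0xB1 (ñ) decode to the single code unit U+00F1 that NTLM actually hashes, instead of two bogus units.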

But I never succeeded in actually cracking anything with it. I suspect 
the hashing itself is so heavily optimized that it only supports the 
every-other-byte-null layout, but maybe I just did something wrong?

