Date: Fri, 1 Feb 2013 19:48:15 +0100
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: NTLMv1 and MSCHAPv2 (was: NetNTLMv1)

On 1 Feb, 2013, at 18:22, Solar Designer <solar@...nwall.com> wrote:
> On Fri, Feb 01, 2013 at 05:58:44PM +0100, magnum wrote:
>> Fixed. I have also merged the same exploit and SIMD code to MSCHAPv2.
>> That format is very similar - in JtR all main functions are identical
>> except get_salt(), so this was very easy. Actually, we should wrap
>> them in the same source file so we don't need to incorporate the same
>> optimizations into both formats in the future.
>
> Cool.
>
> I think a similar trick may be possible for NetLM as well - in fact, I
> think that's what mudge's examples from the 1997 posting found by
> Alexander were about. I just don't have time and desire to look into it
> myself now - but you may. ;-)

I do not think you'll see many LM c/r in the wild, but just for the hell
of it... BTW, I have become quite acquainted with the JtR internals over
time, but the one thing I do not have the slightest grasp of is the DES
stuff - not even the non-BS code. OTOH, just implementing the block 3
exploit should be trivial. I might do that.

> Here's a concern about these optimizations, though: they slow down the
> loading, which may be nasty if someone is cracking a very large number
> of C/R pairs at once. I think the 32k DES encryptions with OpenSSL's
> code on one CPU core may be taking about 10 ms, which means a loading
> speed of 100 C/R pairs per second. With 1 million to load, that's
> 3 hours. Is it realistic that someone will have this many?

And this is twice as bad now, after I implemented the block 3 DES check
in valid(). I should save the result from that test and re-use it in
binary() after a sanity check. I think I'll do this ASAP.

BTW, when I implemented that, I was wondering whether we could add a
late reject for these cases: if binary() returns NULL, we got a late
reject.
Would it be too late to efficiently handle that in the loader?

> Should we possibly print a warning when we determine that the number of
> C/R pairs is 100 or larger? Should we provide an alternative mode -
> like the code we had before? Some way to invoke John to crack the 3rd
> DES blocks only and record the 16-bit results for reuse by subsequent
> invocations?

You mean reverting to v1 of the optimization, right?

> Another concern: when the number of responses per challenge is very
> large, these optimizations may actually be slowing things down, because
> we're no longer providing hash functions with larger than 16-bit
> output. Is it realistic that someone would have millions of responses
> for the same challenge? Perhaps if a fixed challenge is provided in an
> active attack? The table lookup version would be better in this case
> too.

Maybe you should ask the world in a tweet... I have no idea if anyone
needs support for these cases.

If we do this, I think we should start with putting mschapv2 and ntlmv1
in one and the same source file, with shared functions.

magnum
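[For list readers following along, here is a rough Python sketch of why
the "block 3 exploit" is so cheap. This is not JtR code; the function
name, the placeholder hash, and the structure are mine. NTLMv1 and
MSCHAPv2 pad the 16-byte NT hash to 21 bytes with zeros and split it
into three 7-byte DES key halves, so the third key half contains only
two unknown hash bytes - at most 2**16 candidate keys, roughly the 32k
trial encryptions mentioned above on average.]

```python
# Sketch (illustrative, not JtR source): the third DES key of an
# NTLMv1/MSCHAPv2 response depends on only 2 bytes of the NT hash.

def setup_des_key(key7: bytes) -> bytes:
    """Expand a 7-byte (56-bit) key half into an 8-byte DES key,
    putting each successive 7-bit group in a byte's high bits and
    setting the low bit to give the byte odd parity."""
    assert len(key7) == 7
    bits = int.from_bytes(key7, "big")
    out = bytearray()
    for i in range(8):
        seven = (bits >> (49 - 7 * i)) & 0x7F  # next 7 bits, MSB first
        b = seven << 1
        if bin(b).count("1") % 2 == 0:         # force odd parity
            b |= 1
        out.append(b)
    return bytes(out)

nt_hash = bytes(range(16))       # placeholder 16-byte NT hash
padded = nt_hash + b"\x00" * 5   # protocol pads the hash to 21 bytes
k1, k2, k3 = padded[0:7], padded[7:14], padded[14:21]

# Only hash bytes 14-15 are unknown in the third key half; the rest
# is the fixed zero padding, hence a 2**16 brute-force in the loader.
assert k3 == nt_hash[14:16] + b"\x00" * 5
des_key3 = setup_des_key(k3)     # 8-byte parity-adjusted DES key
```

Loading a response then amounts to trying all 2**16 values for those
two bytes, DES-encrypting the challenge with each candidate key, and
keeping the value whose output matches the last 8 bytes of the
response - which is exactly the per-pair cost Alexander is worried
about at load time.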