Date: Mon, 19 Dec 2011 00:40:25 +0400
From: Solar Designer <>
Subject: John the Ripper 1.7.9-jumbo-5; 1.7.9 for Windows re-release


There's finally a -jumbo based on 1.7.9.  The latest is currently
1.7.9-jumbo-5.  (-jumbo-1 through -jumbo-4 existed for some hours only.)

Available at the usual places are 1.7.9-jumbo-5 tarballs, as well as a
binary build for Windows, including with OpenMP support.

I've also re-released the Windows build of plain 1.7.9, correcting an
issue in cygwin1.dll that affected external mode runs (and maybe more)
with john-omp.exe (the re-release is - with the "2").

Please download these at:

Here's a summary of changes in 1.7.9-jumbo-5 made since 1.7.8-jumbo-8:

* -jumbo has been rebased on 1.7.9, thereby including the enhancements
made in the main tree since 1.7.8.  (magnum)

* Support for cracking of RADIUS shared secrets has been added.  (Didier)
(Currently, the usage instructions are those for Didier's original
contribution; they have not been updated for 1.7.9-jumbo-5 specifically yet.)

* Raw SHA (SHA-0) support has been added.  (magnum)

* MSSQL (old and 2005) and MySQL (SHA-1 based) hash computation has
been optimized (these are 3 times faster on linux-x86-64i now).  (magnum)
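For reference, the "SHA-1 based" MySQL hash here is the MySQL 4.1+
scheme, which simply applies SHA-1 twice.  A minimal sketch in Python
(illustrative only - JtR's optimized SSE2 code does the same computation
much faster):

```python
import hashlib

def mysql41_hash(password: bytes) -> str:
    """MySQL 4.1+ password hash: '*' followed by hex(SHA1(SHA1(password)))."""
    inner = hashlib.sha1(password).digest()  # first SHA-1, binary output
    return "*" + hashlib.sha1(inner).hexdigest().upper()

# Matches MySQL's own PASSWORD('password')
print(mysql41_hash(b"password"))  # *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
```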

* Lotus5 hash computation has been optimized, and optional OpenMP
parallelization has been added for it (now 12 times faster on an 8-core
linux-x86-64 machine).  (Solar)

* x86-64 builds now make use of SSE2 intrinsics for more of the hash and
cipher types.  (magnum, JimF)

* More i-suffixed make targets have been added (which use an icc-generated
assembly file for SSE2 intrinsics), including for 32-bit x86 builds
(previously, this was only available for x86-64).  (magnum, JimF)

* MD4 implementation in hand-written assembly for x86/SSE2 and MMX has
been added.  (Bartavelle, magnum, JimF)

* Assorted changes to dynamic formats (previously known as "generic MD5")
have been made.  (JimF)

* The "pix-md5" format has been converted to be a wrapper of "$dynamic_19$",
which uses much faster code.  (JimF)

* An alternate implementation of NTLM hashing has been added
(--format=nt2), using Bartavelle's SSE2 intrinsics instead of Alain's
explicit assembly code.  (magnum)

* The NSLDAPS and OPENLDAPS formats have been obsoleted in favor of the
salted-sha1 format.  (magnum)

* A binary build of 1.7.9-jumbo-5 for Windows (including with OpenMP) has
been made.  (Solar)

There's a known reliability regression with HMAC-MD5, and known
reported-performance regressions with NTLM and CRC-32.  For HMAC-MD5,
you may want to use 1.7.8-jumbo-8 for now.  For NTLM there's probably
no actual slowdown, but you may try --format=nt2 and see if that is
faster (on some systems/builds, it should actually be faster).  For
CRC-32, performance does not actually matter.
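Until the HMAC-MD5 regression is resolved, individual results can also be
double-checked independently: HMAC-MD5 is just standard RFC 2104 HMAC with
MD5 as the underlying hash, e.g. via Python's standard library:

```python
import hmac

# HMAC-MD5 over a well-known test message with key "key"
digest = hmac.new(b"key",
                  b"The quick brown fox jumps over the lazy dog",
                  "md5").hexdigest()
print(digest)  # 80070713463e7749b90c2dc24911e275
```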

Overall, this new version is faster.  Non-OpenMP on E5420 (using one CPU
core only), "make linux-x86-64i", 1.7.8-jumbo-8 to 1.7.9-jumbo-5:

Number of benchmarks:           153
Minimum:                        0.81602 real, 0.81602 virtual
Maximum:                        6.70472 real, 6.63777 virtual
Median:                         1.00484 real, 1.00384 virtual
Median absolute deviation:      0.01990 real, 0.02303 virtual
Geometric mean:                 1.10899 real, 1.10841 virtual
Geometric standard deviation:   1.33332 real, 1.33278 virtual

The 18% slowdown is for CRC-32, which does not matter, whereas there's
also a 6.7x speedup.  The geometric mean suggests an 11% overall speedup.
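For those curious how such summary figures are derived: each benchmark
yields a ratio of new to old c/s rate, and the geometric mean (rather
than the arithmetic mean) is the appropriate average over ratios.  A
small illustration with made-up ratios (the real comparison aggregates
153 of them):

```python
import math
import statistics

# Hypothetical per-format speedup ratios (new speed / old speed)
ratios = [0.82, 1.00, 1.01, 1.10, 6.70]

# Geometric mean: exp of the arithmetic mean of the logs
geo_mean = math.exp(statistics.fmean(math.log(r) for r in ratios))

print(f"median {statistics.median(ratios):.2f}, "
      f"geometric mean {geo_mean:.2f}")
# -> median 1.01, geometric mean 1.44
```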

Same machine (2xE5420), same make target, OpenMP enabled:

Number of benchmarks:           153
Minimum:                        0.79342 real, 0.56158 virtual
Maximum:                        12.03732 real, 8.47542 virtual
Median:                         1.00202 real, 1.00271 virtual
Median absolute deviation:      0.01500 real, 0.01710 virtual
Geometric mean:                 1.16469 real, 1.06979 virtual
Geometric standard deviation:   1.57763 real, 1.35698 virtual

The worst slowdown of 20% is again for CRC-32, whereas the best speedup
is now 12x, for Lotus5.  The geometric mean suggests a 16% overall speedup.

(A few benchmark results were excluded from the comparison because of
format name and other changes.  1.7.8-jumbo-8 actually reports 160
individual results and 1.7.9-jumbo-5 reports 162, but only 153 of those
could be directly compared.)

Enjoy, and please provide your feedback on john-users.

