Date: Thu, 13 Aug 2015 22:59:21 -0800
From: Royce Williams <royce@...ho.org>
To: john-users@...ts.openwall.com
Subject: handling of high numbers of single-salt descrypt hashes

I am trying to load an unusually large number of descrypt hashes (128
million), all with the same salt. (Yes, I know that this is bizarre.
:)  If the answer is "don't do that", I will be OK with that -- I can
use hashcat for this analysis instead, which handles this job with no
problem.

I am using:

John the Ripper 1.8.0.6-jumbo-1-707-g916b74b+ OMP [linux-gnu 64-bit XOP-ac]

Doing a naive binary walk of hash counts on a single GPU, with 128
million or 64 million hashes, I get:

mem_alloc(): Cannot allocate memory trying to allocate -795869184 bytes


At 32 million and 16 million, I get:

OpenCL error (CL_MEM_OBJECT_ALLOCATION_FAILURE) in file
(opencl_DES_bs_f_plug.c) at line (524) - (Enque Kernel Failed)


At 8 million, I successfully start guessing, but the rate is very slow.

With all six of my GPUs, I get similar results, but some of the errors
cause john to exit uncleanly in such a way that memory on the GPUs is
not freed.  I know that a reboot will clear the issue, but I'm not
sure what other methods are possible.

Again, if this kind of workload is not supported, that's no problem.
There may be an opportunity to fail more gracefully, but it's
obviously not a high priority.  One could consider this to be
exploration of a different set of boundary conditions of the
application. :)

Royce
