Date: Fri, 14 Aug 2015 11:12:44 +0200
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: handling of high numbers of single-salt descrypt hashes

On 2015-08-14 08:59, Royce Williams wrote:
> I am trying to load an unusually large number of descrypt hashes (128
> million), all with the same salt. (Yes, I know that this is bizarre.
> :)  If the answer is "don't do that", I will be OK with that -- I can
> use hashcat instead for this analysis, which can handle this job with
> no problem.

Bizarre or not, I think we should be able to handle that, given 
sufficient memory on the host as well as on the GPU.

> Doing a naive binary walk of hash counts, using a single GPU, with 128
> million or 64 million hashes, I get:
>
> mem_alloc(): Cannot allocate memory trying to allocate -795869184 bytes

First, there was a bug in this error message: it printed a size_t as a 
signed integer. That is now fixed. But if this was a 64-bit build, I 
really wonder how the requested size could get that large at all. Looks 
like a bug.

I leave the rest for Solar and Sayantan to ponder upon.

magnum
