Date: Thu, 8 Aug 2013 23:42:57 +0400
From: Solar Designer <>
Subject: Re: Sayantan's Weekly Report #7


On Wed, Aug 07, 2013 at 08:59:26AM +0530, Sayantan Datta wrote:
> Loading a large number of hashes means that it takes more time to check
> those hashes than to generate them.

This shouldn't be the case, at least not for sane hash counts (like
millions rather than billions).  For example, according to the figure
bartavelle got from atom, oclHashcat achieves 5400M c/s when running on
100k raw MD5 hashes:

This is almost the same speed that it achieves when running on just one
loaded hash.  (oclHashcat-lite achieves a higher speed, but that's for a
different reason.)

IIRC, myrice's PG-test branch from last year achieved around 3000M c/s
when running on 1M or 10M loaded hashes.

> I have a few plans, such as using local memory to store as many hashes
> as possible; it should easily store 2048 hashes or even more, depending
> on how many bytes per hash are being stored. Then cache them on the fly
> if more hashes are loaded than can be stored in local memory. Or use
> the global memory entirely, because caching will have some overhead
> too.

You keep ignoring the advice that we're giving you. :-(

Please do take a look at and try running myrice's PG-test code.  Please
also take a look at and try running New-Multiforcer:

I don't know about hashkill, but I guess it contains bitmaps for fast
hashes too.  Milen, can you comment?

