Date: Tue, 10 Dec 2013 01:31:49 +0100
From: magnum <>
Subject: Re: md5crypt-opencl

On 2013-12-10 01:00, Lukas Odzioba wrote:
> 2013/12/10 magnum <>:
>> Great. Would it be too much work for you to post a complete patch (assuming
>> it seems stable) with your current code first, for committing? I'd like a
>> few milestones rather than a giant untracked leap.
> No problem, tomorrow I'll retest the code and post a patch.

Great, thanks!

>> Eventually we should come up with an idea how to best (quick, simple, safe)
>> sort candidates per length. This would be a very good boost for rar-opencl
>> too but I have yet to come up with an idea simple enough to be interesting.
>> I really want it simple and beautiful or I'll leave it alone.
> What about a counting/bucket sort with the logic inside
> set_key()/crypt_all()? It should be dead simple, with roughly one extra
> memcpy of overhead. This way we could easily "compress" candidates too.
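The bucket-sort idea above could be sketched roughly like this (not actual JtR code; all names such as bucket_add, MAX_LEN and BUCKET_CAP are hypothetical): set_key() drops each candidate into a per-length bucket with a single memcpy, and crypt_all() would then run one contiguous, same-length batch per kernel call.

```c
#include <string.h>

#define MAX_LEN    15         /* e.g. an md5crypt-style plaintext limit */
#define BUCKET_CAP 1024       /* per-length capacity, e.g. KPC / #lengths */

static char buckets[MAX_LEN + 1][BUCKET_CAP][MAX_LEN + 1];
static int  bucket_count[MAX_LEN + 1];

/* Store one candidate in the bucket for its length. Returns 1 when
 * that bucket just filled up (a full same-length batch is ready for
 * crypt_all()), 0 otherwise, -1 if the key can't be stored. */
static int bucket_add(const char *key)
{
    size_t len = strlen(key);

    if (len > MAX_LEN || bucket_count[len] >= BUCKET_CAP)
        return -1;            /* reject rather than overflow */
    memcpy(buckets[len][bucket_count[len]++], key, len + 1);
    return bucket_count[len] == BUCKET_CAP;
}
```

Since candidates of one length end up contiguous, the host-to-device copy for a batch is a single transfer with no per-key padding logic in the kernel.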

Yes, but the devil is (or might be) in the details. What if we need 1M 
keys to get good performance so we set KPC to 1M, but the candidates 
come in 10 different lengths? We might get just 100K keys per length. Or 
worse: 990K of one length and VERY low numbers of some other lengths. So 
should we bump KPC a *lot*? That has drawbacks. And still, your length 8 
bucket may be "full" while the length 9 bucket has too few candidates. 
At one point I played with the thought that set_key() should decide when 
to call crypt_all() - and for what candidate bucket... but that starts 
to get far too complicated.

OTOH maybe just bumping KPC to 2-5 times the target (minimum) GWS would 
solve the problem for most IRL use, despite some suboptimal kernel 
calls. Maybe we should just try that.
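A toy calculation of the skewed-length scenario, under the assumption that a kernel is dispatched whenever a bucket reaches GWS keys and any leftovers are flushed as underfilled calls at the end (the numbers and names are illustrative only):

```c
enum { GWS = 1000, NLEN = 10 };  /* target work size, number of lengths */

static int full_calls, partial_calls;

/* Given per-length candidate counts, tally optimal GWS-sized kernel
 * calls versus suboptimal, underfilled flush calls. */
static void tally_calls(const int counts[NLEN])
{
    full_calls = partial_calls = 0;
    for (int i = 0; i < NLEN; i++) {
        full_calls += counts[i] / GWS;   /* well-filled dispatches */
        if (counts[i] % GWS)
            partial_calls++;             /* leftover partial flush */
    }
}
```

With a 990K/one-length skew and a handful of stragglers per other length, almost all calls come out full and only the final flushes are underfilled, which is the intuition behind just over-allocating KPC relative to GWS.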

