Date: Sun, 29 Apr 2012 02:11:45 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: New RAR OpenCL kernel

I'm currently trying to vectorize the RAR format. I reckon it might be a
win even on scalar platforms, since each work-item does 4x the work for
1x the potential branches and loop overhead. Does this make sense?

AFAIU I will *have* to use fixed-length kernels, because of how the rar
format is laid out.

I do have a vectorized version almost running now, but some detail is
wrong with it (well, to be specific, it segfaults :)). Probably
something silly on the host side, like not allocating correctly - the
host code was very fond of the idea that GWS == keys per crypt, and I
have probably not found all instances of that.

magnum


On 04/26/2012 12:13 AM, Milen Rangelov wrote:
>>
>> The funny thing is I got the same 4400 c/s anyway. It got better in
>> theory (less suggestions from the tool) but in practice it stayed the
>> same. For GTX580 I'm using 8192 now since higher figures don't make
>> any difference.
>>
> It's not that funny (you probably have the same problem as I do). I
> have that problem with my progress indicator, heh. The kernel executes
> so slowly - I don't know how you manage it in JtR, but I guess it's
> something similar. I have a thread that wakes up every 3 seconds and
> displays the speed based on the candidates tried in that interval. A
> kernel invocation usually takes 1-2 seconds and tries, say, an NDRange
> of 128*128 candidates, so the measured speed usually lands around the
> same 128*128*(1 or 2) value. You can't even measure current speed in a
> *nice* way. That's why I'm thinking of introducing an "average speed"
> for my program; it would be much more realistic for cases like the rar
> one :)
> 

