Date: Sun, 21 Jun 2015 01:30:52 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: bcrypt-opencl local vs. private memory

On 2015-06-20 23:04, Solar Designer wrote:
> magnum, Sayantan -
>
> On Tue, Jun 16, 2015 at 05:06:40PM +0300, Solar Designer wrote:
>> Sayantan took care of this at the time, in commit
>> 97545b7ab51a4e8ddccba1a098f5448d808ae39b which includes:
>>
>> #if gpu_nvidia(DEVICE_INFO)
>> #define MAYBE_LOCAL             __private
>> #else
>> #define MAYBE_LOCAL             __local
>> #endif
>>
>> I've just tested this on our GTX 570 as well, and unfortunately it
>> actually hurts performance there.  (Even though it was helping on our
>> TITAN a bit.)  With the above commit, I am getting around 400 c/s.
>> With forced use of __local, it's around 1200 c/s.
>>
>> I merely want to document this in here.  I am not suggesting making any
>> further change yet.  Either of these speeds is quite low anyway.
>>
>> I am currently discussing bcrypt on GPU with Alain (via off-list
>> e-mail), who managed to achieve much higher speeds anyway, including on
>> his GTX 590 (per GPU).
>
> (Un)fortunately, Alain's initial results that I mentioned above were
> wrong.  So no "much higher speeds" for us.  I think we need to fix the
> original issue described above somehow.  magnum, can we possibly have
> this local vs. private bit autodetected along with GWS and LWS?

Well, the bcrypt format could do so; that would be for Sayantan to 
implement. For now, however, I just committed a workaround: simply using 
nvidia_sm_5x() instead of gpu_nvidia().
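For reference, a sketch of what that workaround amounts to, based on the 
macro block quoted above (nvidia_sm_5x() is assumed to be the 
corresponding device-info test from JtR's opencl_device_info.h; exact 
spelling may differ in the tree):

```c
/* Workaround sketch: only use __private on Maxwell (SM 5.x) NVIDIA
 * GPUs, where it was seen to help (e.g. on TITAN), and keep __local
 * everywhere else, including Fermi parts such as the GTX 570 where
 * __private hurt performance (~400 c/s vs ~1200 c/s with __local). */
#if nvidia_sm_5x(DEVICE_INFO)
#define MAYBE_LOCAL             __private
#else
#define MAYBE_LOCAL             __local
#endif
```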

BTW, for my Kepler GPU I see no difference between using local and private.

magnum

