Date: Thu, 4 Jun 2015 20:13:20 +0200
From: Agnieszka Bielec <bielecagnieszka8@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: PHC: Lyra2 on CPU

2015-06-02 21:34 GMT+02:00 Solar Designer <solar@...nwall.com>:
> On Tue, Jun 02, 2015 at 07:26:52PM +0200, Agnieszka Bielec wrote:
>> >> I think that method b) is slower because we use synchronization many
>> >> times and have barriers for all OMP threads.
>> >
>> > Maybe.  You can test this hypothesis by benchmarking at higher cost
>> > settings and thus at lower c/s rates.  At lower c/s rates, the
>> > synchronization overhead becomes relatively lower.
>>
>> I chose only the highest observed speeds for these tests:
>> ; 8896/9144
>>         ~0.97287839020122484689
>> ; 2312/2368
>>         ~0.97635135135135135135
>
> Can you explain these numbers?  I guess these are for two code versions
> and two cost settings (that differ by a factor of 4), and you show that
> the overhead has stayed the same.  Correct?  Yet I'd like more specifics
> on what you benchmarked here.

right: version c)
left: version b)
upper ratio: cost=8,8
lower ratio: cost=16,16

the overhead isn't the same - it shrinks slightly at the higher cost setting
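
spelled out:

    cost=8,8:   8896/9144 ~= 0.9729  ->  b) is ~2.7% slower than c)
    cost=16,16: 2312/2368 ~= 0.9764  ->  b) is ~2.4% slower than c)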

>> > If confirmed, a way to reduce the overhead at higher c/s rates as well
>> > would be via computing larger batches of hashes per parallel for loop.
>> > This is what we normally call OMP_SCALE, but possibly applied at a
>> > slightly lower level in this case.
>>
>> Lyra2 uses barriers within a single hash computation, so I'm not sure -
>> maybe I don't understand your point.
>
> We have multiple hashes to compute, so even if one instance of Lyra2 has
> barriers within it we can increase the amount of work done between
> barriers by bringing our higher-level parallelism (multiple candidate
> passwords) down to that level (to inside of Lyra2 implementation).

But don't we need more memory to do this?
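
If I understand it correctly, the restructuring would look roughly like
this (a hypothetical sketch in C/OpenMP - lyra2_phase(), BATCH and
PHASES are made-up names, not the real code):

    /* Hash a whole batch of candidates per parallel region, so each
     * barrier separates one phase of BATCH hashes instead of the
     * steps of a single hash computation. */
    #include <omp.h>

    #define BATCH  64   /* candidate passwords per batch */
    #define PHASES 3    /* e.g. setup, wandering, wrap-up */

    extern void lyra2_phase(int phase, int candidate); /* made up */

    static void crack_batch(void)
    {
    #pragma omp parallel
        for (int phase = 0; phase < PHASES; phase++) {
    #pragma omp for
            for (int i = 0; i < BATCH; i++)
                lyra2_phase(phase, i);
            /* implicit barrier at the end of the omp for now covers
             * BATCH hashes worth of work, not one */
        }
    }

Each of the BATCH candidates needs its own memory matrix while its
phases are in flight, so memory usage grows with BATCH - that's what I
mean about memory.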

>> > While we're at it, have you moved the memory (de)allocation out of the
>> > cracking loop?  And have you done it for POMELO too, as we had
>> > discussed - perhaps reusing the allocators from yescrypt?  I don't
>> > recall you reporting on this, so perhaps this task slipped through the
>> > cracks?  If so, can you please (re-)add it to your priorities?

I implemented it for Lyra2 versions b) and c); a) is the worst, so I
won't be working on a).
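
The shape of the change, as a simplified sketch (region_t and the
helpers are modeled on yescrypt's region allocator, but the names and
the Lyra2 glue here are illustrative, not the actual code):

    #include <stddef.h>

    #define WORST_CASE_BYTES (1 << 24)  /* placeholder size */

    typedef struct { void *base; size_t size; } region_t;

    extern void *alloc_region(region_t *r, size_t size); /* yescrypt-style */
    extern int free_region(region_t *r);
    extern void lyra2_hash(void *mem, int index);        /* made up */

    static region_t memory;  /* allocated once, reused across crypt_all() */

    static void init(void)
    {
        /* allocate up front, for the worst-case cost setting */
        alloc_region(&memory, WORST_CASE_BYTES);
    }

    static int crypt_all(int *pcount)
    {
        for (int i = 0; i < *pcount; i++)
            lyra2_hash(memory.base, i);  /* no malloc()/free() per hash */
        return *pcount;
    }

    static void done(void)
    {
        free_region(&memory);
    }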
