Date: Mon, 7 Dec 2015 15:54:49 +0300
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: scrypt regression (was Re: hashcat CPU vs. JtR)

On Mon, Dec 07, 2015 at 12:58:32AM +0100, magnum wrote:
> On 2015-12-06 15:53, Solar Designer wrote:
> >>[solar@...er src]$ GOMP_CPU_AFFINITY=0-31 ../run/john -test -form=scrypt
> >>Will run 32 OpenMP threads
> >>Benchmarking: scrypt (16384, 8, 1) [Salsa20/8 128/128 AVX]... (32xOMP) 
> >>DONE
> >>Speed for cost 1 (N) of 16384, cost 2 (r) of 8, cost 3 (p) of 1
> >>Raw:    878 c/s real, 27.6 c/s virtual
> >>
> >>(BTW, I think this used to be ~960 c/s.  Looks like we got a performance
> >>regression we need to look into, or just get the latest yescrypt code in
> >>first and then see.)
> 
> When was that? I see no regression compared to Jumbo-1.

I probably recall incorrectly.  I guess we never integrated the faster
code into jumbo - it still uses the pretty ancient escrypt-lite code.
We should update to the latest yescrypt code (although I have yet to
finalize the string encoding for yescrypt native hashes).

I've just tested yescrypt-0.7.1 and yescrypt-0.8.1 by editing their
userom.c to use "#define YESCRYPT_FLAGS 0" (which requests classic
scrypt) and running "GOMP_CPU_AFFINITY=0-31 ./userom 0 16".  Both
reported 926 c/s with gcc 4.9.1.  Going back to RHEL6's default gcc
4.4.7 gave 945 c/s.

Availability of huge pages may also make a difference
(yescrypt-platform.c currently has HUGEPAGE_THRESHOLD at 12 MiB), but
I've just tried allocating them with sysctl and it didn't change the
numbers above on this machine.
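
For reference, with these parameters each scrypt instance needs
128*N*r = 128*16384*8 bytes = 16 MiB, which is above that 12 MiB
threshold, so 32 threads want about 512 MiB of huge pages.  The
reservation itself is just (a sketch assuming the default 2 MiB huge
page size on x86-64; the page count is derived from the numbers above,
not what was actually set on this machine):

    sysctl -w vm.nr_hugepages=256      # 256 * 2 MiB = 512 MiB reserved
    grep -i hugepages /proc/meminfo    # verify HugePages_Total / HugePages_Free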

I think I saw 960 c/s for some other revision, but I can't find it now.

Alexander
