Date: Tue, 31 Mar 2015 16:34:58 +0200
From: Agnieszka Bielec <bielecagnieszka8@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: [GSoC] John the Ripper support for PHC finalists

Tue, 31 Mar 2015 04:10:04 -0700, epixoip:
> So that's about 4x faster than with the autotune settings, and about 3x
> slower than my CPU:

POMELO will never be fast on GPU because __local and __private memory
are limited:
http://stackoverflow.com/questions/5237181/is-there-a-limit-to-opencl-local-memory
"To illustrate, on NVidia Kepler GPUs, the local memory size
is either 16 KBytes or 48 KBytes (and the complement to 64 KBytes
is used for caching accesses to Global Memory). So, on GPUs of today,
local memory is very small relative to the global device memory."
I am forced to use __global memory, which is significantly slower.
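
To illustrate what I mean, here is a rough OpenCL sketch (simplified, not
the actual kernel; the names and sizes are only examples I made up): a
__local buffer is capped at around 16-48 KB per compute unit on Kepler,
while the per-hash POMELO state is far larger, so most of it has to live
in __global memory:

#define STATE_WORDS 16384   /* per-hash state in 64-bit words (example value) */

__kernel void local_vs_global_sketch(__global ulong *state)
{
    /* 2048 * 8 bytes = 16 KB: already at the __local limit of some GPUs,
       while the full per-hash state above is far larger.
       Assumes a work-group size of at most 2048. */
    __local ulong scratch[2048];

    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    /* Only a small window fits in __local memory; the rest of the state
       has to be read from and written to the much slower __global space. */
    scratch[lid] = state[gid * STATE_WORDS + lid];
    state[gid * STATE_WORDS + lid] = scratch[lid] + 1;
}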

I've reviewed the POMELO algorithm and it is impossible to refactor
the code to use a small amount of memory at a time, because it needs
to read values at random indexes of the buffer (see the sketch below).
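
Roughly, the access pattern looks like this (again only a simplified
sketch, not the real POMELO round function; STATE_WORDS and the index
formula are made up for illustration):

#define STATE_WORDS 16384   /* made-up size, only for illustration */

__kernel void random_index_sketch(__global ulong *state)
{
    size_t base = get_global_id(0) * (size_t)STATE_WORDS;

    /* Each update reads from a data-dependent ("random") index, so any
       word of the buffer can be needed at any step and the whole state
       has to stay addressable - it cannot be split into small chunks. */
    for (uint i = 0; i < STATE_WORDS; i++) {
        size_t j = (size_t)(state[base + i] & (ulong)(STATE_WORDS - 1));
        state[base + i] += state[base + j];
    }
}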

