Date: Fri, 20 Jan 2012 22:55:02 +0400
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: JtR OpenCL patch

On Fri, Jan 20, 2012 at 02:59:48PM +0100, Lukas Odzioba wrote:
> ukasz@...kstar$./john -test --format=cryptmd5-opencl -gpu=1
> OpenCL Platforms: 1
> OpenCL Platform: <<<AMD Accelerated Parallel Processing>>> 2
> device(s), using device: <<<Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz>>>
> Benchmarking: CRYPTMD5-OPENCL [MD5-based CRYPT]... DONE
> Raw:    15620 c/s real, 3963 c/s virtual

Does this use multiple logical CPUs or just one?

Is it like 3963 c/s per logical CPU, 15620 c/s for four logical CPUs
total?  I think Core i3-2100 has 2 cores, 4 logical CPUs total.

Actually, these numbers are quite reasonable given that your MD5 code is
far less optimal than what we normally use in JtR on CPU.  This also
shows that there's lots of room for optimization of your OpenCL kernel
for both CPU and GPU.

> Yes, the OpenCL code is still much slower than CUDA. I am going to do
> something about this; the target for next month is to beat CUDA. I
> spent some time in December trying to vectorize the phpass code (using
> vector data types - uint4), but because of some bug I could not finish
> it. At the moment there is more literature about OpenCL on the
> Internet than half a year ago, so making the OpenCL code faster should
> be easier.

Thank you for working on this!

Alexander
