Date: Wed, 05 Aug 2015 12:29:50 +0200
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: CUDA support

On 2015-08-05 09:14, Marek Wrzosek wrote:
> What about the information in the README-CUDA file? How old is it? Is
> it still relevant? In particular the "OpenCL vs. CUDA parlor:"
> paragraph. I don't have an NVidia card, but I'm curious. Do all OpenCL
> formats auto-tune to the same LWS and GWS values? Is CUDA still
> faster when THREADS and BLOCKS are changed accordingly?

All our OpenCL formats auto-tune to (semi) optimal workgroup size 
figures. The figures vary by card and format; if you end up with some 
figure for WPA, it likely won't be any good for NT.

I do not think any CUDA format is significantly faster nowadays, and if 
any is, we will be able to improve the OpenCL format. Here's a 
md5crypt comparison with the latest code:

$ ../run/john -test -form:md5crypt-opencl -dev=2
Device 2: GeForce GT 650M
Benchmarking: md5crypt-opencl, crypt(3) $1$ [MD5 OpenCL]... DONE
Raw:	216647 c/s real, 26214K c/s virtual

Using -v=4 I see local worksize (LWS) 64 and global worksize (GWS) 65536. 
For CUDA that means threads=64 and blocks=1024. I edited those values, 
rebuilt and tried:

$ ../run/john -test -form:md5crypt-cuda
Benchmarking: md5crypt-cuda, crypt(3) $1$ [MD5 CUDA]... DONE
Raw:	121362 c/s real, 121362 c/s virtual
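For reference, the arithmetic behind that mapping, just restating the numbers above: CUDA threads per block correspond to the OpenCL LWS, and the block count is GWS divided by LWS:

```shell
# LWS/GWS -> CUDA launch geometry: threads per block = LWS,
# blocks = GWS / LWS (here 65536 / 64 = 1024).
LWS=64
GWS=65536
BLOCKS=$((GWS / LWS))
echo "threads=$LWS blocks=$BLOCKS"   # prints: threads=64 blocks=1024
```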

The example from the doc seems to be obsolete after improvements to the 
OpenCL format.

> Is git "complaining" a lot during pulling when there are changes made
> by a user in cuda_*.h files?

Only if a locally changed file was also changed upstream. Since no-one's 
maintaining these formats, this does not happen...
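If you do carry local edits to cuda_*.h and upstream touches those files, one common way to pull cleanly is to stash the edits around the pull. A minimal sketch, assuming you run it inside your john checkout (file names here are illustrative):

```shell
# Shelve local edits (e.g. tweaked THREADS/BLOCKS in cuda_*.h),
# update from upstream, then reapply the edits. git only reports
# a conflict if upstream changed the very same lines.
git stash
git pull
git stash pop
```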

magnum
