Date: Tue, 07 Jan 2014 18:18:23 +0100
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: CUDA multi-device support

On 2014-01-07 16:51, Muhammad Junaid Muzammil wrote:
> Currently, I don't have physical multi GPU access. Will have to look for
> some alternatives.

Otherwise I can get temporary access to a dual-device CUDA machine in a 
week or so, so I can do the pwsafe fixes. Feel free to pick any other 
issue you want from https://github.com/magnumripper/JohnTheRipper/issues 
(just tell us you'll be looking into it so we don't end up doing the same 
things) or just implement a new format of your choice (perhaps one we 
have as CPU-only), or optimize an existing one if possible. CUDA or 
OpenCL, that's up to you.

BTW, another thing you could try if you like is CUDA auto-tuning. Many 
of our OpenCL formats auto-tune their local/global worksizes at startup 
(and we're moving towards using as much shared code as possible for 
that). All our CUDA formats currently have hard-coded threads/blocks, 
defaulting to figures that are non-optimal for high-end GPUs.

magnum
