Date: Sun, 12 Jan 2014 14:25:41 +0500
From: Muhammad Junaid Muzammil <mjunaidmuzammil@...il.com>
To: "john-dev@...ts.openwall.com" <john-dev@...ts.openwall.com>
Subject: Re: CUDA multi-device support

Currently we have the MAX_GPU limit set to 8 in both the OpenCL and CUDA
variants. What was the reason behind that? Both AMD CrossFire and NVIDIA SLI
currently support a maximum of 4 GPU devices.

Junaid

On Tuesday, January 7, 2014, Muhammad Junaid Muzammil wrote:

> Sure, I will post a comment at the issue tracker for whichever issue I choose.
>
> Regards,
> Junaid
>
> On Tuesday, January 7, 2014, magnum wrote:
>
>> On 2014-01-07 16:51, Muhammad Junaid Muzammil wrote:
>>
>>> Currently, I don't have physical multi GPU access. Will have to look for
>>> some alternatives.
>>>
>>
>> Otherwise, I can get temporary access to a dual-device CUDA machine in a
>> week or so, so I can do the pwsafe fixes. Feel free to pick any other issue
>> you want from https://github.com/magnumripper/JohnTheRipper/issues (just
>> tell us you'll be looking into it so we don't end up doing the same things),
>> or just implement a new format of your choice (perhaps one we have as
>> CPU-only), or optimize an existing one if possible. CUDA or OpenCL, that's
>> up to you.
>>
>> BTW, another thing you could try if you like is CUDA auto-tuning. Many of
>> our OpenCL formats auto-tune their local/global worksizes at startup (and
>> we're moving towards sharing as much of that code as possible). All of
>> our CUDA formats currently have hard-coded threads/blocks, defaulting to
>> figures that are non-optimal for high-end GPUs.
>>
>> magnum
>>
>>

