Date: Mon, 13 Jan 2014 09:19:37 +0500
From: Muhammad Junaid Muzammil <mjunaidmuzammil@...il.com>
To: "john-dev@...ts.openwall.com" <john-dev@...ts.openwall.com>
Subject: Re: CUDA multi-device support

Thanks for the info. Previously, I wasn't thinking in terms of
virtualization. With frameworks like DistCL, devices across a
cluster or cloud can be accessed as if they were native devices.


On Sun, Jan 12, 2014 at 8:08 PM, magnum <john.magnum@...hmail.com> wrote:

> On 2014-01-12 14:16, Jeremi Gosney wrote:
>
>> On 1/12/2014 1:25 AM, Muhammad Junaid Muzammil wrote:
>>
>>> Currently we have the MAX_GPU limit set to 8 in both the OpenCL and
>>> CUDA variants. What was the reason behind that? Both AMD CrossFire
>>> and NVIDIA SLI support a maximum of 4 GPU devices.
>>>
>>
>> This is not very sound logic, as one does not use CrossFire or SLI for
>> GPGPU. In fact, this technology usually must be disabled for compute
>> work. fglrx supports a maximum of 8 devices, and afaik nvidia supports
>> 16 devices, if not more. So 16 would likely be a saner value.
>>
>>
> Right. And those are just local ones. With VCL/SnuCL/DistCL you can have
> a lot more, which is why oclHashcat supports 128 devices.
>
> I intend to add a file common-gpu.[hc] for shared stuff between CUDA
> and OpenCL, e.g. temperature monitoring. When I do that I will merge
> MAX_CUDA_DEVICES and MAX_OPENCL_DEVICES into a single MAX_GPU_DEVICES so
> they'll always be the same. And I'll probably set it to 128.
>
> magnum
>
>
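For illustration only: the shared header magnum describes could look roughly
like the sketch below. The file and macro names (common-gpu.h,
MAX_GPU_DEVICES, gpu_clamp_device_count) follow his proposal or are
hypothetical; this is not John the Ripper's actual source.

```c
/* common-gpu.h -- hypothetical sketch of the shared CUDA/OpenCL header
 * proposed in this thread; names are assumptions, not actual JtR code. */
#ifndef COMMON_GPU_H
#define COMMON_GPU_H

/* One cap for both back-ends, sized for virtualized setups
 * (VCL/SnuCL/DistCL) rather than the 4-8 physical GPUs a single
 * host typically holds. */
#define MAX_GPU_DEVICES 128

/* Clamp a runtime-reported device count (e.g. the value filled in by
 * cudaGetDeviceCount() or clGetDeviceIDs()) so it never overruns
 * statically sized per-device tables. */
static inline int gpu_clamp_device_count(int reported)
{
    if (reported < 0)
        return 0;
    return reported > MAX_GPU_DEVICES ? MAX_GPU_DEVICES : reported;
}

#endif /* COMMON_GPU_H */
```

With a single shared constant, both back-ends size their device tables the
same way, and raising the limit later is a one-line change.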
