Date: Mon, 4 Feb 2013 08:36:09 +0100
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: Cuda and MPI

On 3 Feb, 2013, at 20:15 , Lukas Odzioba <lukas.odzioba@...il.com> wrote:
> 2013/2/3 Sayantan Datta <std2048@...il.com>:
>> Hi all,
>> 
>> I took an online course in Heterogeneous Parallel Processing which mostly
>> involved cuda and mpi , so I was wondering if there is anything that I could
>> contribute related to these fields.
> 
> In my modest opinion, right now we shouldn't spend much human effort
> on developing the CUDA part of JtR; instead, more work on OpenCL is
> welcome.
> But all of this is just my personal opinion.


I have tried running GPU+MPI just to verify it works as expected (and it does). Our MPI implementation assumes homogeneous clusters, though.

I have also tried running mscash2 in a little VCL cluster (v1.18). Sometimes it works, sometimes it doesn't. VCL is not 100% stable yet, but the biggest problem is our lack of GPU-side key generation.

What we need now is multi-device support for OpenCL and CUDA (in bleeding). We only have one experimental format for each. The option handling for --device currently does not allow specifying multiple CUDA devices; that needs to be fixed. One problem is that we don't have a test rig with two or more CUDA devices.

magnum
