Date: Thu, 8 Sep 2016 20:20:47 +0200
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: JtR, MPI and CUDA+CPU core usage?

On 2016-09-08 11:54, Darren Wise wrote:
> Thank you very much for getting back to my question magnum, I will use OpenGL then rather then CUDA directly :)

Great, but it's OpenCL, not OpenGL. The latter is a different beast.

> Can I just confirm with you because it was a little unclear to me..
> Using CUDA I cannot use CPU cores and GPU cores together unless a spawn multiple jobs..
> Using OpenCL I can use both CPU cores and GPU cores, but per GPU card it will be with the addition of 1 extra process..

Regardless of whether you're using CUDA or OpenCL, you can't do GPU and 
CPU in one single job. If you say --format=nt it will be CPU, if you say 
--format=nt-opencl (or nt-cuda if we had one) it will be GPU.
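
To make that concrete, here's a minimal sketch of the two separate invocations (the hash file name "hashes.txt" and the --device number are placeholders, not from the thread):

```shell
# CPU job: the plain "nt" format runs on the host CPU.
$ ./john --format=nt hashes.txt

# GPU job: the "-opencl" suffix selects the OpenCL implementation;
# --device picks which OpenCL device to use (number is illustrative).
$ ./john --format=nt-opencl --device=1 hashes.txt
```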

Then again, you could install OpenCL drivers for all your CPU cores too 
and run them as OpenCL devices but that's a different story and usually 
ends up slower than running our "CPU formats".
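
If you do install a CPU OpenCL runtime, you can check what john sees with its built-in device listing (a real JtR option; the output naturally depends on your drivers):

```shell
# Enumerate all OpenCL platforms and devices john can use.
$ ./john --list=opencl-devices
```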

> I.e: 48 CPU cores and 1 GPU card I would write -n 49 instead :)

No. Let's say you have host Foo, which has 16 CPU cores and 1 GPU, and 
hosts Bar and Baz, which have 16 CPU cores each. You'd start two jobs, e.g.:

$ mpirun -host foo -np 1 ./john --format=nt-opencl --session=gpu (...)

and

$ mpirun -host foo,bar,baz -np 48 ./john --format=nt --session=cpu (...)

I hope you get the picture. You'd probably want to run mask mode (and/or 
wordlist + mask) on the GPU and e.g. wordlist + rules and other things on 
the CPUs. At least for the fastest formats, mask mode (hybrid or not) is 
needed on the GPU to achieve good speed (as in 100x faster than a CPU core).
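
For example, the two modes mentioned above might look like this (mask patterns and file names are illustrative, not prescribed; ?u/?l/?d are JtR's upper/lower/digit placeholders and ?w stands for the wordlist candidate in hybrid mode):

```shell
# Pure mask on the GPU: one upper, four lower, three digits.
$ ./john --format=nt-opencl --mask='?u?l?l?l?l?d?d?d' hashes.txt

# Hybrid wordlist + mask on the GPU: each wordlist entry (?w)
# with two digits appended, generated on-device for speed.
$ ./john --format=nt-opencl --wordlist=words.txt --mask='?w?d?d' hashes.txt
```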

magnum


> Thank you very much magnum :D Lovely to meet you as well :D
>
>
>> Kind regards,
>> Darren Wise Esq,
>> B.Sc, HND, GNVQ, City & Guilds.
>
>
> -------- Original message --------
> From: magnum <john.magnum@...hmail.com>
> Date: 07/09/2016  20:45  (GMT+00:00)
> To: john-users@...ts.openwall.com
> Subject: Re: [john-users] JtR, MPI and CUDA+CPU core usage?
>
> On 2016-09-07 08:29, Darren Wise wrote:
>> I've got a bit of a silly question here folks, nothing I have actually tried yet..
>> I have an MPIEXEC install of JtR, 10 nodes (48 CPU cores) I literally have just plonked in my first CUDA card and not even powered it on yet to install the nVidia drivers.
>> I am a little concerned, I will reinstall JtR on my MPIserver which uses Ubuntu 14.4LTS server install.. Of which I will include the flags for CUDA support...
>> I know this sounds really really stupid, but when as I have done launch -n 48 (48 threads to run on 48 CPU cores) do I now have to spawn -n (number of CPU and total number of GPU cores)
>
> Using CUDA, you'll not be able to use CPU's and GPU's in a single job.
> You should run one MPI process per GPU *card*. You can start another job
> using the remaining CPU cores on those machines plus all cores on
> machines that lack a GPU.
>
> I recommend using OpenCL (even for nvidia), not CUDA. Our OpenCL formats
> are way ahead of the CUDA ones, in number and in quality.
>
> magnum
>
