Date: Wed, 22 Feb 2012 01:07:04 +0100
From: Lukas Odzioba <>
Subject: Re: CUDA running on multiple cards

2012/2/21 magnum <>:
> I'm hammering you with GPU questions now :)
> What would it take to make use of multiple graphic cards in one JtR
> session? Would that need a complete re-write and a new design, or do you
> get some things "for free" or with help from the toolkit (like in OMP,
> where we just add a #pragma and voila, we have parallel processing)?
As far as I know it would require dividing the work between the cards, which is easy if the cards are identical and more complicated otherwise. I need to familiarize myself with MPI before I can say more about the second option.
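To illustrate the unequal-cards case, here is a minimal sketch of dividing a keyspace proportionally to each card's benchmarked speed. The function name, the speed values, and the keyspace size are all made up for illustration; they are not from JtR.

```python
# Sketch: split a keyspace of `total` candidates across GPUs,
# sized by each device's relative speed (e.g. c/s from a benchmark).
def split_keyspace(total, speeds):
    """Return a (start, count) range per device, proportional to speed."""
    total_speed = sum(speeds)
    ranges = []
    start = 0
    for i, s in enumerate(speeds):
        if i == len(speeds) - 1:
            count = total - start  # last device takes the remainder
        else:
            count = total * s // total_speed
        ranges.append((start, count))
        start += count
    return ranges

print(split_keyspace(1000, [1, 1]))  # equal cards -> even split
print(split_keyspace(1000, [3, 1]))  # faster card gets the larger share
```

With equal speeds this degenerates to an even split, so the simple case costs nothing extra.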
Alex proposed using rCUDA some time ago, but its license does not allow "normal" usage: every user has to register with a name and institutional email to get a copy of the software, and the registration must be approved by someone. I doubt john users want to do that.
> And second, isn't there any kind of resource protection? I tried running
> cRARk and then started a John session. The cRARk session died (out of
> memory) and the John session crashed.
I am not aware of anything like that.

> Could a format detect that there are multiple cards, and pick one that
> is not already used?
Generally this is not possible with the CUDA API on Linux (on Windows we could use NVAPI). However, we could:
1) use a shared file or shared memory to reserve GPUs and figure out which ones are already in use
2) measure the percentage of used memory on each device (easy to do)
3) parse nvidia-smi output
4) run an empty kernel and measure its execution time
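As a sketch of option 3, the parsing side could look like the following. It assumes nvidia-smi's CSV query mode (`--query-gpu=index,memory.used --format=csv,noheader,nounits`); the sample output string is made up, and in real use it would come from running nvidia-smi as a subprocess.

```python
# Sketch: pick the least-busy GPU by parsing nvidia-smi CSV output.
# Each line is assumed to be "index, used-memory-in-MiB".
def least_busy_gpu(smi_output):
    """Return the index of the GPU with the least used memory."""
    best_index, best_used = None, None
    for line in smi_output.strip().splitlines():
        index, used = (int(field) for field in line.split(","))
        if best_used is None or used < best_used:
            best_index, best_used = index, used
    return best_index

sample = "0, 911\n1, 64\n2, 305\n"  # fabricated example output
print(least_busy_gpu(sample))  # -> 1
```

This is only a heuristic: memory use does not always reflect load, which is why option 4 (timing an empty kernel) might be a useful cross-check.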

> If nothing else, I'd like CUDA to support the --gpu=N option that OpenCL
> has in JtR, if possible. cRARk lacks that option btw, it always runs on
> first card.

No problem.

