Date: Wed, 10 Mar 2021 21:14:10 +0100
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: Multi-gpu setup

On 2021-03-10 20:25, Michał Majchrowicz wrote:
>> As a cleaner workaround, you can run separate instances with "--node":
>>
>> ./john -se=1 -dev=1 -node=1-2/3 -format=something-opencl hash
>>
>> and concurrently:
>>
>> ./john -se=2 -dev=2 -node=3/3 -format=something-opencl hash
> 
> This looks interesting. I already switched to separate dictionaries
> for every gpu on each node for now. Though with your syntax I noticed
> you do NOT use --fork, so does that mean with -se it will not require
> -node=1-15/28 to run 15 forks for a single gpu? :)

-se is a shortcut for --session; the two jobs are started in e.g. 
separate terminals.

> In my case it wouldn't be so easy as the difference between those two
> gpus was something around 13 to 15

You can get more granularity: Split the job into many more "nodes" but 
think of them as work items. Then do something like:

./john -se=1 -dev=1 -node=1-13/28 -format=something-opencl hash

./john -se=2 -dev=2 -node=14-28/28 -format=something-opencl hash

This will give session 1 about 46% of the job, and session 2 the 
remaining 54%. Note though that some of our modes will not manage to 
distribute the work at that fine a granularity - we're probably asking 
for more granularity than we'll actually get. They shouldn't miss 
anything, though.
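For illustration only (this is not part of john's CLI, just shell 
arithmetic), the node ranges above follow directly from the relative 
speeds: use each GPU's speed as its share of "work items". The speeds 
13 and 15 are the figures mentioned earlier in this thread; the 
variable names are made up:

```shell
# Sketch: derive --node ranges from two relative GPU speeds.
# 13 and 15 are the relative speeds quoted in the thread; the
# total number of "nodes" is simply their sum (28).
speed1=13
speed2=15
total=$((speed1 + speed2))

# First GPU gets nodes 1..speed1, second gets the rest.
range1="-node=1-${speed1}/${total}"
range2="-node=$((speed1 + 1))-${total}/${total}"

echo "$range1"   # -node=1-13/28
echo "$range2"   # -node=14-28/28
```

That yields exactly the -node=1-13/28 and -node=14-28/28 split shown 
above, i.e. roughly 46% and 54% of the keyspace.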

magnum
