Date: Sat, 29 Feb 2020 16:39:48 +0100
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: PCI express

On Sat, Feb 29, 2020 at 10:16:44AM -0500, Powen Cheng wrote:
> Do you know, or does anyone know: for tezos2john.py
> <https://github.com/magnumripper/JohnTheRipper/blob/bleeding-jumbo/run/tezos2john.py>,
> will the speed of the PCI Express link matter much for this type of cracking?

This is a slow (non-)hash, so it won't be impacted much by PCIe bandwidth.

However, our implementation of the tezos-opencl format also makes
significant use of CPU, so if you're looking into connecting lots of
GPUs e.g. via flexible PCIe extenders to a cheap motherboard/CPU then
the CPU will become the bottleneck.  IIRC, we already saw this with
cloud instances with 8x Tesla V100, where the corresponding fast dual
CPUs (IIRC, with a total of 96 logical CPUs) were not quite fast enough
to fully use these very fast GPUs.  Regardless, one trick to hide the
latency (only the latency, not low throughput) of both CPU computation
and PCIe transfers is to run two processes per GPU - that is, use
"--fork=N" with N being twice the number of GPUs.  This only works well
if the number of logical CPUs is much greater than the number of GPUs.
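To illustrate, here's a hypothetical invocation sketch for a machine
with 4 GPUs (the device list, hash file name, and GPU count are
assumptions for the example, not from the original message):

```shell
# Hypothetical sketch: hide CPU/PCIe latency by running 2 processes per GPU.
GPUS=4                     # assumed GPU count for this example
FORKS=$((GPUS * 2))        # --fork=N with N = twice the number of GPUs

# JtR distributes the forked processes across the listed OpenCL devices,
# so each GPU ends up serviced by two processes.
echo "john --format=tezos-opencl --devices=1,2,3,4 --fork=$FORKS tezos.hashes"
```

With fewer logical CPUs than roughly several per GPU, the extra forked
processes would just compete for CPU time instead of hiding latency.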

Realistically, you won't reasonably use more than a few GPUs per machine
with our current tezos-opencl format.  As mentioned above, 8 very fast
GPUs are already too many even for the fastest CPUs.

Alexander
