Date: Tue, 28 Mar 2023 00:42:03 +0200
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: John the Ripper in the cloud update 2023/02

On Thu, Mar 23, 2023 at 05:18:53PM -0300, Rodrigo s wrote:
> But now I did my homework. And discovered an ugly truth! AWS is... DIFFICULT!
> Man... I had to understand all the server types and this is not easy
> information. This link shows it all
> <https://aws.amazon.com/ec2/instance-types/?nc1=h_ls> and what we really
> want is a *g4dn.16xlarge* or the supreme *g4dn.metal*. But you will need to
> request access to it and this is not an easy task. It is so obscure what I
> have to do that I gave up!

Yes, this is not trivial, and "what we really want" varies by use case.
We list a few recommended instance types to consider on the John the
Ripper in the cloud homepage.
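
For illustration, once your account has the required quota, launching
one of these instance types is scriptable.  Here's a minimal sketch
using the boto3 Python SDK - the AMI ID and key pair name below are
placeholders, not real values, so substitute the current John the
Ripper in the cloud AMI ID for your region and your own key pair:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    # Placeholder - look up the real AMI ID for your region on AWS Marketplace.
    ImageId="ami-0123456789abcdef0",
    # Or g4dn.metal, if your account has the quota for it.
    InstanceType="g4dn.16xlarge",
    # Placeholder - an existing EC2 key pair of yours, for SSH access.
    KeyName="my-ssh-key",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])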

> Why not use something easier? Just pay, rent a server, click and start to
> do the attack.

There's definitely room for improvement compared to JtR on AWS.

> So I discovered https://vast.ai/
> 
> And this is really what I was thinking of from the start. And I think you
> should migrate to this project. All you need to do is create a Docker
> image on https://hub.docker.com!
> 
> I don't know how to do this, but if you could install the best drivers for
> the gpu in the site and when I start the docker, it automatically has the
> best configuration, Man... life will be easy! I just pay, rent, start,
> connect ssh, type john and recover the password!
> 
> Do you like this idea?

Not really.

Our listing on AWS Marketplace was largely an experiment - could we get
a product like this listed there at all, and would companies using JtR
on AWS want to support our project in this way?  Now we know the answers
are, respectively, yes and no.  If both answers had been yes, we would
more likely have proceeded to add support for the FPGAs that AWS also
provides.

With a half-failed experiment, we're not eager to put a lot more effort
into it (or alternatives like it), but we do see it as still useful
enough to keep around - and even to occasionally update, as you can see.

The primary advantage of vast.ai over AWS is that vast.ai is a few times
cheaper.  However, there are drawbacks:

While JtR can use GPUs, it still has many more "formats" supported on
CPU than on GPU.  AWS provides some of the fastest CPUs, without you
also having to pay for GPUs.  vast.ai is GPU-only (of course, you also
get some CPUs, but it'd be silly to pay for and not use GPUs).  I'm not
currently aware of a marketplace similar to vast.ai, but without
mandatory GPUs (or without GPUs at all) - if anyone in here is aware of
one, please reply.
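
As a rough illustration of that coverage gap, you can count the formats
in your own build.  The sketch below assumes a jumbo build on the PATH
where "john --list=formats" works and where OpenCL format names carry
an "-opencl" suffix - treat those details as assumptions and adjust as
needed:

import subprocess

# Count how many formats a local JtR build supports in total, and how
# many of those are OpenCL (identified by the "-opencl" name suffix).
out = subprocess.run(["john", "--list=formats"],
                     capture_output=True, text=True, check=True).stdout
formats = [f.strip() for f in out.replace("\n", " ").split(",") if f.strip()]
opencl = [f for f in formats if f.endswith("-opencl")]
print(f"{len(formats)} formats total, {len(opencl)} of them OpenCL")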

AWS allows for monetization of the product listing.  I think vast.ai
does not?  While we're mostly into Open Source and free software (and
not only in terms of freedoms, but also free as in free beer), our
volunteer time is best spent on improving John the Ripper itself (which
is generally used for free and that's great) rather than on listing it
with more cloud services (which would benefit them and not so much our
project).  The number of users of the software itself is likely many
orders of magnitude higher than the number of its users on any given
cloud service.

AWS provides a certain level of security through virtualization,
hardware customizations for security, etc.  There's one company behind
it, with its legal agreements and reputation.  On vast.ai, it's just
Docker containers, which means a shared Linux kernel.  The machines are
hosted by many independent providers.  Now, whether system security
matters for you varies by your use case.  For some of what we support in
JtR, and for some use cases of it, vast.ai is just fine.  For some other
uses, it is not.  (For some, even AWS wouldn't be.  But I'd say vast.ai
is a higher risk.)

AWS provides standardized instance types.  On vast.ai, it is different
providers' machines, which vary.  So performance will also vary somewhat
more than it does on AWS.  (Some variance is always observed, e.g. due
to differences in ambient temperature in the datacenter, but when the
machines themselves also differ, at the very least in their cooling, the
variance grows.)

You write "you could install the best drivers for the gpu", but actually
on vast.ai the host system should readily have the drivers.  This has
both pros and cons, but it also adds to the variance - including in
terms of which JtR formats might fail their self-tests on an unlucky
driver.
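
A quick sanity check after renting such a machine is to run the
self-tests for the formats you intend to use before starting any real
work.  A minimal sketch, assuming a jumbo build where "--test=0" means
"self-test only, no benchmark"; the format names are just examples:

import subprocess

# Run JtR's quick self-tests for a few OpenCL formats on a freshly
# rented machine, to catch a bad or outdated GPU driver early.
formats_to_check = ["sha512crypt-opencl", "wpapsk-opencl"]  # substitute your own

for fmt in formats_to_check:
    result = subprocess.run(["john", "--test=0", f"--format={fmt}"],
                            capture_output=True, text=True)
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{fmt}: {status}")
    if status == "FAILED":
        print(result.stdout + result.stderr)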

AWS provides EBS storage that can persist across stopping or terminating
an instance, and can be accessed from a new instance running on a
different machine.  On vast.ai, if a machine is taken offline, your data
is not available until the machine is back (if you're lucky).
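
To make the EBS point concrete: a data volume can outlive the instance
and be re-attached to a new one.  A minimal boto3 sketch, with
placeholder IDs (note the volume must be in the same availability zone
as the instance):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Re-attach a surviving EBS data volume to a newly launched instance.
ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",    # placeholder - your persistent data volume
    InstanceId="i-0123456789abcdef0",    # placeholder - the new instance
    Device="/dev/sdf",                   # device name to expose the volume as
)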

In terms of pricing, vast.ai is also not the cheapest for long-term use.
You can find 2x+ cheaper non-cloud offers for one month and longer.  So one
idea would be to partner with a specific GPU hosting provider (and these
also have CPU-only machines), but we're simply not seeing enough demand
to bother.

Alexander
