Date: Mon, 10 Sep 2012 13:29:35 +0400
From: Solar Designer <solar@...nwall.com>
To: Colin Percival <cperciva@...snap.com>
Cc: crypt-dev@...ts.openwall.com
Subject: Re: using scrypt for user authentication

On Mon, Sep 10, 2012 at 12:46:54AM -0700, Colin Percival wrote:
> If you're seeing enough concurrent authentication attempts that using
> 16 MB for each of them comes close to eating up your 256 MB of RAM, odds
> are that they're simply never going to finish due to CPU utilization
> alone...

This depends on the duration of the spike in concurrent authentication
attempts.  For example, at 1 MB, if 50 concurrent attempts arrive, but
are not followed by any more for at least a few seconds, the system will
survive with no long-term impact on other running services.  At 16 MB,
it will temporarily run out of memory entirely, so other services may be
impacted (and may require a restart).
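[As a quick back-of-the-envelope sketch of the spike scenario above, in
Python, using the example's numbers - scrypt's dominant allocation is
roughly 128 * r * N bytes per in-flight computation:]

```python
# Memory footprint of a spike of concurrent authentication attempts.
concurrent = 50

# At 1 MB per scrypt computation, the spike fits easily:
print(concurrent * 1)    # 50 MB - survivable on a 256 MB system

# At 16 MB per computation, the same spike overcommits the box:
print(concurrent * 16)   # 800 MB - more than 3x the 256 MB of RAM
```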

I admit this may be a contrived example and possibly not the most
typical case, yet it is realistic.  Those many virtualized systems with
relatively small memory limits do fail on out-of-memory conditions once
in a while.  Replacing e.g. bcrypt with scrypt at 16 MB will increase
the frequency of such failures.

Anyhow, even if you convince me that scrypt at 16 MB is OK for crypt(3),
we'd have a hard time convincing others of it.  Besides, I'd consider
using more than 16 MB if we limit concurrency.
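[A minimal sketch of what "limit concurrency" could look like, assuming a
Python authentication service; the MAX_CONCURRENT value and the
hash_password() helper are illustrative, not from the original message.
Python's hashlib.scrypt is used here, where the large allocation is
128 * r * n bytes - n=2**14, r=8 gives the 16 MB discussed above:]

```python
import hashlib
import threading

# Cap concurrent scrypt computations so a spike of authentication
# attempts queues up instead of exhausting RAM.  Tune to roughly
# (RAM budget for hashing) / (memory per hash).
MAX_CONCURRENT = 4
_scrypt_slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def hash_password(password: bytes, salt: bytes) -> bytes:
    with _scrypt_slots:  # blocks while MAX_CONCURRENT hashes are running
        # n=2**14, r=8 -> 128 * 8 * 2**14 = 16 MiB per computation
        return hashlib.scrypt(password, salt=salt,
                              n=2**14, r=8, p=1,
                              maxmem=32 * 1024 * 1024, dklen=32)
```

With the concurrency bounded this way, total hashing memory stays near
MAX_CONCURRENT times the per-hash cost, which is what makes larger
per-instance memory settings tolerable.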

> The case which interests me more is the large websites which make a habit
> of leaking millions of hashed passwords, and I would expect them to set
> up internal scrypt-computation services.

Yes, this interests me as well.  With proper concurrency limits, we
could even use gigabytes of RAM per scrypt instance there.  However,
then it becomes challenging to keep the runtime per password hashed
sane.  Considering memory bandwidth, we might need to stay at up to
1 GB per instance currently - or less if we don't let one hash
computation fully use the memory bandwidth, to let other CPU cores
use the same memory bus in parallel.  That's a pity.
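[The runtime concern above can be made concrete with rough arithmetic.
scrypt touches its whole memory arena about twice (once filling it, once
mixing), so memory traffic is around 2x the memory cost; the bandwidth
figure below is an illustrative assumption, not a measurement:]

```python
# Rough lower bound on scrypt runtime when memory bandwidth is the
# bottleneck.  Assumed numbers, for illustration only.
mem_gb = 1.0            # memory per scrypt instance
bandwidth_gb_s = 10.0   # assumed usable memory bandwidth for one hash

traffic_gb = 2 * mem_gb            # arena is written once, then re-read
seconds = traffic_gb / bandwidth_gb_s
print(seconds)                     # 0.2 s per hash under these assumptions
```

Halving the bandwidth granted to one computation (to leave headroom for
other cores on the same memory bus) doubles that per-hash time, which is
why staying at or below 1 GB per instance looks necessary.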

> >>> We might also want to have a tradeoff-defeating variation of scrypt
[...]
> > I am not proposing this as an alternative to using larger memory sizes
> > (as intended).  I am proposing it for use along with scrypt's intended
> > settings.  Due to the tradeoff, GPUs may be potentially reasonably
> > usable for attacking scrypt even with large memory settings.  If a user
> > of scrypt (e.g., a sysadmin) doesn't expect to need to crack scrypt on
> > GPUs (and most won't), the user will be able to turn the knob and make
> > scrypt more GPU-unfriendly.  Note that since GPUs are only efficient
> > when a lot of parallelism is available, they're likely not reasonably
> > usable for defensive scrypt computation anyway (unless "p" is set very
> > high in advance).
> 
> Point taken... although one supposes that a GPU might be a solution to your
> hypothesized denial-of-service attack problem. :-)

GPU cards are actually within consideration for "large websites" like
you mentioned, specifically for higher memory bandwidth if we're talking
scrypt - but there are _many_ challenges in using them like that.

Thanks,

Alexander
