Date: Mon, 10 Sep 2012 05:24:14 +0400
From: Solar Designer <solar@...nwall.com>
To: Colin Percival <cperciva@...snap.com>
Cc: crypt-dev@...ts.openwall.com
Subject: Re: using scrypt for user authentication

On Sun, Sep 09, 2012 at 05:52:12PM -0700, Colin Percival wrote:
> In some ways -- available parallelism and RAM:compute die area ratio --
> GPUs fall somewhere between CPU and ASIC, but on "random access to small
> amounts of RAM" I'm not at all surprised to find that they are further
> in the "CPU" direction than CPUs themselves.

Yes, except that it's random access to cache-line-sized blocks of data,
unlike what we have e.g. in bcrypt.

> > (And if we go for much bigger
> > settings, this may imply a need to limit concurrency when scrypt is used
> > on authentication servers.)
> 
> What sort of authentication servers are you running where you only have
> 1 MB of RAM per CPU core?

Not per CPU core, but maybe per concurrent authentication attempt - if
concurrency is not limited.  If we simply introduce scrypt as a
supported crypt(3) flavor in an OS, then its memory usage needs to be
sane in the event of occasional spikes in concurrent authentication
attempts, including on rather small systems (e.g., 256 MB RAM VPSes).
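For concreteness, here's a quick sketch of the arithmetic (the 128 * N * r
bytes figure is scrypt's V array size from the ROMix definition; the
parameter choices below are just illustrative interactive-login settings):

```python
# scrypt's big allocation is the V array in ROMix: 128 * N * r bytes.
def scrypt_memory_bytes(N, r):
    return 128 * N * r

# A commonly cited interactive setting, N = 2^14, r = 8, needs 16 MiB per hash.
per_attempt = scrypt_memory_bytes(2**14, 8)
assert per_attempt == 16 * 1024 * 1024

# On a 256 MB VPS, a spike of only 16 simultaneous authentication
# attempts would consume all RAM if concurrency is not limited.
attempts_to_exhaust = (256 * 1024 * 1024) // per_attempt
print(attempts_to_exhaust)  # -> 16
```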

A solution to this could be limiting concurrency - perhaps to the number
of CPU cores, as you suggest.  I mentioned some approaches to this here:

http://www.openwall.com/lists/crypt-dev/2011/05/12/4

I'd appreciate your comments on this.

For dedicated authentication servers at some specific organization, it
can be a custom interface, so limiting the concurrency is easier - and
much larger amounts of RAM may be used anyway.
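One simple shape such a limit could take (a hypothetical sketch only, not
the interface from the linked post; `hash_with_limit` and `compute_hash`
are made-up names) is a counting semaphore sized to the CPU core count,
so peak memory is bounded by cores times per-hash memory:

```python
import os
import threading

# Hypothetical sketch: allow at most one in-flight scrypt computation per
# CPU core; excess authentication attempts block and wait for a slot
# instead of each allocating its own multi-megabyte scrypt arena.
_slots = threading.BoundedSemaphore(os.cpu_count() or 1)

def hash_with_limit(compute_hash, password, salt):
    # Acquire a slot, compute the hash, release the slot on exit.
    with _slots:
        return compute_hash(password, salt)
```

Waiting (rather than failing) keeps occasional spikes from turning into
authentication errors, at the cost of added latency under load.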

> > We might also want to have a tradeoff-defeating variation of scrypt, as
> > Anthony Ferrara suggested or otherwise.  It appears that this would make
> > scrypt maybe a factor of 2 more resistant to attacks with current GPUs
> > at Litecoin's low settings and perhaps a lot more at bigger settings
> > (where the total GPU card's RAM size is more easily hit if the tradeoff
> > is not used in an attack).  Maybe this property should be configurable
> > (a Boolean parameter, to be encoded along with the resulting encrypted
> > data or password hash).
> 
> This seems like an attempt to patch over the problem of "scrypt not being
> used as intended".

I am not proposing this as an alternative to using larger memory sizes
(as intended).  I am proposing it for use along with scrypt's intended
settings.  Due to the tradeoff, GPUs may remain reasonably usable for
attacking scrypt even with large memory settings.  If a user
of scrypt (e.g., a sysadmin) doesn't expect to need to crack scrypt on
GPUs (and most won't), the user will be able to turn the knob and make
scrypt more GPU-unfriendly.  Note that since GPUs are only efficient
when a lot of parallelism is available, they're likely not reasonably
usable for defensive scrypt computation anyway (unless "p" is set very
high in advance).
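To illustrate the tradeoff being defeated here, a toy sketch (an
assumption-laden model, not real scrypt: SHA-256 stands in for BlockMix,
and the parameters are tiny): an attacker can store only every k-th V
element and recompute the missing ones on demand, cutting memory by a
factor of k at the cost of extra hashing - exactly the option a
tradeoff-defeating variant would take away:

```python
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def integerify(x, N):
    return int.from_bytes(x[:8], "little") % N

def romix_full(seed, N):
    # Classic ROMix shape: fill V sequentially, then do N
    # data-dependent random reads into it.
    V, X = [], seed
    for _ in range(N):
        V.append(X)
        X = H(X)
    for _ in range(N):
        j = integerify(X, N)
        X = H(bytes(a ^ b for a, b in zip(X, V[j])))
    return X

def romix_tmto(seed, N, k):
    # Attacker's variant: keep only every k-th V entry (N/k memory) and
    # recompute each missing V[j] with up to k-1 extra H calls.
    V, X = {}, seed
    for i in range(N):
        if i % k == 0:
            V[i] = X
        X = H(X)
    for _ in range(N):
        j = integerify(X, N)
        Y = V[j - j % k]          # nearest stored checkpoint
        for _ in range(j % k):
            Y = H(Y)              # roll forward to reconstruct V[j]
        X = H(bytes(a ^ b for a, b in zip(X, Y)))
    return X

# Same result either way; the TMTO variant used 1/k the memory.
assert romix_full(b"\x00" * 32, 64) == romix_tmto(b"\x00" * 32, 64, 8)
```

On a GPU the extra recomputation is cheap relative to the memory saved,
which is why the tradeoff helps attacks at large memory settings.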

Thank you for your helpful comments!

Alexander
