Date: Mon, 30 Oct 2017 16:44:42 -0400
From: Arnold Reinhold <>
Subject: Re: Real world password policies

> On Oct 29, 2017, at 6:41 PM, Royce Williams <> wrote:
> On Sun, Oct 29, 2017 at 3:32 PM, Solar Designer <> wrote:
> That's my tentative plan for passwdqc 2.0.  Not so much for processing
> and shipping a password cracker output, although that can be done, but
> rather to take advantage of the recent (and not so recent) availability
> of large password leaks, widespread acceptance of their use for the
> purpose, and to be consistent with the new NIST guidelines.
> The 320 million SHA-1 hashes of leaked passwords (without even
> re-cracking them first, although folks did that) from
> <> would comfortably fit in an
> 8 GiB Bloom filter with negligible false positive rate and nearly
> instant checks.
> I'm pretty sure that the new NIST guidance was intended to encompass blacklists on the order of Dropbox's zxcvbn or similar, not a blacklist of such colossal size as the HIBP 320M. 
> From a usability standpoint, and taken in isolation, such a blacklist is completely untenable. It would result in a never-ending "gotcha" guessing game of "nope, not that one" for the user.
> And even if NIST did intend this, such guidance would be ... misguided. Fully 80% of the HIBP 320M list can be eliminated entirely by simply requiring passwords of a length greater than 12. Such a limit would also eliminate millions of other equally poor future passwords - passwords that would otherwise eventually find their way into such a monstrous, ever-growing blacklist.
> Now, if the plans for passwdqc 2.0 could include working that length restriction into its parameters ... that might be an interesting compromise: it would dramatically decrease the size of the required blacklist while still remaining NIST-"compatible".
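The sizing claim in the quoted message can be sanity-checked with the standard Bloom-filter formulas. A quick sketch (the choice of k = 16 hash functions is my own assumption, not from the original message):

```python
import math

# Quoted claim: 320 million SHA-1 hashes fit in an 8 GiB Bloom filter
# with negligible false-positive rate.
n = 320_000_000        # entries
m = 8 * 2**30 * 8      # 8 GiB expressed in bits
k = 16                 # number of hash functions (assumed)

bits_per_entry = m / n
# Expected false-positive probability: (1 - e^(-k*n/m))^k
fpr = (1 - math.exp(-k * n / m)) ** k

print(f"{bits_per_entry:.1f} bits/entry, FPR ~ {fpr:.2e}")
```

At roughly 215 bits per entry, even a modest number of hash functions drives the false-positive rate far below anything a user would ever notice, consistent with the "negligible false positive rate" claim.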

As usual, we should consider what threat model we are addressing. If the threat is someone attempting a front-door login with guessed passwords on a site that throttles bad login attempts per NIST, a blacklist of the most common passwords (123456, etc.), plus the user's login, first name, and the like, is pretty much all that is needed. If the threat is an attack on a stolen password SHA-X hash, 320 million entries is likely too few (though there is no need to test against hashes that have not been broken because, well, they have not been broken). Cracking programs can test billions of password guesses per second on affordable hardware.
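The minimal front-door check described above might be sketched like this (the tiny blacklist is illustrative only, not a recommended list):

```python
# For online guessing against a site that throttles attempts, it may be
# enough to reject only the most common passwords plus strings derived
# from the user's own account details.
COMMON = {"123456", "password", "12345678", "qwerty", "123456789"}

def acceptable(password: str, login: str, first_name: str) -> bool:
    p = password.lower()
    if p in COMMON:
        return False
    # Reject the login / first name themselves and trivial variants.
    for term in (login.lower(), first_name.lower()):
        if term and term in p:
            return False
    return True

print(acceptable("123456", "alice", "Alice"))             # False
print(acceptable("alice2017", "alice", "Alice"))          # False
print(acceptable("correct horse battery", "alice", "Alice"))  # True
```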

I agree that requiring a larger minimum length (e.g. 12 characters) for user-generated passwords would help and should be an option. Other options might include having the system suggest a random password, perhaps in a few formats, and letting the user pick one. I'd also like to see a standard for sites to disclose their password storage practices (salt or no salt, hash algorithm, resource-consumption parameters, allowed character set, case sensitivity, any modifications to submitted passwords, etc.). This could be tied into the password strength meter.
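The "suggest a random password in a few formats" idea could look roughly like this. A sketch only: the word list and formats are placeholders of my own; a real system would draw from a large word list.

```python
import secrets
import string

# Illustrative word list only; far too small for real use.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "pepper", "violet", "marble", "tundra"]

def suggest_passphrase(n_words: int = 4) -> str:
    """Word-based format, easier to memorize."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def suggest_alnum(length: int = 16) -> str:
    """Dense alphanumeric format, easier to type on some devices."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Offer the user several candidates in different formats to pick from.
for candidate in (suggest_passphrase(), suggest_alnum(), suggest_alnum(20)):
    print(candidate)
```

Using `secrets` rather than `random` matters here: the suggestions must come from a cryptographically strong source, or the whole exercise is pointless.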

What is really untenable is the continued use of fast, GPU-friendly hashes to generate stored password validation data.  I think a password strength meter based on Bloom filters generated by simulated attacks on various hashes/KDFs would make that clear, and put password management on a sounder engineering footing.
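A strength meter of the kind described, testing candidate passwords against a Bloom filter built from simulated-attack output, could be sketched as below. This is a minimal toy implementation with my own parameter and hashing choices, not passwdqc's design:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter; a real deployment would size m and k from the
    target false-positive rate (k = (m/n) * ln 2 at the optimum)."""

    def __init__(self, m_bits: int, k: int):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item: str):
        # Derive k bit positions from SHA-256 of the item (an assumption;
        # any family of independent hash functions would do).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[i * 4:(i + 1) * 4], "big") % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Populate from (hypothetical) cracker output, then query candidates.
bf = BloomFilter(m_bits=1 << 20, k=5)
for cracked in ("123456", "password", "letmein"):
    bf.add(cracked)

print("password" in bf)   # True: known to fall to the simulated attack
```

A filter built this way answers "would this password have fallen to attack X against hash Y" in constant time, which is what would let a meter reflect actual attack cost rather than character-class heuristics.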

Arnold Reinhold
