Openwall GNU/*/Linux - a small security-enhanced Linux distro for servers
Date: Mon, 19 Oct 2015 15:06:28 -0500
From: Brad Knowles <>
Cc: Brad Knowles <>,
 CVE ID Requests <>
Subject: Re: Prime example of a can of worms

On Oct 18, 2015, at 11:06 PM, Kurt Seifried <> wrote:

> A small
> number of fixed or standardized groups are used by millions
> of servers; performing precomputation for a single 1024-bit
> group would allow passive eavesdropping on 18% of popular
> HTTPS sites, and a second group would allow decryption
> of traffic to 66% of IPsec VPNs and 26% of SSH servers.

I think this may be a bit of a slippery slope here.

How many machines would have to be vulnerable for a given group to be considered big enough to be “weak” and therefore worthy of having a CVE issued?  Would that number be 1%?  5%?  10%?

At what point is it more dangerous to generate your own DH groups on systems that do not have sufficient uptime, versus re-using an existing DH group that might be considered “weak”?
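(Editorial aside, not part of the original message: as a minimal sketch of the "generate your own group" side of that trade-off, assuming the standard `openssl` CLI is available, per-host DH parameter generation looks like the following; its quality depends on the entropy available at generation time, which is exactly the concern for freshly booted or low-uptime systems.)

```shell
# Generate a fresh 2048-bit DH group for this host.
# This can take a while, and draws on the system entropy pool.
openssl dhparam -out dhparams.pem 2048

# Sanity-check the generated parameters before deploying them.
openssl dhparam -in dhparams.pem -check -noout
```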

There was a time when 1024-bit DH groups were considered sufficiently safe, and 2048-bit was overkill.  At what point does 2048-bit become “weak” in the same way that 1024-bit is today?  How many years of lead time are we going to build into the system, so that we can have people “safely” transitioned off 2048-bit DH groups and onto whatever the next new thing is?

I mean, NIST is having a hard enough time getting people to stop using MD5, much less SHA-1.  And if SHA-1 falls this year, how long before SHA-2 falls?
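(Editorial aside, not part of the original message: for concreteness, the three hash families mentioned are all one `openssl dgst` invocation apart, which is part of why legacy algorithms linger; this sketch assumes the standard `openssl` CLI.)

```shell
# Same input, three generations of hash function:
# MD5 (128-bit), SHA-1 (160-bit), SHA-256 (256-bit) digests.
printf 'hello' | openssl dgst -md5
printf 'hello' | openssl dgst -sha1
printf 'hello' | openssl dgst -sha256
```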

Brad Knowles <>
LinkedIn Profile: <>

