Date: Tue, 12 Jan 2021 11:02:51 -0500
From: "David A. Wheeler" <dwheeler@...eeler.com>
To: oss-security@...ts.openwall.com
Subject: Re: CVE-2021-20177 kernel: iptables string match rule
 could result in kernel panic


>> On 12 Jan 2021, at 08:04, Greg KH <greg@...ah.com> wrote:
>> 
>> I still do not understand why you report issues that are fixed over a
>> year ago (October 2019) and assign them a CVE like this.  Who does this
>> help out? ...
> 
> On Jan 12, 2021, at 10:23 AM, John Haxby <john.haxby@...cle.com> wrote:
> 
> I think I can answer that.  There's nothing technical going on here; it's down to the behaviour of the end users of enterprise systems.
> 
> A lot of those people have a hard time understanding that they do actually want bug fixes and an even harder time understanding that they need to actually do something to install those fixes.  (I was once asked if I could fix a problem without changing anything, anything at all, when the fix was a one-off chmod.)  A CVE number gets attention: think of it as getting hold of the customer by the lapels and going nose-to-nose to explain in words of one syllable that if they don't update their systems they will crash and they will get hacked.
> 
> Ooh, no, they say, we can't possibly take the risk of updating our systems.  Suppose something goes wrong?  Sheesh.  Suppose, instead, someone comes along and sees a known, fixed bug is unfixed and uses that to trash your systems.  Or that you've got a bug that crashes the machine once a week for which there's a fix.  But, no, apparently the mythical risk of a tested update looms so much larger than the actual, quantifiable risk of leaving the bug unfixed that they'd rather take the real, quantifiable risk.  I suppose that's understandable, after a fashion, even though actual regressions are quite rare.

I suspect in many cases there’s a simple answer: who takes the *blame* when something goes wrong?

If someone updates a component when “they don’t have to”, and it causes a problem, that person takes the fall: gets demoted, fired, whatever. If a component is not updated, and the system is attacked, the *attacker* is blamed & the admins don’t get demoted, fired, whatever. So updates are rare & involve >1 year of testing to ensure that the blame is fully distributed away from any one person.

Some organizations make an explicit exception: if there’s a CVE, then you *are* “required” to update the component by policy. Then those who updated the component are no longer at serious career risk, because when someone tries to blame the person who did the update, they can say “I was required to update by policy”.

In short, I think it’s all about incentives.

--- David A. Wheeler

