Date: Thu, 05 Jan 2012 10:06:51 -0700
From: Kurt Seifried <kseifrie@...hat.com>
To: oss-security@...ts.openwall.com
CC: David Hicks <d@...id.au>
Subject: Re: speaking of DoS, openssh and dropbear (CVE-2006-1206)

On 01/05/2012 04:22 AM, David Hicks wrote:
> The question these approaches raise is whether it is advisable to
> reinvent rate limiting in each and every network daemon. Performing rate
> limiting at the system/interface level prevents unwanted and expensive
> context switches to each daemon. Configuration and maintenance is much
> simpler because administrators don't need to learn 50 different ways to
> configure rate limiting for each daemon. There is also less risk for
> bugs to be written into the rate limiting implementation of each daemon.

To a large degree, yes: each daemon is different, and the daemon knows
how bad things are getting; the firewall doesn't. Ideally the daemons
should auto-tune and degrade politely to avoid dying or taking the
system down with them, so the admin doesn't have to learn how to tune
each one explicitly. This problem exists with firewall-level limiting
anyway: how many connections per second per class C (or whatever) can
I safely allow to daemon X? What happens if we upgrade/downgrade the
server that daemon X runs on? What happens if server X takes on
additional duties or is modified in some other way that affects the
load it can handle? I vote for daemons that auto-tune intelligently
because I am lazy =).
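
Just to illustrate what I mean by degrading politely, here is a rough
sketch in the spirit of OpenSSH's MaxStartups "start:rate:full" random
early drop, where the drop probability scales with how many
unauthenticated connections are already pending. The thresholds and
the drop_new_connection() name are made up for the example, not taken
from any real daemon:

/* Probabilistic early drop for pending (unauthenticated) connections.
 * Accept freely below BEGIN, refuse everything at FULL, and in between
 * drop with a probability that ramps from RATE% up to 100%. */
#include <stdio.h>
#include <stdlib.h>

#define STARTUPS_BEGIN 10   /* start dropping above this many pending */
#define STARTUPS_RATE  30   /* drop probability (%) at the low threshold */
#define STARTUPS_FULL  100  /* refuse everything at or above this */

static int drop_new_connection(int active_startups)
{
    int p;

    if (active_startups < STARTUPS_BEGIN)
        return 0;                       /* plenty of headroom: accept */
    if (active_startups >= STARTUPS_FULL)
        return 1;                       /* saturated: refuse */

    /* Scale the drop probability linearly from RATE% toward 100%
     * as the backlog approaches FULL. */
    p = STARTUPS_RATE +
        ((100 - STARTUPS_RATE) * (active_startups - STARTUPS_BEGIN)) /
        (STARTUPS_FULL - STARTUPS_BEGIN);

    return (rand() % 100) < p;
}

int main(void)
{
    /* Show how the drop rate degrades politely as load increases. */
    for (int n = 0; n <= 110; n += 10) {
        int dropped = 0;
        for (int i = 0; i < 1000; i++)
            dropped += drop_new_connection(n);
        printf("pending=%3d  dropped %4d/1000\n", n, dropped);
    }
    return 0;
}

The nice part is that nothing in there depends on the admin guessing a
connections-per-second number for the firewall; the daemon reacts to
its own backlog.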

> On a technical note, rate limiting requires a small amount of memory
> (buckets) to store information about recent connections. For this
> reason, allowing IPv6 rate limiting granularity at the /128 level would
> be inadvisable as an attacker with /64 addresses could quickly exhaust
> the table capacity/available memory. The design of the data structures
> and algorithms for the table need to be very efficient. Taking it down
> another level, a table that is larger than available L1-L3 cache could
> further degrade performance ([4] and [5] discuss hash tables and CPU
> cache).
Again, we could do something clever like auto-tuning and start
consolidating buckets if the table starts getting too large.
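
As a strawman for what that consolidation could look like: key the
table on an IPv6 prefix rather than the full /128, and re-key with a
shorter prefix when the table grows. The helper below is only a
sketch; the names are invented for the example:

/* Mask an IPv6 address down to prefix_len bits so all hosts in the
 * same prefix share one rate-limit bucket.  Start at /64; if the
 * bucket count approaches whatever limit you pick, re-key the table
 * with a shorter prefix (e.g. /56 or /48) to consolidate. */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static void bucket_key(const struct in6_addr *addr, int prefix_len,
                       struct in6_addr *key)
{
    memset(key, 0, sizeof(*key));
    for (int i = 0; i < 16 && prefix_len > 0; i++, prefix_len -= 8) {
        unsigned char mask = (prefix_len >= 8)
            ? 0xff
            : (unsigned char)(0xff << (8 - prefix_len));
        key->s6_addr[i] = addr->s6_addr[i] & mask;
    }
}

int main(void)
{
    struct in6_addr a, k;
    char buf[INET6_ADDRSTRLEN];

    inet_pton(AF_INET6, "2001:db8:1:2:aaaa:bbbb:cccc:dddd", &a);

    /* All hosts in the same /64 map to the same key, so an attacker
     * cycling through 2^64 addresses only ever occupies one bucket. */
    bucket_key(&a, 64, &k);
    printf("/64 key: %s\n", inet_ntop(AF_INET6, &k, buf, sizeof(buf)));

    /* Coarsen to /48 if the table still grows too large. */
    bucket_key(&a, 48, &k);
    printf("/48 key: %s\n", inet_ntop(AF_INET6, &k, buf, sizeof(buf)));

    return 0;
}

Starting at /64 already addresses the exhaustion concern above;
coarsening further trades per-source fairness for bounded memory.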



-- 

-- Kurt Seifried / Red Hat Security Response Team
