Date: Tue, 30 Apr 2013 10:35:16 +0200
From: Szabolcs Nagy <nsz@...t70.net>
To: musl@...ts.openwall.com
Subject: Re: High-priority library replacements?

* Gregor Pintar <grpintar@...il.com> [2013-04-30 08:32:26 +0200]:
> My idea was that the program would stay correct even if it feeds too
> much data to the hash function. It is very cheap to implement in most
> algorithms (detect counter overflow). Otherwise the program has to
> count the input itself.

i don't think the program has to count

eg in the case of sha1 the length counter is 64 bits and counts input
bits, so if you know that the throughput is less than 10gbps then it
takes more than 50 years to overflow
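
rough arithmetic behind that number (the only input is the 64-bit
bit counter; the little program below is just a back-of-the-envelope
sketch, not part of any api):

#include <stdio.h>

int main(void)
{
	double counter_max = 18446744073709551616.0; /* 2^64 bits */
	double rate = 1e10;                          /* 10 Gbit/s */
	double years = counter_max / rate / (365.25 * 24 * 3600);
	printf("%.1f years to overflow\n", years);   /* ~58.5 years */
	return 0;
}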

in theory there might be use-cases where the overflow could occur,
in which case reporting an error makes sense, but it seems to me that
this can be avoided by a proper choice of algorithm or reasonable
application design

of course if you allow configurability of the block size, rounds
etc then the overflow becomes a practical concern and the error is
justified, which is why i said that flexibility is not necessarily
a good thing (a general interface requires general error handling)

> However, in most cases it is too late to handle it correctly once it
> has already failed. Not returning an error on too much input or output
> could silently cause incorrect usage. Using assert might be a better
> idea, since at that point it is fatal anyway, but that would cause a
> data leak, unless the program catches SIGABRT (it should catch others
> like SIGINT anyway) and wipes the data, but that requires that all
> sensitive data in the program is global.

i'd let the user do the bad thing (silently overflow);
if it matters it can be designed around in the application
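
a sketch of what "design around it in the application" could mean:
keep a byte counter next to the hash context and refuse input past the
limit implied by the 64-bit bit counter (sha1_update(), app_feed and
HASH_MAX_BYTES below are placeholder names, not an existing api):

#include <stddef.h>
#include <stdint.h>

#define HASH_MAX_BYTES (UINT64_MAX / 8) /* limit implied by a 64-bit bit counter */

struct app_hash {
	uint64_t fed; /* bytes fed so far, counted by the application */
	/* struct sha1_ctx ctx; */
};

/* returns 0 if the chunk is accepted, -1 if it would exceed the limit */
int app_feed(struct app_hash *h, const void *buf, size_t len)
{
	if (len > HASH_MAX_BYTES - h->fed)
		return -1;
	h->fed += len;
	/* sha1_update(&h->ctx, buf, len); */
	(void)buf;
	return 0;
}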
