Date: Tue, 16 Apr 2013 14:56:28 -0500
From: "jfoug" <jfoug@....net>
To: <john-dev@...ts.openwall.com>
Subject: RE: Segfaults probably caused by DEBUG code in memory.c (was: Segfault for linux-x86-native with -DDEBUG added)

From: magnum Sent: Tuesday, April 16, 2013 14:32
>Not the same thing. In the normal non-debug code we obviously must do it,
>because we are re-using one large memory block for many smaller blocks.
>That is one of mem_alloc_tiny's two reasons to exist.

No, look at the very bottom of mem_alloc_tiny.  That code runs when a huge
block is requested (one larger than MEM_ALLOC_SIZE).  In that case,
mem_alloc_tiny allocates a separate block JUST for that object, without
touching the shared buffer at all.
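
For anyone not looking at memory.c right now, the pattern is roughly this (a
minimal sketch only -- the names, the constant's value and the missing
alignment handling are simplifications, not the actual JtR code):

#include <stdlib.h>

#define MEM_ALLOC_SIZE 0x10000   /* illustrative value */

static char  *pool;       /* current shared block */
static size_t pool_left;  /* bytes still unused in it */

static void *tiny_alloc_sketch(size_t size)
{
    void *p;

    /* Huge request: hand the object its own malloc()'d block and leave
     * the shared pool alone -- this corresponds to the path at the very
     * bottom of mem_alloc_tiny() being discussed here. */
    if (size > MEM_ALLOC_SIZE)
        return malloc(size);

    /* Normal request: carve it out of the shared pool, starting a fresh
     * MEM_ALLOC_SIZE block whenever the current one runs dry. */
    if (pool_left < size) {
        pool = malloc(MEM_ALLOC_SIZE);
        pool_left = pool ? MEM_ALLOC_SIZE : 0;
    }
    if (!pool)
        return NULL;

    p = pool;
    pool      += size;
    pool_left -= size;
    return p;
}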

That code is really what you wanted to do in DEBUG mode, just done for every
block.

In hindsight, in debug mode, you might have gotten the same results by doing
this:

#if DEBUG
#undef MEM_ALLOC_SIZE
#define MEM_ALLOC_SIZE 0
#endif

I think it would work out the same: with MEM_ALLOC_SIZE forced to 0, every
request counts as "larger than MEM_ALLOC_SIZE", so every allocation falls
through to that dedicated-block path at the bottom.  NOTE I have not tested
that hypothesis at all.
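
Just to illustrate what that presumably buys you, in terms of the sketch
above (hypothetical test code, not anything from the tree): once every
request takes the dedicated-malloc branch, each object is its own heap
allocation, so an overrun that the pooled path would quietly absorb becomes
something valgrind or ASan can flag.

#include <string.h>

int main(void)
{
    /* With MEM_ALLOC_SIZE forced to 0, both of these calls would take the
     * "huge request" branch in the sketch and get their own blocks. */
    char *a = tiny_alloc_sketch(16);
    char *b = tiny_alloc_sketch(16);

    if (!a || !b)
        return 1;

    /* Off-by-one write: with separate blocks this is a detectable heap
     * overflow; with the pooled path it would just clobber b's bytes. */
    memset(a, 'A', 17);

    return 0;
}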
