Date: Mon, 8 Jan 2024 17:08:35 -0500
From: Rich Felker <dalias@...c.org>
To: Markus Wichmann <nullplan@....net>
Cc: musl@...ts.openwall.com, Patrick Rauscher <listen@...uscher.de>
Subject: Re: ENOFILE after reaching SEM_NSEMS_MAX

On Mon, Jan 08, 2024 at 06:16:49PM +0100, Markus Wichmann wrote:
> On Mon, Jan 08, 2024 at 03:01:46PM +0100, Patrick Rauscher wrote:
> > Hello everyone,
> >
> > in POSIX, the constant SEM_NSEMS_MAX has been defined as 256 for some
> > time. While glibc nowadays ignores the limit and yields -1 when asked
> > for it [1], musl currently returns ENOFILE when the 256th semaphore is
> > requested.
> >
> > This hit me (and evidently another person) when using Python
> > multiprocessing [2]: Jan H found that allocating a large number of
> > semaphores (or objects using at least one semaphore) works on Debian
> > but fails on Alpine.
> >
> > Thanks to psykose on IRC, the issue could be traced to the different
> > libc. To make this finding less ephemeral than an IRC log, I am
> > leaving a note here. As sem_open works in the documented way, this is
> > certainly not a bug report, but rather a feature request, if you will.
> >
> > Due to my lack of C knowledge I can only stand by and marvel at the
> > further discussion, but maybe someone can come up with ideas 🙂
> >
> > Thanks for your time,
> > Patrick
> >
> > 1: https://sourceware.org/legacy-ml/libc-help/2012-02/msg00003.html
> > 2: https://stackoverflow.com/questions/77679957/python-multiprocessing-no-file-descriptors-available-error-inside-docker-alpine
> >
> I should probably explain what the limit is used for at the moment.
> POSIX requires that all successful calls to sem_open() with the same
> name argument return the same pointer and increment a refcount.
> Practically, this is only possible by having a list of all file-backed
> semaphores. The code goes through the file mapping code normally, then
> at the end checks if the semaphore was mapped already, and if so unmaps
> the new copy and returns the old one.
> 
> With the limit in place, the memory for the semaphore map can be
> allocated once and never returned. Iterations on the map have a
> well-defined end, and there's a defined maximum run-time to all such
> loops. If the limit were no longer imposed, the semaphore map would
> have to become a data structure capable of growth, e.g. a linked
> list, and then you'd get all the negative effects of such a structure
> vs. the flat array we currently have: whatever structure you choose,
> you basically always get worse performance than with a simple array.
> 
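As a concrete sketch of that scheme (with illustrative names, field
layout, and error handling; this is not musl's actual code), the map
amounts to something like:

#include <errno.h>
#include <semaphore.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>

#define SEM_NSEMS_MAX 256

/* One slot per named semaphore currently open in this process. */
static struct {
	ino_t ino;    /* identity of the backing file */
	sem_t *sem;   /* address the semaphore is mapped at */
	int refcnt;   /* number of sem_open() calls sharing it */
} semtab[SEM_NSEMS_MAX];

/* Called after the file has been mapped: if a semaphore backed by
 * the same inode is already present, drop the new mapping and hand
 * back the old one, so equal names yield equal pointers. */
static sem_t *semtab_add(ino_t ino, sem_t *newmap, size_t maplen)
{
	int free_slot = -1;
	for (int i = 0; i < SEM_NSEMS_MAX; i++) {
		if (semtab[i].refcnt && semtab[i].ino == ino) {
			munmap(newmap, maplen);   /* duplicate mapping */
			semtab[i].refcnt++;
			return semtab[i].sem;
		}
		if (!semtab[i].refcnt && free_slot < 0) free_slot = i;
	}
	if (free_slot < 0) {
		errno = EMFILE;   /* table full: the hard limit */
		return NULL;
	}
	semtab[free_slot].ino = ino;
	semtab[free_slot].sem = newmap;
	semtab[free_slot].refcnt = 1;
	return newmap;
}

The bounded iteration and the statically sized table are exactly what
the fixed limit buys.
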
> For the python example, I would ask whether this pattern of creating
> named semaphores with random names is really all that wise. What is the
> point? You have to transmit the name to the destination process somehow.
> This is not really what named semaphores are for. Perhaps shared
> memory would suit you better? Or foregoing semaphores in favor of
> another primitive altogether?

Indeed, while I don't yet fully understand what they're trying to do,
named semaphores do not sound like a suitable tool here, where there
seems to be one named sem per synchronization primitive or something.

Named semaphores are *incredibly* inefficient. On a system with a
reasonable normal page size of 4k, named sems have 25500% overhead vs
an anon sem. On a system with large 64k pages, that jumps to 409500%
overhead. Moreover, Linux imposes limits on the number of mmaps a
process has (default limit is 64k, after consolidation of adjacent
anon maps with same permissions), and each map also contributes to
open file limits, etc. Even just using the musl limit of 256 named
sems (which is all that POSIX guarantees you; requiring more is
nonportable), you're wasting 1M of memory on a 4k-page system (16M on
a 64k-page system) on storage for the semaphores, and probably that
much or more again for all of the inodes, vmas, and other kernel
bookkeeping. At thousands of named sems, it gets far worse.
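
(Spelling out the arithmetic, assuming sizeof(sem_t) is 16 bytes: each
named sem pins a whole page, so on 4k pages the waste per sem is
(4096 - 16) / 16 = 255x the useful size, i.e. 25500%; on 64k pages it
is (65536 - 16) / 16 = 4095x, i.e. 409500%. 256 named sems then pin
256 * 4k = 1M, or 256 * 64k = 16M, of page storage alone.)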

I'm not familiar enough with the problem to know what the right thing
is, but if these are all related processes, they should probably be
allocating their semaphores as anon ones inside one or more
shared-memory maps. This would have far lower overhead, and would not
run into the limit hit here.
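
As an illustration (a sketch only, assuming the processes are related
by fork; the names here are mine, not anything from the Python case),
one anonymous shared mapping can hold any number of process-shared
sems:

#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NSEMS 1024

int main(void)
{
	/* One shared mapping instead of one file + one page per sem. */
	sem_t *sems = mmap(0, NSEMS * sizeof(sem_t),
	                   PROT_READ|PROT_WRITE,
	                   MAP_SHARED|MAP_ANONYMOUS, -1, 0);
	if (sems == MAP_FAILED) return 1;

	/* pshared=1 makes each sem usable across processes. */
	for (int i = 0; i < NSEMS; i++)
		sem_init(&sems[i], 1, 0);

	if (fork() == 0) {
		sem_post(&sems[42]);   /* child signals */
		_exit(0);
	}
	sem_wait(&sems[42]);           /* parent waits */
	wait(0);
	printf("synchronized on anon sem 42\n");
	return 0;
}

For unrelated processes, the same layout works in a single shm_open()
file mapped MAP_SHARED: still one map, and a handful of pages total.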

Rich
