Date: Wed, 23 Aug 2023 13:57:04 +0000
From: Mike Granby <mikeg@...eg.net>
To: Rich Felker <dalias@...c.org>
CC: "musl@...ts.openwall.com" <musl@...ts.openwall.com>
Subject: RE: sizeof(pthread_mutex_t) on aarch64

Yeah, that all makes sense. I suspected that would be the case, but I figured I'd memorialize the issue in case someone else came across it. This way, they might at least find it via a search. Just for my own interest, I'm going to have a quick look at why glibc ends up with a different size on aarch64: it seems to be the outlier, although my sample size is not large. Re gcompat, that didn't fix this particular issue, and in any case I really didn't like what its shimming did to cmdline in procfs, but I'll certainly keep an eye on its development. Thanks for the prompt response...

-----Original Message-----
From: Rich Felker <dalias@...c.org> 
Sent: Wednesday, August 23, 2023 9:30 AM
To: Mike Granby <mikeg@...eg.net>
Cc: musl@...ts.openwall.com
Subject: Re: [musl] sizeof(pthread_mutex_t) on aarch64

On Wed, Aug 23, 2023 at 02:09:35AM +0000, Mike Granby wrote:
> I've been running glibc-compiled programs on Alpine and thus on musl 
> with a high level of success, but I just hit the wall when working on 
> a Raspberry PI running aarch64 Alpine. I tracked the issue down to a 
> difference in the size of pthread_mutex_t. It appears that musl uses 6 
> ints for platforms with 32-bit longs, and 10 ints for those with 
> 64-bit longs, and this seems to match glibc on all of the platforms 
> I've played with to date. But on aarch64, it appears that glibc is 
> using 48 bytes rather than 40 bytes that musl expects. This doesn't 
> actually cause an issue in many cases as the application just 
> allocates too much space, but if you're using inlining and 
> std::ofstream, for example, you end up with the inline code in the 
> application having a different layout for the file buffer object 
> compared to that implemented on the target platform. Now, perhaps the 
> answer is just, well, stop expecting to run glibc code on musl, but 
> given that aarch64 seems to be an outlier in my experience to date, I 
> thought I'd mention it.

I guess that's interesting to know, but not something actionable. It's not like we can change and break ABI for the sake of glibc-ABI-compat, nor like we'd want to make every program use even more excess memory for mutexes than they already do. At some point the decision was made, based on our existing practice at the time and possibly loosely on glibc doing the same, to have the pthread types be arch-independent and depend only on wordsize. That determined the ABI for all archs added later, without the types being evaluated for glibc-ABI-compat, since they were no longer arch-specific types.

FWIW I think the program would likely "work" with a glibc version of libstdc++. However, there are a lot of other places where the ABI-compat breaks down, like mismatched stat structs, ipc structs, etc. The future roadmap is for glibc-ABI-compat handling to be shifted out of libc into the gcompat package, which could provide some sort of shims (with musl's ldso assisting by letting a delegated library interpose on modules that have a libc.so.6 dependency) and perhaps make something like this work.

Rich
