Date: Wed, 19 Jan 2022 07:19:49 +0000
From: "zhaohang (F)" <zhaohang14@...wei.com>
To: "musl@...ts.openwall.com" <musl@...ts.openwall.com>
CC: "zhangwentao (M)" <zhangwentao234@...wei.com>
Subject: Re: Re: What's the purpose of the __vm_lock?

I'll check the code to understand the data race conditions.

Thanks for your reply.

-----Original Message-----
From: Rich Felker [mailto:dalias@...c.org]
Sent: January 13, 2022 11:44
To: zhaohang (F) <zhaohang14@...wei.com>
Cc: musl@...ts.openwall.com; zhangwentao (M) <zhangwentao234@...wei.com>
Subject: Re: [musl] What's the purpose of the __vm_lock?

On Thu, Jan 13, 2022 at 03:09:55AM +0000, zhaohang (F) wrote:
> Hello,
> 
> I'm a little confused about the usefulness of __vm_lock. It seems 
> that __vm_lock was originally introduced to prevent a data race 
> between pthread_barrier_wait and changes to virtual memory, but it 
> cannot ensure that the virtual memory backing the barrier is not 
> changed before pthread_barrier_wait. So what is the point of 
> introducing __vm_lock to prevent the data race?

In a couple places, it's necessary to hold a lock that prevents virtual addresses from being freed and possibly reused by something else. Aside from the pthread_barrier_wait thing (which might be buggy; there's a recent post to the list about that), the big user is process-shared robust mutexes:

The futex address is put on a pending slot in the robust list structure shared with the kernel while it's being unlocked. At that moment, it's possible for another thread to obtain the mutex, release it, destroy it, free the memory, and map some other shared memory in its place. If the process is killed at the same time, and the newly-mapped shared memory happens to contain the tid of the previous owner thread at that location, the kernel will clobber that as part of the exiting process cleanup. This could be a map of a file on disk, meaning it's a potential disk corruption bug.
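
To make the moving parts concrete, here is a minimal standalone sketch
(plain POSIX API only, nothing musl-internal; compile with -pthread)
of the kernel-side robust-list cleanup this race revolves around. The
kernel walks the dead owner's robust list and writes the owner-died
bit into the futex word -- that write is exactly what can land in
unrelated memory if the mapping has been replaced in the meantime:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* Shared anonymous mapping so parent and child see the same mutex. */
	pthread_mutex_t *m = mmap(0, sizeof *m, PROT_READ|PROT_WRITE,
	                          MAP_SHARED|MAP_ANONYMOUS, -1, 0);
	if (m == MAP_FAILED) return 1;

	pthread_mutexattr_t a;
	pthread_mutexattr_init(&a);
	pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
	pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
	pthread_mutex_init(m, &a);

	if (!fork()) {
		/* Child: acquire the mutex, then die holding it. At exit
		 * the kernel walks the child's robust list and sets the
		 * owner-died bit in the futex word -- the kernel write
		 * discussed above. */
		pthread_mutex_lock(m);
		_exit(0);
	}
	wait(0);

	/* Parent: the kernel's cleanup is reported as EOWNERDEAD. */
	if (pthread_mutex_lock(m) == EOWNERDEAD) {
		puts("previous owner died; making mutex consistent");
		pthread_mutex_consistent(m);
	}
	pthread_mutex_unlock(m);
	return 0;
}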

This is fundamentally a design flaw in the way the Linux robust mutex protocol works, but it's unfixable. So all we can do is take a very heavy lock to prevent the situation from arising.

Note that, for musl, this applies to non-robust process-shared mutexes too, since we use the robust list to implement the permanently unobtainable state required when the owner exits without unlocking the mutex. (glibc doesn't do this; it ignores the POSIX requirement and lets a future thread that happens to get the same TID obtain false ownership.)

The operations which are blocked by the vm lock are anything which unmaps memory (including MAP_FIXED over other memory), as well as the pthread_mutex_destroy operation (since it might allow the memory in a shared map to be reused for something else too).
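
For reference, this kind of lock can be built as an asymmetric
counter-plus-wait construct. The sketch below uses C11 atomics and
borrows musl's names (__vm_lock/__vm_unlock/__vm_wait) for clarity,
but it is an illustration of the idea, not musl's actual
implementation (which uses its own atomics and futex wait/wake):

#include <stdatomic.h>

/* Count of in-flight operations that depend on addresses staying
 * valid (e.g. the pending-slot window in a robust-mutex unlock). */
static atomic_int vmlock_users;

void __vm_lock(void)
{
	/* Cheap on the hot path: just bump a counter. */
	atomic_fetch_add(&vmlock_users, 1);
}

void __vm_unlock(void)
{
	atomic_fetch_sub(&vmlock_users, 1);
	/* A real implementation would futex-wake any __vm_wait
	 * callers when the count drops to zero. */
}

/* Called by munmap, mmap with MAP_FIXED, pthread_mutex_destroy,
 * etc.: block until no address-sensitive operation is in flight. */
void __vm_wait(void)
{
	while (atomic_load(&vmlock_users))
		; /* a real implementation would futex-wait, not spin */
}

The asymmetry is the point: the frequent lock/unlock paths only touch
an atomic counter, while the comparatively rare unmap/destroy paths
pay the cost of waiting for it to drain.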

Does this explanation help?

Rich
