Date: Tue, 20 Jun 2017 17:41:35 +0300 (MSK)
From: Alexander Monakov <amonakov@...ras.ru>
To: musl@...ts.openwall.com
cc: Jens Gustedt <jens.gustedt@...ia.fr>
Subject: Re: [PATCH] (V2) a new lock algorithm with lock value and CS counts in the same atomic int

On Fri, 16 Jun 2017, Jens Gustedt wrote:
> --- a/src/thread/__lock.c
> +++ b/src/thread/__lock.c
> @@ -2,14 +2,46 @@
>  
>  void __lock(volatile int *l)
>  {
[...]
> +	// Spinning failed, so mark ourselves as being inside the CS.
> +	current = a_fetch_add(l, 1) + 1;
> +	/* The main lock acquisition loop for heavy congestion. The only
> +	   change to the lock value inside this loop is the successful
> +	   CAS that acquires the lock. */
> +	for (;;) {
> +		/* We can only go into wait if we know that somebody holds
> +		   the lock and will eventually wake us up again. */
> +		if (current < 0) {
> +			__futexwait(l, current, 1);

Here you reuse the value of 'current' after resuming from the futex wait. But
if the lock is under contention, other threads have most likely modified the
lock word while we were sleeping, so the value of 'current' is stale, and the
following a_cas is all but guaranteed to fail. Wouldn't it be better to reload
current = *l after resuming?
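
Something like this untested sketch is what I have in mind for the main
loop, reusing the helpers already present in the patch:

	for (;;) {
		if (current < 0) {
			__futexwait(l, current, 1);
			/* Other threads have likely changed the lock word
			   while we slept; re-read it rather than deriving
			   the expected value from the pre-wait snapshot. */
			current = *l;
			continue;
		}
		/* assertion: current > 0, the count includes us already */
		int val = a_cas(l, current, INT_MIN + current);
		if (val == current) return;
		current = val;
	}

The 'continue' matters: if the re-read still shows the lock as held, we
go back to waiting with a fresh expected value instead of feeding a
negative 'current' into the CAS.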

Alexander

> +			current -= INT_MIN + 1;
> +		}
> +		/* assertion: current > 0, because the count
> +		   includes us already. */
> +		int val = a_cas(l, current, INT_MIN + current);
> +		if (val == current) return;
> +		current = val;
> +	}
>  }
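
For readers following along, the encoding at work, as I read the patch:
the sign bit (set via INT_MIN) marks the lock as held, and the remaining
bits count the threads currently inside the CS, holder included. A few
illustrative values (my own, not taken from the patch):

	#include <limits.h>

	/* Presumed encoding: sign bit = "lock held",
	   low bits = number of threads inside the CS. */
	int unlocked_empty = 0;            /* free, CS empty */
	int held_alone     = INT_MIN + 1;  /* held, only the holder inside */
	int held_contended = INT_MIN + 3;  /* held, holder plus two others */
	/* A resumed waiter's guess at the post-unlock value:
	   (INT_MIN + 3) - (INT_MIN + 1) == 2, i.e. unlocked with two
	   threads (including itself) still counted. This is exactly the
	   guess that goes stale when other threads move in the meantime. */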
