[3.13.y-ckt stable] Patch "ipc/sem.c: update/correct memory barriers" has been added to staging queue

Kamal Mostafa kamal at canonical.com
Tue Sep 22 00:32:47 UTC 2015


This is a note to let you know that I have just added a patch titled

    ipc/sem.c: update/correct memory barriers

to the linux-3.13.y-queue branch of the 3.13.y-ckt extended stable tree 
which can be found at:

    http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.13.y-queue

This patch is scheduled to be released in version 3.13.11-ckt27.

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.13.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

------

From da6c1a2fd0f407ec435a8da0a69c9c1564c5ded2 Mon Sep 17 00:00:00 2001
From: Manfred Spraul <manfred at colorfullife.com>
Date: Fri, 14 Aug 2015 15:35:10 -0700
Subject: ipc/sem.c: update/correct memory barriers

commit 3ed1f8a99d70ea1cd1508910eb107d0edcae5009 upstream.

sem_lock() did not properly pair memory barriers:

!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier; otherwise the CPU might perform read
operations before the lock test.

As no primitive exists inside <include/spinlock.h> and since it seems
no one wants another primitive, the code creates a local primitive within
ipc/sem.c.
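
To make the pairing concrete, here is a minimal userspace sketch with C11
atomics (added for illustration only; this is an analogy, not the kernel
code).  perm_locked and complex_count are hypothetical stand-ins for
sem_perm.lock and sma->complex_count, and the acquire fence plays the role
that ipc_smp_acquire__after_spin_is_unlocked() plays in the patch, reduced
to just the lock-test-and-recheck step of sem_lock():

    /* Hypothetical userspace analogue of the sem_lock() fast path, using
     * C11 atomics.  Names are illustrative, not the kernel's.
     */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool perm_locked;    /* stands in for sem_perm.lock      */
    static atomic_int  complex_count;  /* stands in for sma->complex_count */

    /* Another thread, holding the global lock: bump the count, unlock. */
    static void complex_op_begin(void)
    {
            atomic_fetch_add_explicit(&complex_count, 1, memory_order_relaxed);
            atomic_store_explicit(&perm_locked, false, memory_order_release);
    }

    /* Fast path: the "if" on the lock word is only a control dependency;
     * without the acquire fence the CPU may read complex_count before the
     * lock test and observe a stale 0.
     */
    static bool fast_path_ok(void)
    {
            if (!atomic_load_explicit(&perm_locked, memory_order_relaxed)) {
                    /* analogue of ipc_smp_acquire__after_spin_is_unlocked() */
                    atomic_thread_fence(memory_order_acquire);

                    if (atomic_load_explicit(&complex_count,
                                             memory_order_relaxed) == 0)
                            return true;  /* per-semaphore lock suffices */
            }
            return false;                 /* take the global lock instead */
    }

The same ordering requirement is why sem_wait_array() gains the new
primitive after its spin_unlock_wait() loop: the reads that follow must not
be satisfied from before the observed unlocks.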

With regards to -stable:

The change to sem_wait_array() is a bugfix; the change to sem_lock() is a
nop (just a preprocessor redefinition to improve readability).  The
bugfix is necessary for all kernels that use sem_wait_array() (i.e.:
starting from 3.10).

Signed-off-by: Manfred Spraul <manfred at colorfullife.com>
Reported-by: Oleg Nesterov <oleg at redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Cc: "Paul E. McKenney" <paulmck at linux.vnet.ibm.com>
Cc: Kirill Tkhai <ktkhai at parallels.com>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: Josh Poimboeuf <jpoimboe at redhat.com>
Cc: Davidlohr Bueso <dave at stgolabs.net>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
 ipc/sem.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 0a792d0..77febd7 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -253,6 +253,16 @@ static void sem_rcu_free(struct rcu_head *head)
 }

 /*
+ * spin_unlock_wait() and !spin_is_locked() are not memory barriers, they
+ * are only control barriers.
+ * The code must pair with spin_unlock(&sem->lock) or
+ * spin_unlock(&sem_perm.lock), thus just the control barrier is insufficient.
+ *
+ * smp_rmb() is sufficient, as writes cannot pass the control barrier.
+ */
+#define ipc_smp_acquire__after_spin_is_unlocked()	smp_rmb()
+
+/*
  * Wait until all currently ongoing simple ops have completed.
  * Caller must own sem_perm.lock.
  * New simple ops cannot start, because simple ops first check
@@ -275,6 +285,7 @@ static void sem_wait_array(struct sem_array *sma)
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
+	ipc_smp_acquire__after_spin_is_unlocked();
 }

 /*
@@ -327,13 +338,12 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		/* Then check that the global lock is free */
 		if (!spin_is_locked(&sma->sem_perm.lock)) {
 			/*
-			 * The ipc object lock check must be visible on all
-			 * cores before rechecking the complex count.  Otherwise
-			 * we can race with  another thread that does:
+			 * We need a memory barrier with acquire semantics,
+			 * otherwise we can race with another thread that does:
 			 *	complex_count++;
 			 *	spin_unlock(sem_perm.lock);
 			 */
-			smp_rmb();
+			ipc_smp_acquire__after_spin_is_unlocked();

 			/*
 			 * Now repeat the test of complex_count:
--
1.9.1




