[ 3.5.y.z extended stable ] Patch "net: force a reload of first item in hlist_nulls_for_each_entry_rcu" has been added to staging queue

Luis Henriques luis.henriques at canonical.com
Mon Jun 24 08:19:17 UTC 2013


This is a note to let you know that I have just added a patch titled

    net: force a reload of first item in hlist_nulls_for_each_entry_rcu

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.5.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Luis

------

From ab47a4791984fdcecb143fc08f2c52171291151d Mon Sep 17 00:00:00 2001
From: Eric Dumazet <eric.dumazet at gmail.com>
Date: Wed, 29 May 2013 09:06:27 +0000
Subject: [PATCH] net: force a reload of first item in
 hlist_nulls_for_each_entry_rcu

commit c87a124a5d5e8cf8e21c4363c3372bcaf53ea190 upstream.

Roman Gushchin discovered that udp4_lib_lookup2() was not reloading the
first item in the RCU-protected list when the loop was restarted.

This produced soft lockups as in https://lkml.org/lkml/2013/4/16/37
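For context, here is a minimal kernel-style sketch of the lookup-with-restart
pattern such RCU/nulls readers use (see Documentation/RCU/rculist_nulls.txt).
struct obj, obj_table and the obj_node member are illustrative names only,
not the actual udp4_lib_lookup2() code:

/*
 * Illustrative only.  Assume:
 *   struct obj { unsigned long key; struct hlist_nulls_node obj_node; };
 *   struct hlist_nulls_head obj_table[SLOTS];   /* hashed by slot */
 */
static struct obj *obj_lookup(unsigned int slot, unsigned long key)
{
	struct obj *obj;
	struct hlist_nulls_node *node;

	rcu_read_lock();
begin:
	hlist_nulls_for_each_entry_rcu(obj, node, &obj_table[slot], obj_node) {
		if (obj->key == key)
			goto found;
	}
	/*
	 * The walk ended on a nulls marker.  If it belongs to another slot,
	 * an object was moved to a different chain during the walk, so the
	 * walk must restart from the (possibly new) first element.  If the
	 * compiler instead reuses a cached first element, this goto can spin
	 * forever: the soft lockup mentioned above.
	 */
	if (get_nulls_value(node) != slot)
		goto begin;
	obj = NULL;
found:
	rcu_read_unlock();
	return obj;
}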

rcu_dereference(X)/ACCESS_ONCE(X) seem not to work as intended if X is
ptr->field:

In some cases, gcc caches the value of ptr->field in a register.

Use a barrier() to disallow such caching, as documented in
Documentation/atomic_ops.txt line 114
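For reference, barrier() is the kernel's compiler barrier; its generic gcc
definition (in include/linux/compiler-gcc.h) is essentially an empty asm
statement whose "memory" clobber forces gcc to throw away any value it has
cached from memory, so the list head is re-read when the loop restarts:

/* generic gcc version, roughly as found in include/linux/compiler-gcc.h */
#define barrier() __asm__ __volatile__("" : : : "memory")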

Thanks a lot to Roman for providing analysis and numerous patches.

Diagnosed-by: Roman Gushchin <klamm at yandex-team.ru>
Signed-off-by: Eric Dumazet <edumazet at google.com>
Reported-by: Boris Zhmurov <zhmurov at yandex-team.ru>
Signed-off-by: Roman Gushchin <klamm at yandex-team.ru>
Acked-by: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem at davemloft.net>
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
 include/linux/rculist_nulls.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
index 2ae1371..1c33dd7 100644
--- a/include/linux/rculist_nulls.h
+++ b/include/linux/rculist_nulls.h
@@ -105,9 +105,14 @@ static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
  * @head:	the head for your list.
  * @member:	the name of the hlist_nulls_node within the struct.
  *
+ * The barrier() is needed to make sure compiler doesn't cache first element [1],
+ * as this loop can be restarted [2]
+ * [1] Documentation/atomic_ops.txt around line 114
+ * [2] Documentation/RCU/rculist_nulls.txt around line 146
  */
 #define hlist_nulls_for_each_entry_rcu(tpos, pos, head, member)			\
-	for (pos = rcu_dereference_raw(hlist_nulls_first_rcu(head));		\
+	for (({barrier();}),							\
+	     pos = rcu_dereference_raw(hlist_nulls_first_rcu(head));		\
 		(!is_a_nulls(pos)) &&						\
 		({ tpos = hlist_nulls_entry(pos, typeof(*tpos), member); 1; }); \
 		pos = rcu_dereference_raw(hlist_nulls_next_rcu(pos)))
--
1.8.1.2




