[3.11.y.z extended stable] Patch "net: rds: fix per-cpu helper usage" has been added to staging queue

Luis Henriques luis.henriques at canonical.com
Tue Jan 21 11:03:16 UTC 2014


This is a note to let you know that I have just added a patch titled

    net: rds: fix per-cpu helper usage

to the linux-3.11.y-queue branch of the 3.11.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.11.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.11.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Luis

------

From 9ca68ea31452f6a7f788a06a1623acc40efb01eb Mon Sep 17 00:00:00 2001
From: Gerald Schaefer <gerald.schaefer at de.ibm.com>
Date: Thu, 16 Jan 2014 16:54:48 +0100
Subject: net: rds: fix per-cpu helper usage

commit c196403b79aa241c3fefb3ee5bb328aa7c5cc860 upstream.

commit ae4b46e9d "net: rds: use this_cpu_* per-cpu helper" broke per-cpu
handling for rds. chpfirst is the result of __this_cpu_read(), so it is
an absolute pointer and not __percpu. Therefore, __this_cpu_write()
should not operate on chpfirst, but rather on cache->percpu->first, just
like __this_cpu_read() did before.

Signed-off-by: Gerald Schaefer <gerald.schaefer at de.ibm.com>
Signed-off-by: David S. Miller <davem at davemloft.net>
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
 net/rds/ib_recv.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
index 8eb9501..b7ebe23 100644
--- a/net/rds/ib_recv.c
+++ b/net/rds/ib_recv.c
@@ -421,8 +421,7 @@ static void rds_ib_recv_cache_put(struct list_head *new_item,
 				 struct rds_ib_refill_cache *cache)
 {
 	unsigned long flags;
-	struct list_head *old;
-	struct list_head __percpu *chpfirst;
+	struct list_head *old, *chpfirst;

 	local_irq_save(flags);

@@ -432,7 +431,7 @@ static void rds_ib_recv_cache_put(struct list_head *new_item,
 	else /* put on front */
 		list_add_tail(new_item, chpfirst);

-	__this_cpu_write(chpfirst, new_item);
+	__this_cpu_write(cache->percpu->first, new_item);
 	__this_cpu_inc(cache->percpu->count);

 	if (__this_cpu_read(cache->percpu->count) < RDS_IB_RECYCLE_BATCH_COUNT)
@@ -452,7 +451,7 @@ static void rds_ib_recv_cache_put(struct list_head *new_item,
 	} while (old);


-	__this_cpu_write(chpfirst, NULL);
+	__this_cpu_write(cache->percpu->first, NULL);
 	__this_cpu_write(cache->percpu->count, 0);
 end:
 	local_irq_restore(flags);
--
1.8.3.2
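
As background (not part of the patch itself): below is a minimal sketch of the
per-cpu pattern the fix restores. The names example_cache, example_cache_percpu
and example_put are made up for illustration and are not the actual net/rds
definitions; IRQ disabling and the batching logic of the real
rds_ib_recv_cache_put() are omitted. The point is only that __this_cpu_read()
and __this_cpu_write() must be given the __percpu lvalue itself, while the
value returned by __this_cpu_read() is an ordinary pointer that the plain list
helpers can use.

/*
 * Illustrative sketch only -- hypothetical structures, not the net/rds ones.
 */
#include <linux/list.h>
#include <linux/percpu.h>

struct example_cache_percpu {
        struct list_head *first;
        unsigned long count;
};

struct example_cache {
        struct example_cache_percpu __percpu *percpu;
};

static void example_put(struct example_cache *cache, struct list_head *new_item)
{
        /* __this_cpu_read() yields an ordinary pointer for this CPU ... */
        struct list_head *chpfirst = __this_cpu_read(cache->percpu->first);

        if (!chpfirst)
                INIT_LIST_HEAD(new_item);
        else
                list_add_tail(new_item, chpfirst);

        /*
         * ... but writes must name the __percpu field again.  Passing the
         * local pointer, as in "__this_cpu_write(chpfirst, new_item)", is
         * exactly what the patch above removes.
         */
        __this_cpu_write(cache->percpu->first, new_item);
        __this_cpu_inc(cache->percpu->count);
}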




