[3.13.y-ckt stable] Patch "net: do not deplete pfmemalloc reserve" has been added to staging queue

Kamal Mostafa kamal at canonical.com
Tue May 5 20:33:50 UTC 2015


This is a note to let you know that I have just added a patch titled

    net: do not deplete pfmemalloc reserve

to the linux-3.13.y-queue branch of the 3.13.y-ckt extended stable tree 
which can be found at:

    http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.13.y-queue

This patch is scheduled to be released in version 3.13.11-ckt20.

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.13.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

------

From 9e9c24cd213bb18c9c52dce355ea1dcbc04907e8 Mon Sep 17 00:00:00 2001
From: Eric Dumazet <edumazet at google.com>
Date: Wed, 22 Apr 2015 07:33:36 -0700
Subject: net: do not deplete pfmemalloc reserve

[ Upstream commit 79930f5892e134c6da1254389577fffb8bd72c66 ]

build_skb() should look at the page pfmemalloc status.
If it is set, this means the page allocator allocated this page in the
expectation that it would help to free other pages. The networking
stack can do that only if skb->pfmemalloc is also set.
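For illustration only (this sketch is not part of the patch; the struct
and function names are simplified stand-ins, not the kernel's real
definitions), the propagation rule can be modeled as a small
standalone C program:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures involved. */
struct page_model { bool pfmemalloc; };
struct skb_model  { bool head_frag; bool pfmemalloc; };

/*
 * Model of the build_skb() change: when the skb head is backed by a
 * page fragment, inherit the page's pfmemalloc status so the stack
 * can later refuse to feed this memory to consumers that are not
 * allowed to use the emergency reserve.
 */
static void model_build_skb(struct skb_model *skb,
			    const struct page_model *head_page,
			    unsigned int frag_size)
{
	if (frag_size) {
		skb->head_frag = true;
		if (head_page->pfmemalloc)
			skb->pfmemalloc = true;
	}
}

int main(void)
{
	struct page_model page = { .pfmemalloc = true };
	struct skb_model skb = { false, false };

	model_build_skb(&skb, &page, 2048);
	printf("head_frag=%d pfmemalloc=%d\n", skb.head_frag, skb.pfmemalloc);
	return 0;
}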

Also, we must refrain from using high-order pages from the pfmemalloc
reserve, so __page_frag_refill() must also use __GFP_NOMEMALLOC for
them. Under memory pressure, using order-0 pages is probably the best
strategy.
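Again for illustration only (the MODEL_GFP_* flag names and values
below are placeholders, not the kernel's gfp definitions), the refill
policy can be modeled like this:

#include <stdio.h>

/* Placeholder flag bits; the real __GFP_* values differ. */
#define MODEL_GFP_COMP        0x1u
#define MODEL_GFP_NOWARN      0x2u
#define MODEL_GFP_NOMEMALLOC  0x4u

/*
 * Model of the __page_frag_refill() change: an opportunistic
 * high-order allocation (order > 0) must never be satisfied from the
 * pfmemalloc reserve, so NOMEMALLOC is added; the order-0 fallback
 * keeps the caller's original mask untouched.
 */
static unsigned int model_refill_gfp(unsigned int gfp_mask,
				     unsigned int order)
{
	unsigned int gfp = gfp_mask;

	if (order)
		gfp |= MODEL_GFP_COMP | MODEL_GFP_NOWARN |
		       MODEL_GFP_NOMEMALLOC;
	return gfp;
}

int main(void)
{
	printf("order-3 mask: %#x\n", model_refill_gfp(0, 3));
	printf("order-0 mask: %#x\n", model_refill_gfp(0, 0));
	return 0;
}

The point of the model is that only the opportunistic high-order
attempt is barred from dipping into the reserve; an order-0 retry
proceeds with the caller's original mask.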

Signed-off-by: Eric Dumazet <edumazet at google.com>
Signed-off-by: David S. Miller <davem at davemloft.net>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
 net/core/skbuff.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6e42045..0885955 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -307,7 +307,11 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)

 	memset(skb, 0, offsetof(struct sk_buff, tail));
 	skb->truesize = SKB_TRUESIZE(size);
-	skb->head_frag = frag_size != 0;
+	if (frag_size) {
+		skb->head_frag = 1;
+		if (virt_to_head_page(data)->pfmemalloc)
+			skb->pfmemalloc = 1;
+	}
 	atomic_set(&skb->users, 1);
 	skb->head = data;
 	skb->data = data;
@@ -350,7 +354,8 @@ refill:
 			gfp_t gfp = gfp_mask;

 			if (order)
-				gfp |= __GFP_COMP | __GFP_NOWARN;
+				gfp |= __GFP_COMP | __GFP_NOWARN |
+				       __GFP_NOMEMALLOC;
 			nc->frag.page = alloc_pages(gfp, order);
 			if (likely(nc->frag.page))
 				break;
--
1.9.1




