[3.19.y-ckt stable] Patch "mm: fix invalid node in alloc_migrate_target()" has been added to the 3.19.y-ckt tree

Kamal Mostafa <kamal@canonical.com>
Mon Apr 11 23:41:51 UTC 2016


This is a note to let you know that I have just added a patch titled

    mm: fix invalid node in alloc_migrate_target()

to the linux-3.19.y-queue branch of the 3.19.y-ckt extended stable tree 
which can be found at:

    http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.19.y-queue

This patch is scheduled to be released in version 3.19.8-ckt19.

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.19.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

---8<------------------------------------------------------------

From 279dec84e027672814ddedcd0e93a02e09c0728a Mon Sep 17 00:00:00 2001
From: Xishi Qiu <qiuxishi@huawei.com>
Date: Fri, 1 Apr 2016 14:31:20 -0700
Subject: mm: fix invalid node in alloc_migrate_target()

commit 6f25a14a7053b69917e2ebea0d31dd444cd31fd5 upstream.

It is incorrect to use next_node() to find a target node; it can return
MAX_NUMNODES or an invalid node.  This leads to a crash in the buddy
allocator.

Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: "Laura Abbott" <lauraa@codeaurora.org>
Cc: Hui Zhu <zhuhui@xiaomi.com>
Cc: Wang Xiaoqiang <wangxq10@lzu.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
---
 mm/page_isolation.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 755a42c..ce5e8bb 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -299,11 +299,11 @@ struct page *alloc_migrate_target(struct page *page, unsigned long private,
 	 * now as a simple work-around, we use the next node for destination.
 	 */
 	if (PageHuge(page)) {
-		nodemask_t src = nodemask_of_node(page_to_nid(page));
-		nodemask_t dst;
-		nodes_complement(dst, src);
+		int node = next_online_node(page_to_nid(page));
+		if (node == MAX_NUMNODES)
+			node = first_online_node;
 		return alloc_huge_page_node(page_hstate(compound_head(page)),
-					    next_node(page_to_nid(page), dst));
+					    node);
 	}

 	if (PageHighMem(page))
--
2.7.4
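
For anyone who wants to see the corrected node-selection logic outside of the
kernel sources, here is a minimal userspace sketch of the pattern the hunk
above introduces: take the next online node after the source node, and wrap
around to the first online node when the scan runs off the end of the node
map.  The node count, bitmask, and helper below are stand-ins for
next_online_node() / first_online_node from <linux/nodemask.h>, not the real
kernel API.

/*
 * Toy model of the fix in alloc_migrate_target(): a plain bitmask stands in
 * for the online-node map and MAX_NUMNODES is the "no further node" sentinel,
 * mirroring next_online_node()/first_online_node.  Not kernel code.
 */
#include <stdio.h>

#define MAX_NUMNODES 8

/* Pretend nodes 1 and 3 are online on this imaginary machine. */
static const unsigned long online_mask = (1UL << 1) | (1UL << 3);

/* First online node numbered >= n, or MAX_NUMNODES if there is none. */
static int next_online(int n)
{
	while (n < MAX_NUMNODES && !(online_mask & (1UL << n)))
		n++;
	return n;
}

int main(void)
{
	int src;

	for (src = 0; src < MAX_NUMNODES; src++) {
		/* The fixed pattern: next online node after the source... */
		int node = next_online(src + 1);

		/* ...wrapping to the first online node at the end of the map. */
		if (node == MAX_NUMNODES)
			node = next_online(0);

		printf("source node %d -> target node %d\n", src, node);
	}
	return 0;
}

Every source node ends up with a valid online target; the old
nodes_complement()/next_node() combination could instead hand back an offline
node or MAX_NUMNODES, which is exactly the crash described in the commit
message.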
