[ 3.5.yuz extended stable ] Patch "x86, mm: Trim memory in memblock to be page aligned" has been added to staging queue

Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
Thu Nov 22 04:49:00 UTC 2012


This is a note to let you know that I have just added a patch titled

    x86, mm: Trim memory in memblock to be page aligned

to the linux-3.5.y-queue branch of the 3.5.yuz extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feel it should not be added to this tree, please
reply to this email.

For more information about the 3.5.yuz tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Herton

------

From 6ebc8cf9b27f0894d8613ee9c533da12620a3946 Mon Sep 17 00:00:00 2001
From: Yinghai Lu <yinghai@kernel.org>
Date: Mon, 22 Oct 2012 16:35:18 -0700
Subject: [PATCH] x86, mm: Trim memory in memblock to be page aligned

commit 6ede1fd3cb404c0016de6ac529df46d561bd558b upstream.

We will not map partial pages, so we need to make sure memblock
allocation will not hand those bytes out.

Also, we will use for_each_mem_pfn_range() to loop over and map the
memory ranges, to keep them consistent.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/CAE9FiQVZirvaBMFYRfXMmWEcHbKSicQEHz4VAwUv0xFCk51ZNw@mail.gmail.com
Acked-by: Jacob Shin <jacob.shin@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
---
 arch/x86/kernel/e820.c   |    3 +++
 include/linux/memblock.h |    1 +
 mm/memblock.c            |   24 ++++++++++++++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 4185797..a42889d 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1077,6 +1077,9 @@ void __init memblock_x86_fill(void)
 		memblock_add(ei->addr, ei->size);
 	}

+	/* throw away partial pages */
+	memblock_trim_memory(PAGE_SIZE);
+
 	memblock_dump_all();
 }

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 19dc455..c948c44 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -57,6 +57,7 @@ int memblock_add(phys_addr_t base, phys_addr_t size);
 int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_free(phys_addr_t base, phys_addr_t size);
 int memblock_reserve(phys_addr_t base, phys_addr_t size);
+void memblock_trim_memory(phys_addr_t align);

 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
diff --git a/mm/memblock.c b/mm/memblock.c
index 5cc6731..900a97d 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -928,6 +928,30 @@ int __init_memblock memblock_is_region_reserved(phys_addr_t base, phys_addr_t si
 	return memblock_overlaps_region(&memblock.reserved, base, size) >= 0;
 }

+void __init_memblock memblock_trim_memory(phys_addr_t align)
+{
+	int i;
+	phys_addr_t start, end, orig_start, orig_end;
+	struct memblock_type *mem = &memblock.memory;
+
+	for (i = 0; i < mem->cnt; i++) {
+		orig_start = mem->regions[i].base;
+		orig_end = mem->regions[i].base + mem->regions[i].size;
+		start = round_up(orig_start, align);
+		end = round_down(orig_end, align);
+
+		if (start == orig_start && end == orig_end)
+			continue;
+
+		if (start < end) {
+			mem->regions[i].base = start;
+			mem->regions[i].size = end - start;
+		} else {
+			memblock_remove_region(mem, i);
+			i--;
+		}
+	}
+}

 void __init_memblock memblock_set_current_limit(phys_addr_t limit)
 {
--
1.7.9.5
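
As a side note (not part of the patch above): the trimming boils down to
rounding a region's start up and its end down to the requested alignment,
and discarding the region entirely if no whole page remains. Below is a
minimal user-space sketch of that arithmetic; PAGE_SZ, struct region,
trim_region() and the sample addresses are illustrative stand-ins, not the
kernel's memblock types or values.

#include <stdio.h>
#include <stdint.h>

/* stand-in for the kernel's PAGE_SIZE (named PAGE_SZ to avoid clashes
 * with system headers) */
#define PAGE_SZ 4096ULL

/* round_down/round_up for power-of-two alignments, as the kernel
 * helpers do */
#define round_down(x, a) ((x) & ~((uint64_t)(a) - 1))
#define round_up(x, a)   round_down((x) + (uint64_t)(a) - 1, (a))

/* illustrative stand-in for a memblock region */
struct region {
	uint64_t base;
	uint64_t size;
};

/* Trim a region to whole pages; returns 0 if nothing usable remains. */
static int trim_region(struct region *r, uint64_t align)
{
	uint64_t start = round_up(r->base, align);
	uint64_t end   = round_down(r->base + r->size, align);

	if (start >= end)
		return 0;	/* smaller than one page: drop it */
	r->base = start;
	r->size = end - start;
	return 1;
}

int main(void)
{
	/* e.g. a firmware range starting 0x100 bytes into a page */
	struct region keep = { 0x100000100ULL, 0x200000ULL };
	/* a sub-page sliver that cannot cover any whole page */
	struct region drop = { 0x100000100ULL, 0x400ULL };

	if (trim_region(&keep, PAGE_SZ))
		printf("kept: base=%#llx size=%#llx\n",
		       (unsigned long long)keep.base,
		       (unsigned long long)keep.size);

	if (!trim_region(&drop, PAGE_SZ))
		printf("dropped: no whole page in region\n");
	return 0;
}

The two branches mirror the patch: a range that still covers at least one
full page is narrowed in place, while a sub-page sliver is removed
entirely, as the memblock_remove_region() branch does above.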