[PATCH 4.2.y-ckt 028/268] arm64: mm: ensure that the zero page is visible to the page table walker

Kamal Mostafa <kamal@canonical.com>
Wed Jan 27 20:31:29 UTC 2016


4.2.8-ckt3 -stable review patch.  If anyone has any objections, please let me know.

---8<------------------------------------------------------------

From: Will Deacon <will.deacon@arm.com>

commit 32d6397805d00573ce1fa55f408ce2bca15b0ad3 upstream.

In paging_init, we allocate the zero page, memset it to zero and then
point TTBR0 to it in order to avoid speculative fetches through the
identity mapping.

In order to guarantee that the freshly zeroed page is indeed visible to
the page table walker, we need to execute a dsb instruction prior to
writing the TTBR.
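
For reference, the ordering the new barrier enforces is sketched below. Only
dsb(ishst), virt_to_page() and empty_zero_page appear in the hunk itself; the
surrounding calls (early_alloc(), cpu_set_reserved_ttbr0()) are reconstructed
from the rest of mmu.c and are illustrative only, not a quote of the file:

void __init paging_init(void)
{
	void *zero_page;

	...
	/* early_alloc() hands back memory that has been memset to zero */
	zero_page = early_alloc(PAGE_SIZE);
	...
	empty_zero_page = virt_to_page(zero_page);

	/*
	 * Make the zeroing stores visible to the page table walker before
	 * TTBR0 is updated: DSB ISHST waits for prior stores to complete
	 * within the inner-shareable domain.
	 */
	dsb(ishst);

	/* Writes the zero page's physical address into TTBR0_EL1 */
	cpu_set_reserved_ttbr0();
}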

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
---
 arch/arm64/mm/mmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6a0dc30..3d016b9 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -451,6 +451,9 @@ void __init paging_init(void)
 
 	empty_zero_page = virt_to_page(zero_page);
 
+	/* Ensure the zero page is visible to the page table walker */
+	dsb(ishst);
+
 	/*
 	 * TTBR0 is only used for the identity mapping at this stage. Make it
 	 * point to zero page to avoid speculatively fetching new entries.
-- 
1.9.1
