[3.16.y-ckt stable] Patch "powerpc/kernel: Avoid memory corruption at early stage" has been added to staging queue

Luis Henriques luis.henriques at canonical.com
Mon Mar 2 13:38:22 UTC 2015

This is a note to let you know that I have just added a patch titled

    powerpc/kernel: Avoid memory corruption at early stage

to the linux-3.16.y-queue branch of the 3.16.y-ckt extended stable tree 
which can be found at:


This patch is scheduled to be released in version 3.16.7-ckt8.

If you, or anyone else, feel it should not be added to this tree, please 
reply to this email.

For more information about the 3.16.y-ckt tree, see



From 917de4717880e5ecc9f034d358d66c75a9a69a3c Mon Sep 17 00:00:00 2001
From: Gavin Shan <gwshan at linux.vnet.ibm.com>
Date: Thu, 8 Jan 2015 16:40:51 +1100
Subject: powerpc/kernel: Avoid memory corruption at early stage

commit 6f20e7f2e930211613a66d0603fa4abaaf3ce662 upstream.

When calling early_setup(), we pick "boot_paca" up for the master CPU
and initialize it with initialise_paca(). At that point, the SLB
shadow buffer hasn't been allocated yet, so updating it would
corrupt what we had at physical address 0, where the trap instruction is
usually stored.

This hasn't been observed to cause any trouble in practice, but is
obviously fishy.

Fixes: 6f4441ef7009 ("powerpc: Dynamically allocate slb_shadow from memblock")
Signed-off-by: Gavin Shan <gwshan at linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe at ellerman.id.au>
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
 arch/powerpc/kernel/paca.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index d6e195e8cd4c..5a23b69f8129 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -115,6 +115,14 @@ static struct slb_shadow * __init init_slb_shadow(int cpu)
 	struct slb_shadow *s = &slb_shadow[cpu];

+	/*
+	 * When we come through here to initialise boot_paca, the slb_shadow
+	 * buffers are not allocated yet. That's OK, we'll get one later in
+	 * boot, but make sure we don't corrupt memory at 0.
+	 */
+	if (!slb_shadow)
+		return NULL;
+
 	s->persistent = cpu_to_be32(SLB_NUM_BOLTED);
 	s->buffer_length = cpu_to_be32(sizeof(*s));
