[PATCH 3.19.y-ckt 59/86] x86/nmi/64: Fix a paravirt stack-clobbering bug in the NMI code

Kamal Mostafa kamal at canonical.com
Tue Oct 27 21:29:59 UTC 2015


3.19.8-ckt9 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski <luto at kernel.org>

commit 83c133cf11fb0e68a51681447e372489f052d40e upstream.

The NMI entry code that switches to the normal kernel stack needs to
be very careful not to clobber any extra stack slots on the NMI
stack.  The code is fine under the assumption that SWAPGS is just a
normal instruction that leaves the stack alone, but under paravirt
that assumption isn't true.  Use SWAPGS_UNSAFE_STACK instead.

This is part of a fix for some random crashes that Sasha saw.
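
For context: under CONFIG_PARAVIRT, plain SWAPGS is a patch site that
may be turned into an indirect call into pv_cpu_ops.swapgs, and that
call pushes a return address (and possibly register-saving pushes)
onto whatever stack is live, whereas SWAPGS_UNSAFE_STACK is required
to stay off the stack.  A rough sketch of the distinction, loosely
paraphrased from the 3.19-era arch/x86/include/asm/irqflags.h and
arch/x86/include/asm/paravirt.h (sketch only, the exact macro bodies
may differ):

	/* !CONFIG_PARAVIRT (arch/x86/include/asm/irqflags.h): both
	 * forms are the bare instruction and never touch the stack.
	 */
	#define SWAPGS			swapgs
	#define SWAPGS_UNSAFE_STACK	swapgs

	/* CONFIG_PARAVIRT (arch/x86/include/asm/paravirt.h): plain
	 * SWAPGS may be patched into an indirect call, which pushes a
	 * return address on the current stack...
	 */
	#define SWAPGS							\
		PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_swapgs),	\
			  CLBR_NONE,					\
			  call PARA_INDIRECT(pv_cpu_ops+PV_CPU_swapgs))

	/* ...while SWAPGS_UNSAFE_STACK must not touch the stack at
	 * all, which is what the NMI entry path needs before it has
	 * moved off the NMI stack.
	 */
	#define SWAPGS_UNSAFE_STACK					\
		PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_swapgs),	\
			  CLBR_NONE, swapgs)

That is what the comment added below is about: at this point a single
stray push on the NMI stack can land on the "NMI executing" variable.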

Fixes: 9b6e6a8334d5 ("x86/nmi/64: Switch stacks on userspace NMI entry")
Reported-and-tested-by: Sasha Levin <sasha.levin at oracle.com>
Signed-off-by: Andy Lutomirski <luto at kernel.org>
Link: http://lkml.kernel.org/r/974bc40edffdb5c2950a5c4977f821a446b76178.1442791737.git.luto@kernel.org
Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
[ luis: backported to 3.16:
  - file rename: arch/x86/entry/entry_64.S -> arch/x86/kernel/entry_64.S
  - adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
 arch/x86/kernel/entry_64.S | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index f8f94d4..9072010 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1504,9 +1504,12 @@ ENTRY(nmi)
 	 * we don't want to enable interrupts, because then we'll end
 	 * up in an awkward situation in which IRQs are on but NMIs
 	 * are off.
+	 *
+	 * We also must not push anything to the stack before switching
+	 * stacks lest we corrupt the "NMI executing" variable.
 	 */
 
-	SWAPGS
+	SWAPGS_UNSAFE_STACK
 	cld
 	movq	%rsp, %rdx
 	movq	PER_CPU_VAR(kernel_stack), %rsp
-- 
1.9.1
