[3.13.y-ckt stable] Patch "x86/cpu: Call verify_cpu() after having entered long mode too" has been added to staging queue
Kamal Mostafa
kamal at canonical.com
Wed Dec 2 22:53:44 UTC 2015
This is a note to let you know that I have just added a patch titled
x86/cpu: Call verify_cpu() after having entered long mode too
to the linux-3.13.y-queue branch of the 3.13.y-ckt extended stable tree
which can be found at:
http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.13.y-queue
This patch is scheduled to be released in version 3.13.11-ckt31.
If you, or anyone else, feel it should not be added to this tree, please
reply to this email.
For more information about the 3.13.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Kamal
------
From 8afa2d463b3f9a1bd9cfb3bde658ea7d58f5c88c Mon Sep 17 00:00:00 2001
From: Borislav Petkov <bp at suse.de>
Date: Thu, 5 Nov 2015 16:57:56 +0100
Subject: x86/cpu: Call verify_cpu() after having entered long mode too
commit 04633df0c43d710e5f696b06539c100898678235 upstream.
When we get loaded by a 64-bit bootloader, the kernel entry point is
startup_64 in head_64.S. We don't trust any bootloader unconditionally,
because some fiddle with the CPU configuration, so we go ahead and
massage each CPU back into sanity again.
For example, some Dell BIOSes have an XD-disable feature which sets
IA32_MISC_ENABLE[34] and disables NX. This might be some dumb workaround
for other OSes, but Linux sure doesn't need it.
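Clearing the bit again early in boot is all it takes; here is a sketch
of the rdmsr/wrmsr dance (illustrative only, not part of this diff; bit
34 of the 64-bit MSR value lands in bit 2 of the high word in %edx):
	movl	$MSR_IA32_MISC_ENABLE, %ecx	# MSR 0x1a0
	rdmsr					# value in %edx:%eax
	btrl	$2, %edx			# bit 34 == bit 2 of %edx; CF = old value
	jnc	1f				# already clear, skip the write
	wrmsr					# write back with XD disable cleared
1: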
A similar thing is present in the Surface 3 firmware - see
https://bugzilla.kernel.org/show_bug.cgi?id=106051 - which sets this bit
only on the BSP:
# rdmsr -a 0x1a0
400850089
850089
850089
850089
I know, right?!
There's not even an off switch in there.
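For the record, the difference is exactly bit 34:
0x400850089 - 0x850089 = 0x400000000 = 1 << 34, i.e. XD disable is set
only on the boot CPU.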
So fix all those cases by sanitizing the 64-bit entry point too. For
that, make verify_cpu() callable in 64-bit mode as well.
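Note the flags save/restore change below: pushfl/popfl are 32-bit-only
mnemonics and won't even assemble in 64-bit code, whereas the
suffix-less pushf/popf pick the natural width of the current mode. An
illustrative sketch (not part of this diff):
	.code32
	pushf			# assembles as 32-bit pushfl
	popf
	.code64
	pushf			# assembles as 64-bit pushfq
	popf
That, plus guarding the EFLAGS.ID toggle with #ifndef __x86_64__ (a CPU
that is already in long mode by definition has CPUID), lets the same
source be included from both the 32-bit and the 64-bit entry paths.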
Requested-and-debugged-by: "H. Peter Anvin" <hpa at zytor.com>
Reported-and-tested-by: Bastien Nocera <bugzilla at hadess.net>
Signed-off-by: Borislav Petkov <bp at suse.de>
Cc: Matt Fleming <matt at codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz at infradead.org>
Link: http://lkml.kernel.org/r/1446739076-21303-1-git-send-email-bp@alien8.de
Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
arch/x86/kernel/head_64.S | 8 ++++++++
arch/x86/kernel/verify_cpu.S | 12 +++++++-----
2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a2dc0ad..761fd69 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -65,6 +65,9 @@ startup_64:
* tables and then reload them.
*/
+ /* Sanitize CPU configuration */
+ call verify_cpu
+
/*
* Compute the delta between the address I am compiled to run at and the
* address I am actually running at.
@@ -174,6 +177,9 @@ ENTRY(secondary_startup_64)
* after the boot processor executes this code.
*/
+ /* Sanitize CPU configuration */
+ call verify_cpu
+
movq $(init_level4_pgt - __START_KERNEL_map), %rax
1:
@@ -288,6 +294,8 @@ ENTRY(secondary_startup_64)
pushq %rax # target address in negative space
lretq
+#include "verify_cpu.S"
+
#ifdef CONFIG_HOTPLUG_CPU
/*
* Boot CPU0 entry point. It's called from play_dead(). Everything has been set
diff --git a/arch/x86/kernel/verify_cpu.S b/arch/x86/kernel/verify_cpu.S
index b9242ba..4cf401f 100644
--- a/arch/x86/kernel/verify_cpu.S
+++ b/arch/x86/kernel/verify_cpu.S
@@ -34,10 +34,11 @@
#include <asm/msr-index.h>
verify_cpu:
- pushfl # Save caller passed flags
- pushl $0 # Kill any dangerous flags
- popfl
+ pushf # Save caller passed flags
+ push $0 # Kill any dangerous flags
+ popf
+#ifndef __x86_64__
pushfl # standard way to check for cpuid
popl %eax
movl %eax,%ebx
@@ -48,6 +49,7 @@ verify_cpu:
popl %eax
cmpl %eax,%ebx
jz verify_cpu_no_longmode # cpu has no cpuid
+#endif
movl $0x0,%eax # See if cpuid 1 is implemented
cpuid
@@ -130,10 +132,10 @@ verify_cpu_sse_test:
jmp verify_cpu_sse_test # try again
verify_cpu_no_longmode:
- popfl # Restore caller passed flags
+ popf # Restore caller passed flags
movl $1,%eax
ret
verify_cpu_sse_ok:
- popfl # Restore caller passed flags
+ popf # Restore caller passed flags
xorl %eax, %eax
ret
--
1.9.1