[SRU][B][D][Patch 2/2] powerpc/tm: Fix restoring FP/VMX facility incorrectly on interrupts

frank.heimes at canonical.com
Wed Sep 11 13:58:37 UTC 2019

From: Gustavo Romero <gromero at linux.ibm.com>


BugLink: https://bugs.launchpad.net/bugs/1843533

When in userspace and MSR FP=0 the hardware FP state is unrelated to
the current process. This is extended for transactions where if tbegin
is run with FP=0, the hardware checkpoint FP state will also be
unrelated to the current process. Due to this, we need to ensure this
hardware checkpoint is updated with the correct state before we enable
FP for this process.
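
As a concrete illustration of that invariant, here is a small stand-alone C model (not kernel code; names like toy_task and toy_enable_fp are invented for illustration): a transactional task has a live FP set and a checkpointed FP set, and if the transaction began with MSR FP=0 the checkpoint must be rewritten from the task's saved state before FP is enabled:

#include <stdbool.h>
#include <string.h>

struct fp_set { double fpr[32]; };

struct toy_task {
	bool tm_active;          /* inside a transaction               */
	bool fp_enabled;         /* models MSR[FP]                     */
	bool ckpt_fp_valid;      /* checkpoint holds *our* FP regs     */
	struct fp_set saved_fp;  /* kernel's copy of the task's regs   */
	struct fp_set hw_ckpt;   /* models the hardware TM checkpoint  */
};

static void toy_enable_fp(struct toy_task *t)
{
	if (t->tm_active && !t->ckpt_fp_valid) {
		/* Refresh the checkpoint first; otherwise a rollback
		 * would expose stale, unrelated register contents. */
		memcpy(&t->hw_ckpt, &t->saved_fp, sizeof(t->hw_ckpt));
		t->ckpt_fp_valid = true;
	}
	t->fp_enabled = true;
}

int main(void)
{
	struct toy_task t = { .tm_active = true };
	toy_enable_fp(&t);  /* checkpoint refreshed before FP=1 */
	return 0;
}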

Unfortunately we get this wrong when returning to a process from a
hardware interrupt. A process that starts a transaction with FP=0 can
take an interrupt. When the kernel returns back to that process, we
change to FP=1 but with hardware checkpoint FP state not updated. If
this transaction is then rolled back, the FP registers now contain the
wrong state.

The process looks like this:
   Userspace:                      Kernel

               Start userspace
                with MSR FP=0 TM=1
                  < -----
   ...
   tbegin
   bne
               Hardware interrupt
                   ---- >
                                    <do_IRQ...>
                                      ....
                                    ret_from_except
                                      restore_math()
                                        /* sees FP=0 */
                                        restore_fp()
                                          tm_active_with_fp()
                                            /* sees FP=1 (Incorrect) */
                                          load_fp_state()
                                        FP = 0 -> 1
                  < -----
               Return to userspace
                 with MSR TM=1 FP=1
                 with junk in the FP TM checkpoint
   TM rollback
   reads FP junk

When returning from the hardware exception, tm_active_with_fp() is
incorrectly making restore_fp() call load_fp_state(), flipping FP from
0 to 1 even though the hardware TM checkpoint still holds unrelated FP
state.

The fix is to remove tm_active_with_fp().

tm_active_with_fp() is attempting to handle the case where FP state
has been changed inside a transaction. In this case the checkpointed
and transactional FP state is different and hence we must restore the
FP state (i.e. we can't do lazy FP restore inside a transaction that
has used FP). It's safe to remove tm_active_with_fp() as this case is
handled by restore_tm_state(). restore_tm_state() detects if FP has
been used inside a transaction and will set load_fp and call
restore_math() to ensure the FP state (checkpointed and transactional)
is restored.
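
That split can be modelled in a few lines of plain C (a sketch only; the struct and helpers below are invented for illustration, though the two predicates mirror the before/after conditions in restore_fp()):

#include <stdbool.h>
#include <stdio.h>

struct thread_flags {
	bool load_fp;      /* lazy-restore flag: task has live FP state */
	bool tm_active;    /* models MSR_TM_ACTIVE(regs->msr)           */
	bool ckpt_msr_fp;  /* models ckpt_regs.msr & MSR_FP             */
};

/* Before the fix: also fired for an active transaction whose
 * checkpointed MSR had FP set, loading live FP state without the
 * hardware checkpoint ever being rewritten. */
static bool restore_fp_before(const struct thread_flags *t)
{
	return t->load_fp || (t->tm_active && t->ckpt_msr_fp);
}

/* After the fix: only load_fp is consulted here. The transactional
 * case goes through restore_tm_state(), which sets load_fp and calls
 * restore_math() so the checkpointed and transactional state are
 * restored together. */
static bool restore_fp_after(const struct thread_flags *t)
{
	return t->load_fp;
}

int main(void)
{
	/* The bug scenario: active transaction, FP in the checkpointed
	 * MSR, but no lazy-restore pending. */
	struct thread_flags t = { .load_fp = false, .tm_active = true,
				  .ckpt_msr_fp = true };
	printf("restore here? before: %d, after: %d\n",
	       restore_fp_before(&t), restore_fp_after(&t));
	return 0;
}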

This is a data integrity problem for the current process as the FP
registers are corrupted. It's also a security problem as the FP
registers from one process may be leaked to another.

Similarly for VMX.

A simple testcase to replicate this will be posted to
tools/testing/selftests/powerpc/tm/tm-poison.c
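
Until then, a rough sketch of the idea behind such a test is below. It is illustrative only, not the selftest itself: it assumes a ppc64le host with HTM, gcc -mhtm, that the process has not used FP before the tbegin (so the kernel still has it running with MSR FP=0), and that an interrupt lands inside the transaction; a real test has to arrange those conditions deliberately.

#include <htmintrin.h>
#include <stdio.h>

int main(void)
{
	double leaked = 0.0;

	if (__builtin_tbegin(0)) {
		/* Transaction body: abort immediately so execution
		 * rolls back to the checkpoint taken at tbegin. */
		__builtin_tabort(0);
	}

	/* After rollback the checkpointed FP state is live. On a
	 * buggy kernel it may hold another process's registers. */
	asm volatile("stfd 31, %0" : "=m" (leaked));
	printf("f31 after rollback: %g\n", leaked);
	return 0;
}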

This fixes CVE-2019-15031.

Fixes: a7771176b439 ("powerpc: Don't enable FP/Altivec if not checkpointed")
Cc: stable at vger.kernel.org # 4.15+
Signed-off-by: Gustavo Romero <gromero at linux.ibm.com>
Signed-off-by: Michael Neuling <mikey at neuling.org>
Signed-off-by: Michael Ellerman <mpe at ellerman.id.au>
Link: https://lore.kernel.org/r/20190904045529.23002-2-gromero@linux.vnet.ibm.com
(cherry picked from commit a8318c13e79badb92bc6640704a64cc022a6eb97)
Signed-off-by: Frank Heimes <frank.heimes at canonical.com>
---
 arch/powerpc/kernel/process.c | 23 ++---------------------
 1 file changed, 2 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 4538bf2..a64fc64 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -100,27 +100,9 @@ static void check_if_tm_restore_required(struct task_struct *tsk)
 	}
 }
 
-static inline bool msr_tm_active(unsigned long msr)
-{
-	return MSR_TM_ACTIVE(msr);
-}
-
-static bool tm_active_with_fp(struct task_struct *tsk)
-{
-	return msr_tm_active(tsk->thread.regs->msr) &&
-		(tsk->thread.ckpt_regs.msr & MSR_FP);
-}
-
-static bool tm_active_with_altivec(struct task_struct *tsk)
-{
-	return msr_tm_active(tsk->thread.regs->msr) &&
-		(tsk->thread.ckpt_regs.msr & MSR_VEC);
-}
 #else
 static inline bool msr_tm_active(unsigned long msr) { return false; }
 static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
-static inline bool tm_active_with_fp(struct task_struct *tsk) { return false; }
-static inline bool tm_active_with_altivec(struct task_struct *tsk) { return false; }
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
 
 bool strict_msr_control;
@@ -253,7 +235,7 @@ EXPORT_SYMBOL(enable_kernel_fp);
 
 static int restore_fp(struct task_struct *tsk)
 {
-	if (tsk->thread.load_fp || tm_active_with_fp(tsk)) {
+	if (tsk->thread.load_fp) {
 		load_fp_state(&current->thread.fp_state);
 		current->thread.load_fp++;
 		return 1;
@@ -334,8 +316,7 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
 
 static int restore_altivec(struct task_struct *tsk)
 {
-	if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
-		(tsk->thread.load_vec || tm_active_with_altivec(tsk))) {
+	if (cpu_has_feature(CPU_FTR_ALTIVEC) && (tsk->thread.load_vec)) {
 		load_vr_state(&tsk->thread.vr_state);
 		tsk->thread.used_vr = 1;
 		tsk->thread.load_vec++;
