[ 3.5.y.z extended stable ] Patch "ARM: 7627/1: Predicate preempt logic on PREEMPT_COUNT not PREEMPT alone" has been added to staging queue

Herton Ronaldo Krzesinski herton.krzesinski at canonical.com
Wed Jan 30 21:57:11 UTC 2013


This is a note to let you know that I have just added a patch titled

    ARM: 7627/1: Predicate preempt logic on PREEMPT_COUNT not PREEMPT alone

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.5.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Herton

------

From 920fa6b0e1f42b1e522e2114bc4bc2ce71a5a737 Mon Sep 17 00:00:00 2001
From: Stephen Boyd <sboyd at codeaurora.org>
Date: Mon, 14 Jan 2013 19:50:42 +0100
Subject: [PATCH] ARM: 7627/1: Predicate preempt logic on PREEMPT_COUNT not
 PREEMPT alone

commit 568dca15aa2a0f4ddee255894ec393a159f13147 upstream.

Patrik Kluba reports that the preempt count becomes invalid due
to the preempt_enable() call being unbalanced with a
preempt_disable() call in the vfp assembly routines. This happens
because preempt_enable() and preempt_disable() update preempt
counts under PREEMPT_COUNT=y but the vfp assembly routines do so
under PREEMPT=y. In a configuration where PREEMPT=n and
DEBUG_ATOMIC_SLEEP=y, PREEMPT_COUNT=y and so the preempt_enable()
call in VFP_bounce() keeps subtracting from the preempt count
until it goes negative.
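
As an illustration (not part of the patch), the mismatch can be restated
in C; the function names and the plain preempt_count variable below are
stand-ins for the real TI_PREEMPT field and the VFP entry/exit paths:

    /* Illustrative sketch only: models the guard mismatch described above. */
    static int preempt_count;           /* stands in for thread_info's TI_PREEMPT */

    static void vfp_entry(void)         /* models the do_vfp prologue */
    {
    #ifdef CONFIG_PREEMPT               /* old guard: only set for preemptible kernels */
            preempt_count++;
    #endif
    }

    static void vfp_bounce_return(void) /* models preempt_enable() on the exit path */
    {
    #ifdef CONFIG_PREEMPT_COUNT         /* what preempt_enable()/disable() key on */
            preempt_count--;            /* with PREEMPT=n but PREEMPT_COUNT=y this
                                           decrement is never matched by the increment
                                           above, so the count drifts negative */
    #endif
    }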

Fix this by always using PREEMPT_COUNT to decide when to update
preempt counts in the ARM assembly code.
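
In the same illustrative terms as the sketch above, the fix amounts to
keying both sides on the same configuration symbol, which is what the
diff below does for every increment and decrement in entry.S and vfphw.S:

    /* Sketch of the fixed pairing: both sides keyed on CONFIG_PREEMPT_COUNT. */
    static void vfp_entry_fixed(void)
    {
    #ifdef CONFIG_PREEMPT_COUNT         /* same guard preempt_enable()/disable() use */
            preempt_count++;            /* now balanced against the decrement in
                                           VFP_bounce()'s preempt_enable(), however
                                           PREEMPT_COUNT got enabled (PREEMPT=y or
                                           DEBUG_ATOMIC_SLEEP=y) */
    #endif
    }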

Signed-off-by: Stephen Boyd <sboyd at codeaurora.org>
Reported-by: Patrik Kluba <pkluba at dension.com>
Tested-by: Patrik Kluba <pkluba at dension.com>
Signed-off-by: Russell King <rmk+kernel at arm.linux.org.uk>
Signed-off-by: Herton Ronaldo Krzesinski <herton.krzesinski at canonical.com>
---
 arch/arm/vfp/entry.S |    6 +++---
 arch/arm/vfp/vfphw.S |    4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
index cc926c9..323ce1a 100644
--- a/arch/arm/vfp/entry.S
+++ b/arch/arm/vfp/entry.S
@@ -22,7 +22,7 @@
 @  IRQs disabled.
 @
 ENTRY(do_vfp)
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 	ldr	r4, [r10, #TI_PREEMPT]	@ get preempt count
 	add	r11, r4, #1		@ increment it
 	str	r11, [r10, #TI_PREEMPT]
@@ -35,7 +35,7 @@ ENTRY(do_vfp)
 ENDPROC(do_vfp)

 ENTRY(vfp_null_entry)
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 	get_thread_info	r10
 	ldr	r4, [r10, #TI_PREEMPT]	@ get preempt count
 	sub	r11, r4, #1		@ decrement it
@@ -53,7 +53,7 @@ ENDPROC(vfp_null_entry)

 	__INIT
 ENTRY(vfp_testing_entry)
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 	get_thread_info	r10
 	ldr	r4, [r10, #TI_PREEMPT]	@ get preempt count
 	sub	r11, r4, #1		@ decrement it
diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
index 3a0efaa..6ff903e 100644
--- a/arch/arm/vfp/vfphw.S
+++ b/arch/arm/vfp/vfphw.S
@@ -167,7 +167,7 @@ vfp_hw_state_valid:
 					@ else it's one 32-bit instruction, so
 					@ always subtract 4 from the following
 					@ instruction address.
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 	get_thread_info	r10
 	ldr	r4, [r10, #TI_PREEMPT]	@ get preempt count
 	sub	r11, r4, #1		@ decrement it
@@ -191,7 +191,7 @@ look_for_VFP_exceptions:
 	@ not recognised by VFP

 	DBGSTR	"not VFP"
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 	get_thread_info	r10
 	ldr	r4, [r10, #TI_PREEMPT]	@ get preempt count
 	sub	r11, r4, #1		@ decrement it
--
1.7.9.5