Ack: Re: [maverick/ti-omap4 CVE 1/1] Sched: fix skip_clock_update optimization
Herton Ronaldo Krzesinski
herton.krzesinski at canonical.com
Tue Jan 3 19:33:45 UTC 2012
On Tue, Jan 03, 2012 at 07:14:19PM +0000, Andy Whitcroft wrote:
> From: Mike Galbraith <efault at gmx.de>
>
> commit f26f9aff6aaf67e9a430d16c266f91b13a5bff64 upstream.
>
> idle_balance() drops/retakes rq->lock, leaving the previous task
> vulnerable to set_tsk_need_resched(). Clear it after we return
> from balancing instead, and in setup_thread_stack() as well, so
> no successfully descheduled or never scheduled task has it set.
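
To make the window concrete while reviewing: a tiny userspace model of
the sequence being described (illustrative only; the bools stand in for
p->se.on_rq and TIF_NEED_RESCHED, nothing here is scheduler code):

    #include <stdio.h>
    #include <stdbool.h>

    struct task { bool on_rq; bool need_resched; };

    int main(void)
    {
        struct task prev = { .on_rq = true, .need_resched = false };

        /* schedule(): prev blocks and is dequeued under rq->lock */
        prev.on_rq = false;

        /* idle_balance() drops rq->lock here; a remote wakeup can run
         * set_tsk_need_resched() on the already-dequeued prev ... */
        prev.need_resched = true;

        /* ... before the lock is retaken: prev is descheduled yet still
         * carries the flag, which is exactly the state the patch clears
         * after balancing and in setup_thread_stack(). */
        printf("prev: on_rq=%d need_resched=%d\n",
               prev.on_rq, prev.need_resched);
        return 0;
    }
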
>
> Need resched confused the skip_clock_update logic, which assumes
> that the next call to update_rq_clock() will come nearly immediately
> after being set. Make the optimization robust against the "waking a
> sleeper before it successfully deschedules" case by checking that
> the current task has not been dequeued before setting the flag,
> since it is that useless clock update we're trying to save, and
> clear unconditionally in schedule() proper instead of conditionally
> in put_prev_task().
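
A sketch of the before/after test in check_preempt_curr(), as a
standalone model (field names borrowed from the patch; assume a task
that was dequeued and then picked up a stale need_resched as above):

    #include <stdio.h>
    #include <stdbool.h>

    struct task { bool on_rq; bool need_resched; };

    /* old: only the need_resched flag was consulted */
    static bool old_skip(const struct task *t)
    {
        return t->need_resched;
    }

    /* new: also require that the task is still queued, since the clock
     * update being saved only exists for a still-running rq->curr */
    static bool new_skip(const struct task *t)
    {
        return t->on_rq && t->need_resched;
    }

    int main(void)
    {
        struct task curr = { .on_rq = false, .need_resched = true };

        printf("old: skip=%d (wrongly skips the clock update)\n",
               old_skip(&curr));
        printf("new: skip=%d (the update happens as it should)\n",
               new_skip(&curr));
        return 0;
    }
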
>
> Signed-off-by: Mike Galbraith <efault at gmx.de>
> Signed-off-by: Andi Kleen <ak at linux.intel.com>
> Reported-by: Bjoern B. Brandenburg <bbb.lst at gmail.com>
> Tested-by: Yong Zhang <yong.zhang0 at gmail.com>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
> LKML-Reference: <1291802742.1417.9.camel at marge.simson.net>
> Signed-off-by: Ingo Molnar <mingo at elte.hu>
> Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
> Signed-off-by: Brad Figg <brad.figg at canonical.com>
>
> CVE-2011-4621
> BugLink: http://bugs.launchpad.net/bugs/911401
> Signed-off-by: Andy Whitcroft <apw at canonical.com>
> ---
> kernel/fork.c | 1 +
> kernel/sched.c | 6 +++---
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 49ffdad..1819195 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -273,6 +273,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig)
>
> setup_thread_stack(tsk, orig);
> clear_user_return_notifier(tsk);
> + clear_tsk_need_resched(tsk);
> stackend = end_of_stack(tsk);
> *stackend = STACK_END_MAGIC; /* for overflow detection */
>
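
The fork.c hunk is needed because setup_thread_stack() byte-copies the
parent's thread_info into the child, flags included; a parent carrying
TIF_NEED_RESCHED would hand the stale flag to a task that has never been
scheduled. Roughly (userspace model of the copy, not the kernel helpers):

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    struct thread_info { bool need_resched; };

    int main(void)
    {
        struct thread_info parent = { .need_resched = true };
        struct thread_info child;

        /* setup_thread_stack(): child's thread_info starts as a copy */
        memcpy(&child, &parent, sizeof(child));
        printf("child inherits stale flag: %d\n", child.need_resched);

        /* the clear_tsk_need_resched(tsk) added by the hunk above */
        child.need_resched = false;
        printf("after the clear: %d\n", child.need_resched);
        return 0;
    }
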
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 63b4a14..87e47d0 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -564,7 +564,7 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
> * A queue event has occurred, and we're going to schedule. In
> * this case, we can save a useless back to back clock update.
> */
> - if (test_tsk_need_resched(p))
> + if (rq->curr->se.on_rq && test_tsk_need_resched(rq->curr))
> rq->skip_clock_update = 1;
> }
>
> @@ -3536,7 +3536,6 @@ static void put_prev_task(struct rq *rq, struct task_struct *prev)
> {
> if (prev->se.on_rq)
> update_rq_clock(rq);
> - rq->skip_clock_update = 0;
> prev->sched_class->put_prev_task(rq, prev);
> }
>
> @@ -3599,7 +3598,6 @@ need_resched_nonpreemptible:
> hrtick_clear(rq);
>
> raw_spin_lock_irq(&rq->lock);
> - clear_tsk_need_resched(prev);
>
> if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
> if (unlikely(signal_pending_state(prev->state, prev)))
> @@ -3616,6 +3614,8 @@ need_resched_nonpreemptible:
>
> put_prev_task(rq, prev);
> next = pick_next_task(rq);
> + clear_tsk_need_resched(prev);
> + rq->skip_clock_update = 0;
>
> if (likely(prev != next)) {
> sched_info_switch(prev, next);
> --
> 1.7.5.4
>
>
> --
> kernel-team mailing list
> kernel-team at lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/kernel-team
>

Acked-by: Herton Ronaldo Krzesinski <herton.krzesinski at canonical.com>