[3.16.y-ckt stable] Patch "time: Avoid signed overflow in timekeeping_get_ns()" has been added to the 3.16.y-ckt tree

Luis Henriques luis.henriques at canonical.com
Wed Feb 3 13:57:45 UTC 2016


This is a note to let you know that I have just added a patch titled

    time: Avoid signed overflow in timekeeping_get_ns()

to the linux-3.16.y-queue branch of the 3.16.y-ckt extended stable tree 
which can be found at:

    http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.16.y-queue

This patch is scheduled to be released in version 3.16.7-ckt24.

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.16.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Luis

---8<------------------------------------------------------------

From 1b69e00de62060e7426702dd5799d88953b96757 Mon Sep 17 00:00:00 2001
From: David Gibson <david at gibson.dropbear.id.au>
Date: Mon, 30 Nov 2015 12:30:30 +1100
Subject: time: Avoid signed overflow in timekeeping_get_ns()

commit 35a4933a895927990772ae96fdcfd2f806929ee2 upstream.

1e75fa8 "time: Condense timekeeper.xtime into xtime_sec" replaced a call to
clocksource_cyc2ns() from timekeeping_get_ns() with an open-coded version
of the same logic to avoid keeping a semi-redundant struct timespec
in struct timekeeper.

However, the commit also introduced a subtle semantic change: where
clocksource_cyc2ns() uses purely unsigned math, the new version introduces
a signed temporary, so if (delta * tk->mult) overflows into bit 63 the
following shift still gives a negative result.  The choice of 'maxsec' in
__clocksource_updatefreq_scale() means this will generally happen if there's
a ~10 minute pause in examining the clocksource.
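
To illustrate the difference, here is a minimal standalone userspace sketch
(not part of the patch; the values are made up so that the product just
reaches 2^63):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* made-up values chosen so cycle_delta * mult just reaches 2^63 */
            uint64_t cycle_delta = 1ULL << 39;
            uint32_t mult = 1U << 24;
            uint32_t shift = 24;

            /* what the 3.16 code effectively does: a signed temporary.
             * The u64 -> s64 conversion and the right shift of a negative
             * value are implementation-defined in ISO C, but behave as
             * shown below with gcc, which is what the kernel relies on. */
            int64_t snsec = (int64_t)(cycle_delta * mult); /* bit 63 set -> negative */
            snsec >>= shift;                               /* stays negative */

            /* what the patch does: keep the whole computation unsigned */
            uint64_t unsec = (cycle_delta * mult) >> shift; /* large positive value */

            printf("signed:   %lld\n", (long long)snsec);
            printf("unsigned: %llu\n", (unsigned long long)unsec);
            return 0;
    }

With these values the signed path prints -549755813888 while the unsigned
path prints 549755813888, i.e. the signed temporary makes time appear to
jump backwards.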

This can be triggered on a powerpc KVM guest by stopping it from qemu for
a bit over 10 minutes.  After resuming, time has jumped backwards by several
minutes, causing numerous problems (jiffies does not advance, msleep()s can
be extended by minutes, and so on).  It doesn't happen on x86 KVM guests,
because the guest TSC is effectively frozen while the guest is stopped,
which is not the case for the powerpc timebase.

Obviously an unsigned (64-bit) overflow will only take twice as long to
occur as a signed 63-bit overflow.  I don't know the time code well enough
to know if that would still cause incorrect calculations, or if a 64-bit
overflow is avoided elsewhere.

Still, an incorrect forwards clock adjustment will cause less trouble than
time going backwards.  So, this patch removes the potential for
intermediate signed overflow.

Suggested-by: Laurent Vivier <lvivier at redhat.com>
Tested-by: Laurent Vivier <lvivier at redhat.com>
Signed-off-by: David Gibson <david at gibson.dropbear.id.au>
Signed-off-by: John Stultz <john.stultz at linaro.org>
[ luis: backported to 3.16: adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
 kernel/time/timekeeping.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 32d8d6aaedb8..268428930c99 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -178,8 +178,7 @@ static inline s64 timekeeping_get_ns(struct timekeeper *tk)
 	/* calculate the delta since the last update_wall_time: */
 	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;

-	nsec = cycle_delta * tk->mult + tk->xtime_nsec;
-	nsec >>= tk->shift;
+	nsec = (cycle_delta * tk->mult + tk->xtime_nsec) >> tk->shift;

 	/* If arch requires, add in get_arch_timeoffset() */
 	return nsec + get_arch_timeoffset();



