ACK: [SRU Jammy, OEM-5.17, Kinetic, OEM-6.0 PATCH 0/5] CVE-2023-0597

Tim Gardner tim.gardner at canonical.com
Thu Jun 8 15:55:22 UTC 2023


On 6/7/23 8:10 PM, Cengiz Can wrote:
> [Impact]
> A flaw was found in the Linux kernel's cpu_entry_area mapping of x86 per-CPU
> data: because the mapping is placed at a predictable location, a local user
> can guess the location of exception stack(s) or other important data. A local
> user could use this flaw to gain access to important data that sits at a
> predictable location in memory.
> 
> [Fix]
> Although the fix was initially announced as a single patch, it turned out to
> require 5 consecutive patches that indirectly depend on each other.
> 
> Following is a prerequisite for the fix:
> 
> - 3f148f331814 ("x86/kasan: Map shadow for percpu pages on demand")
> 
> This is the actual fix, which needed an additional `#include <linux/random.h>`
> on kernels below 6.0 (a toy sketch of the idea follows the commit references
> below):
> 
> - 97e3d26b5e5f ("x86/mm: Randomize per-cpu entry area")
> 
> These are follow-up fixes for the prerequisite:
> 
> - 80d72a8f76e8 ("x86/mm: Recompute physical address for every page of per-CPU CEA mapping")
> - 97650148a15e ("x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area")
> 
> This is a follow-up fix for the fix itself:
> 
> - a3f547addcaa ("x86/mm: Do not shuffle CPU entry areas without KASLR")
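> 
> For illustration only (not code from the series; the CPU count, slot count,
> and names below are made-up example values): a minimal userspace C sketch of
> the combined idea -- pick a distinct random entry-area slot per CPU, but only
> when randomization (KASLR) is enabled, otherwise keep the fixed layout.
> 
>   /* Toy model of per-CPU entry-area slot randomization (illustrative only). */
>   #include <stdbool.h>
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <time.h>
> 
>   #define NR_CPUS  8      /* example CPU count */
>   #define MAX_CEA  512    /* example number of available entry-area slots */
> 
>   static unsigned int cea_offset[NR_CPUS];   /* slot chosen for each CPU */
> 
>   static void init_cea_offsets(bool randomize)
>   {
>       if (!randomize)
>           return;   /* no shuffling without KASLR: every CPU keeps slot 0 */
> 
>       for (unsigned int i = 0; i < NR_CPUS; i++) {
>           unsigned int cea, j;
>   again:
>           cea = (unsigned int)rand() % MAX_CEA;
>           /* reject a slot already handed out to an earlier CPU */
>           for (j = 0; j < i; j++) {
>               if (cea_offset[j] == cea)
>                   goto again;
>           }
>           cea_offset[i] = cea;
>       }
>   }
> 
>   int main(void)
>   {
>       srand((unsigned int)time(NULL));
>       init_cea_offsets(true);
>       for (unsigned int i = 0; i < NR_CPUS; i++)
>           printf("CPU %u -> entry-area slot %u\n", i, cea_offset[i]);
>       return 0;
>   }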
> 
> [Test case]
> Compile and boot tested, with and without `nokaslr` boot argument.
> 
> [Potential regression]
> Critical.
> A regression here might prevent the kernel from booting properly, and it
> would affect all users.
> 
> Andrey Ryabinin (1):
>    x86/kasan: Map shadow for percpu pages on demand
> 
> Michal Koutný (1):
>    x86/mm: Do not shuffle CPU entry areas without KASLR
> 
> Peter Zijlstra (1):
>    x86/mm: Randomize per-cpu entry area
> 
> Sean Christopherson (2):
>    x86/mm: Recompute physical address for every page of per-CPU CEA
>      mapping
>    x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry
>      area
> 
>   arch/x86/include/asm/cpu_entry_area.h |  4 --
>   arch/x86/include/asm/kasan.h          |  3 ++
>   arch/x86/include/asm/pgtable_areas.h  |  8 +++-
>   arch/x86/kernel/hw_breakpoint.c       |  2 +-
>   arch/x86/mm/cpu_entry_area.c          | 57 +++++++++++++++++++++++++--
>   arch/x86/mm/kasan_init_64.c           | 15 +++++--
>   6 files changed, 76 insertions(+), 13 deletions(-)
> 
Acked-by: Tim Gardner <tim.gardner at canonical.com>
-- 
-----------
Tim Gardner
Canonical, Inc



