[SRU][F][PATCH v2 1/1] powerpc/kasan: Fix addr error caused by page alignment
Bethany Jamison
bethany.jamison at canonical.com
Tue Apr 23 20:47:16 UTC 2024
From: Jiangfeng Xiao <xiaojiangfeng at huawei.com>
In kasan_init_region(), when k_start is not page aligned, the first
iteration of the for loop computes k_cur = k_start & PAGE_MASK, which is
less than k_start, so `va = block + k_cur - k_start` points below block.
That address is invalid: the range from va up to block was not allocated
by memblock_alloc() and is therefore never reserved by memblock_reserve()
later, so it can be handed out to other users.
As a result, memory overwriting occurs.

For example:
int __init __weak kasan_init_region(void *start, size_t size)
{
	[...]
	/* if say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* at the beginning of the loop:
		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
		 * va(dcd96c00) is less than block(dcd97000), so va is invalid
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
	[...]
}
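To make the failing arithmetic concrete, here is a minimal stand-alone
sketch (not part of the patch) that replays the pointer math with the
example addresses above; the 4 KiB page size and the demo program itself
are assumptions for illustration only:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000UL		/* assumed 4 KiB pages */
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	/* Example addresses taken from the commit message above. */
	uintptr_t block   = 0xdcd97000;	/* returned by memblock_alloc() */
	uintptr_t k_start = 0xfeef7400;	/* not page aligned */

	/* The first loop iteration rounds k_cur *down* to a page boundary. */
	uintptr_t k_cur = k_start & PAGE_MASK;		/* 0xfeef7000 */
	uintptr_t va    = block + k_cur - k_start;	/* 0xdcd96c00 */

	/* va sits 0x400 bytes below block, outside the allocation. */
	printf("va=%#lx block=%#lx va<block:%d\n",
	       (unsigned long)va, (unsigned long)block, va < block);
	return 0;
}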
Therefore, perform page alignment on k_start before calling
memblock_alloc(), so that every va computed in the loop stays within the
allocated block.
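Continuing the same illustrative sketch (again an assumption-laden demo,
not the patch itself), aligning k_start down before sizing the allocation
both grows the allocation to cover the leading partial page and makes the
first-iteration va land exactly at block:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000UL		/* assumed 4 KiB pages */
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	uintptr_t block   = 0xdcd97000;
	uintptr_t k_start = 0xfeef7400;
	uintptr_t k_end   = 0xfeeff3fe;

	/* The fix: round k_start down before sizing the allocation. */
	k_start &= PAGE_MASK;				/* 0xfeef7000 */
	size_t size = k_end - k_start;			/* 0x400 bytes larger */

	/* The first loop iteration now starts exactly at block. */
	uintptr_t k_cur = k_start & PAGE_MASK;		/* 0xfeef7000 */
	uintptr_t va    = block + k_cur - k_start;	/* 0xdcd97000 == block */

	printf("size=%#zx va=%#lx\n", size, (unsigned long)va);
	return 0;
}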
Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")
Signed-off-by: Jiangfeng Xiao <xiaojiangfeng at huawei.com>
Signed-off-by: Michael Ellerman <mpe at ellerman.id.au>
Link: https://msgid.link/1705974359-43790-1-git-send-email-xiaojiangfeng@huawei.com
(backported from commit 4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0)
[bjamison: context conflict - added k_start realignment to appropriate spot in code]
CVE-2024-26712
Signed-off-by: Bethany Jamison <bethany.jamison at canonical.com>
---
arch/powerpc/mm/kasan/kasan_init_32.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 3f78007a72822..84b0bd1b8ff3b 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -90,8 +90,10 @@ static int __ref kasan_init_region(void *start, size_t size)
 	if (ret)
 		return ret;
 
-	if (!slab_is_available())
+	if (!slab_is_available()) {
+		k_start = k_start & PAGE_MASK;
 		block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+	}
 
 	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
 		pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
--
2.34.1