ACK: [SRU][Artful][PATCH 1/1] mm/pagewalk.c: report holes in hugetlb ranges
Colin Ian King
colin.king at canonical.com
Wed Jan 31 17:28:16 UTC 2018
On 31/01/18 17:01, Kleber Sacilotto de Souza wrote:
> From: Jann Horn <jannh at google.com>
>
> This matters at least for the mincore syscall, which will otherwise copy
> uninitialized memory from the page allocator to userspace. It is
> probably also a correctness error for /proc/$pid/pagemap, but I haven't
> tested that.
>
> Removing the `walk->hugetlb_entry` condition in walk_hugetlb_range() has
> no effect because the caller already checks for that.
>
> This only reports holes in hugetlb ranges to callers who have specified
> a hugetlb_entry callback.
>
> This issue was found using an AFL-based fuzzer.
>
> v2:
> - don't crash on ->pte_hole==NULL (Andrew Morton)
> - add Cc stable (Andrew Morton)
>
> Fixes: 1e25a271c8ac ("mincore: apply page table walker on do_mincore()")
> Signed-off-by: Jann Horn <jannh at google.com>
> Cc: <stable at vger.kernel.org>
> Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
>
> CVE-2017-16994
> (cherry picked from commit 373c4557d2aa362702c4c2d41288fb1e54990b7c)
> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza at canonical.com>
> ---
> mm/pagewalk.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 1a4197965415..7d973f63088c 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -187,8 +187,12 @@ static int walk_hugetlb_range(unsigned long addr, unsigned long end,
> 	do {
> 		next = hugetlb_entry_end(h, addr, end);
> 		pte = huge_pte_offset(walk->mm, addr & hmask, sz);
> -		if (pte && walk->hugetlb_entry)
> +
> +		if (pte)
> 			err = walk->hugetlb_entry(pte, hmask, addr, next, walk);
> +		else if (walk->pte_hole)
> +			err = walk->pte_hole(addr, next, walk);
> +
> 		if (err)
> 			break;
> 	} while (addr = next, addr != end);
>
Clean cherry pick. Looks OK to me.
Acked-by: Colin Ian King <colin.king at canonical.com>