ACK: [SRU Hirsute/Impish 1/1] hugetlbfs: flush TLBs correctly after huge_pmd_unshare

Kleber Souza kleber.souza at canonical.com
Fri Nov 26 09:24:22 UTC 2021


On 26.11.21 02:02, Thadeu Lima de Souza Cascardo wrote:
> From: Nadav Amit <namit at vmware.com>
>
> When the calls to huge_pmd_unshare() made by __unmap_hugepage_range()
> succeed, a TLB flush is missing.  This TLB flush must be performed
> before releasing the i_mmap_rwsem, in order to prevent an unshared
> PMDs page from being released and reused before the TLB flush has
> taken place.
>
> Arguably, a comprehensive solution would use the mmu_gather interface
> to batch the TLB flushes and the PMDs page release; however, it is
> not an easy solution: (1) try_to_unmap_one() and try_to_migrate_one()
> also call huge_pmd_unshare() and they cannot use the mmu_gather
> interface; and (2) deferring the release of the page reference for
> the PMDs page until after i_mmap_rwsem is dropped can confuse
> huge_pmd_unshare() into thinking PMDs are shared when they are not.
>
> Fix __unmap_hugepage_range() by adding the missing TLB flush, and
> forcing a flush when unshare is successful.
>
> Fixes: 24669e58477e ("hugetlb: use mmu_gather instead of a temporary linked list for accumulating pages") # 3.6
> Signed-off-by: Nadav Amit <namit at vmware.com>
> Reviewed-by: Mike Kravetz <mike.kravetz at oracle.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu at jp.fujitsu.com>
> Cc: Andrew Morton <akpm at linux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
> (cherry picked from commit a4a118f2eead1d6c49e00765de89878288d4b890)
> CVE-2021-4002
> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo at canonical.com>

Acked-by: Kleber Sacilotto de Souza <kleber.souza at canonical.com>

Thanks

> ---
>   mm/hugetlb.c | 23 +++++++++++++++++++----
>   1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 32a2ea7c487a..8c899481c8b7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3915,6 +3915,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   	struct hstate *h = hstate_vma(vma);
>   	unsigned long sz = huge_page_size(h);
>   	struct mmu_notifier_range range;
> +	bool force_flush = false;
>   
>   	WARN_ON(!is_vm_hugetlb_page(vma));
>   	BUG_ON(start & ~huge_page_mask(h));
> @@ -3943,10 +3944,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   		ptl = huge_pte_lock(h, mm, ptep);
>   		if (huge_pmd_unshare(mm, vma, &address, ptep)) {
>   			spin_unlock(ptl);
> -			/*
> -			 * We just unmapped a page of PMDs by clearing a PUD.
> -			 * The caller's TLB flush range should cover this area.
> -			 */
> +			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
> +			force_flush = true;
>   			continue;
>   		}
>   
> @@ -4003,6 +4002,22 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   	}
>   	mmu_notifier_invalidate_range_end(&range);
>   	tlb_end_vma(tlb, vma);
> +
> +	/*
> +	 * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We
> +	 * could defer the flush until now, since by holding i_mmap_rwsem we
> +	 * guaranteed that the last reference would not be dropped. But we must
> +	 * do the flushing before we return, as otherwise i_mmap_rwsem will be
> +	 * dropped and the last reference to the shared PMDs page might be
> +	 * dropped as well.
> +	 *
> +	 * In theory we could defer the freeing of the PMD pages as well, but
> +	 * huge_pmd_unshare() relies on the exact page_count for the PMD page to
> +	 * detect sharing, so we cannot defer the release of the page either.
> +	 * Instead, do the flush now.
> +	 */
> +	if (force_flush)
> +		tlb_flush_mmu_tlbonly(tlb);
>   }
>   
>   void __unmap_hugepage_range_final(struct mmu_gather *tlb,
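
For readers less familiar with the shared-PMD code, the shape of the fix
is easier to see with the hunks joined up. Below is a condensed,
annotated sketch of the patched flow, simplified from the diff above; it
elides the mmu_notifier setup and the normal per-page unmap work, so
treat it as an illustration of the patch rather than the exact code.

	bool force_flush = false;

	for (; address < end; address += sz) {
		ptep = huge_pte_offset(mm, address, sz);
		if (!ptep)
			continue;

		ptl = huge_pte_lock(h, mm, ptep);
		if (huge_pmd_unshare(mm, vma, &address, ptep)) {
			spin_unlock(ptl);
			/*
			 * A PUD entry was just cleared, so stale TLB
			 * entries may cover a whole PUD-sized range.
			 * Record that range in the mmu_gather so the
			 * eventual flush is wide enough.
			 */
			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
			force_flush = true;
			continue;
		}

		/* ... unmap the huge page mapped at this address ... */
		spin_unlock(ptl);
	}

	/*
	 * This runs while the caller still holds i_mmap_rwsem: once
	 * that lock is dropped, the last reference to the unshared
	 * PMDs page may be dropped and the page freed while stale TLB
	 * entries can still walk through it.
	 */
	if (force_flush)
		tlb_flush_mmu_tlbonly(tlb);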
