ACK/cmnt: [CVE-2017-1000405 trusty] mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
Thadeu Lima de Souza Cascardo
cascardo at canonical.com
Mon Dec 4 10:27:48 UTC 2017
On Mon, Dec 04, 2017 at 11:25:55AM +0100, Stefan Bader wrote:
> On 01.12.2017 22:24, Thadeu Lima de Souza Cascardo wrote:
> > From: "Kirill A. Shutemov" <kirill.shutemov at linux.intel.com>
> >
> > Currently, we unconditionally make page table dirty in touch_pmd().
> > It may result in false-positive can_follow_write_pmd().
> >
> > We can avoid the situation if we only make the page table entry
> > dirty when the caller asks for write access -- FOLL_WRITE.
> >
> > The patch also changes touch_pud() in the same way.
> >
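For context, the false positive is in the FOLL_FORCE/FOLL_COW check that gup
uses to decide whether a forced write through a read-only THP mapping is
allowed. A minimal sketch of that helper, roughly as in the mainline "respect
FOLL_FORCE/FOLL_COW for thp" fix (names as used there):

	/*
	 * A forced write to a read-only mapping is only allowed on a page
	 * that has already been COWed, which gup detects via the dirty bit.
	 * A pmd dirtied by a read-only FOLL_TOUCH therefore passes this
	 * check by mistake.
	 */
	static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
	{
		return pmd_write(pmd) ||
		       ((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
			pmd_dirty(pmd));
	}
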
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov at linux.intel.com>
> > Cc: Michal Hocko <mhocko at suse.com>
> > Cc: Hugh Dickins <hughd at google.com>
> > Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
> > (backported from commit a8f97366452ed491d13cf1e44241bc0b5740b1f0)
> > [cascardo: dropped touch_pud parts]
> > [cascardo: fix in-place of where touch_pmd would be called]
> > CVE-2017-1000405
> > Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo at canonical.com>
> Acked-by: Stefan Bader <stefan.bader at canonical.com>
>
> > ---
> >
> > The reproducer has been tested: it "exploits" successfully without the
> > patch, and fails with the patch applied.
> >
> > touch_pmd was only introduced later as a helper. Here we have a single call
> > site, since follow_devmap_pmd did not exist on xenial or trusty yet, so
> > fixing it in place is just as fine.
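For comparison, the upstream helper after a8f97366452e looks roughly like the
sketch below; the hunk that follows makes the same change directly at our
single call site in follow_trans_huge_pmd:

	/* sketch of upstream touch_pmd() after a8f97366452e */
	static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
			pmd_t *pmd, int flags)
	{
		pmd_t _pmd;

		/* only mark the pmd dirty when write access was requested */
		_pmd = pmd_mkyoung(*pmd);
		if (flags & FOLL_WRITE)
			_pmd = pmd_mkdirty(_pmd);
		if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
					pmd, _pmd, flags & FOLL_WRITE))
			update_mmu_cache_pmd(vma, addr, pmd);
	}
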
>
> Looks good, just out of curiosity: is that the same patch as for Xenial, or
> slightly context-adjusted (which I failed to notice)? And tested.
>
> -Stefan
Yes, just a context adjustment. The xenial patch couldn't be applied directly
to trusty.
Cascardo.
>
> >
> > Cascardo.
> >
> > ---
> > mm/huge_memory.c | 14 ++++----------
> > 1 file changed, 4 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 43332139f646..0ac4174cc690 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1265,17 +1265,11 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
> > VM_BUG_ON(!PageHead(page));
> > if (flags & FOLL_TOUCH) {
> > pmd_t _pmd;
> > - /*
> > - * We should set the dirty bit only for FOLL_WRITE but
> > - * for now the dirty bit in the pmd is meaningless.
> > - * And if the dirty bit will become meaningful and
> > - * we'll only set it with FOLL_WRITE, an atomic
> > - * set_bit will be required on the pmd to set the
> > - * young bit, instead of the current set_pmd_at.
> > - */
> > - _pmd = pmd_mkyoung(pmd_mkdirty(*pmd));
> > + _pmd = pmd_mkyoung(*pmd);
> > + if (flags & FOLL_WRITE)
> > + _pmd = pmd_mkdirty(_pmd);
> > if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
> > - pmd, _pmd, 1))
> > + pmd, _pmd, flags & FOLL_WRITE))
> > update_mmu_cache_pmd(vma, addr, pmd);
> > }
> > if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
> >
>
>