[3.19.y-ckt stable] Patch "arm/arm64: KVM: Enforce Break-Before-Make on Stage-2 page tables" has been added to the 3.19.y-ckt tree
Kamal Mostafa
kamal at canonical.com
Wed Jul 6 21:00:18 UTC 2016
This is a note to let you know that I have just added a patch titled
arm/arm64: KVM: Enforce Break-Before-Make on Stage-2 page tables
to the linux-3.19.y-queue branch of the 3.19.y-ckt extended stable tree
which can be found at:
https://git.launchpad.net/~canonical-kernel/linux/+git/linux-stable-ckt/log/?h=linux-3.19.y-queue
This patch is scheduled to be released in version 3.19.8-ckt23.
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.19.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Kamal
---8<------------------------------------------------------------
From 7d22495d3e8c1f7628ef144ac544e51690e929db Mon Sep 17 00:00:00 2001
From: Marc Zyngier <marc.zyngier at arm.com>
Date: Thu, 28 Apr 2016 16:16:31 +0100
Subject: arm/arm64: KVM: Enforce Break-Before-Make on Stage-2 page tables
commit d4b9e0790aa764c0b01e18d4e8d33e93ba36d51f upstream.
The ARM architecture mandates that when changing a page table entry
from a valid entry to another valid entry, an invalid entry must first
be written, the TLB invalidated, and only then the new entry written.
The current code doesn't respect this: it writes the new entry
directly and only then invalidates the TLB. Let's fix it up.
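
For illustration, the required ordering looks roughly like this (a
minimal sketch of the Level-3 case; bbm_update_pte is a hypothetical
helper, and the get_page() refcounting done by the real functions in
the patch below is omitted):

/*
 * Sketch of Break-Before-Make for a stage-2 PTE update, using the
 * same helpers as the patch below.
 */
static void bbm_update_pte(struct kvm *kvm, pte_t *pte, pte_t new_pte,
			   phys_addr_t addr)
{
	pte_t old_pte = *pte;

	if (pte_present(old_pte)) {
		/* Break: install an invalid entry first ... */
		kvm_set_pte(pte, __pte(0));
		/* ... and flush any stale TLB entry for this IPA. */
		kvm_tlb_flush_vmid_ipa(kvm, addr);
	}

	/* Make: only now is it safe to write the new valid entry. */
	kvm_set_pte(pte, new_pte);
}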
Reported-by: Christoffer Dall <christoffer.dall at linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier at arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall at linaro.org>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
arch/arm/kvm/mmu.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 966f8d2..b4c2d43 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -845,11 +845,14 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
 
 	old_pmd = *pmd;
-	kvm_set_pmd(pmd, *new_pmd);
-	if (pmd_present(old_pmd))
+	if (pmd_present(old_pmd)) {
+		pmd_clear(pmd);
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pmd));
+	}
+
+	kvm_set_pmd(pmd, *new_pmd);
 
 	return 0;
 }
@@ -886,12 +889,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 
 	/* Create 2nd stage page table mapping - Level 3 */
 	old_pte = *pte;
-	kvm_set_pte(pte, *new_pte);
-	if (pte_present(old_pte))
+	if (pte_present(old_pte)) {
+		kvm_set_pte(pte, __pte(0));
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pte));
+	}
 
+	kvm_set_pte(pte, *new_pte);
 	return 0;
 }
--
2.7.4