[PATCH 1/3] mm: move tlb_table_flush to tlb_flush_mmu_free

Tyler Hicks tyhicks at canonical.com
Fri Oct 19 22:38:19 UTC 2018

From: Nicholas Piggin <npiggin at gmail.com>

BugLink: https://launchpad.net/bugs/1798897

There is no need to call this from tlb_flush_mmu_tlbonly; it logically
belongs with tlb_flush_mmu_free.  This makes future fixes simpler.

[ This was originally done to allow code consolidation for the
  mmu_notifier fix, but it also ends up helping simplify the
  code.  - Linus ]

Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
Acked-by: Will Deacon <will.deacon at arm.com>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: stable at kernel.org
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
(cherry picked from commit db7ddef301128dad394f1c0f77027f86ee9a4edb)
Signed-off-by: Tyler Hicks <tyhicks at canonical.com>
---
 mm/memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c865ec4f62c6..c5928cda7748 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -246,9 +246,6 @@ static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb_table_flush(tlb);
-#endif
 	__tlb_reset_range(tlb);
 }
 
@@ -256,6 +253,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb_table_flush(tlb);
+#endif
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
