[ 3.5.y.z extended stable ] Patch "tmpfs: fix shmem_getpage_gfp() VM_BUG_ON" has been added to staging queue
Herton Ronaldo Krzesinski
herton.krzesinski at canonical.com
Fri Dec 7 16:06:09 UTC 2012
This is a note to let you know that I have just added a patch titled

    tmpfs: fix shmem_getpage_gfp() VM_BUG_ON

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.

For more information about the 3.5.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Herton
------
From ccb8539a11acffd2012be03ac3dc70ae4597bf62 Mon Sep 17 00:00:00 2001
From: Hugh Dickins <hughd at google.com>
Date: Fri, 16 Nov 2012 14:15:03 -0800
Subject: [PATCH] tmpfs: fix shmem_getpage_gfp() VM_BUG_ON

commit 215c02bc33bbd5ff4d7379a909462d11f0103218 upstream.

Fuzzing with trinity hit the "impossible" VM_BUG_ON(error) (which Fedora
has converted to WARNING) in shmem_getpage_gfp():

  WARNING: at mm/shmem.c:1151 shmem_getpage_gfp+0xa5c/0xa70()
  Pid: 29795, comm: trinity-child4 Not tainted 3.7.0-rc2+ #49
  Call Trace:
    warn_slowpath_common+0x7f/0xc0
    warn_slowpath_null+0x1a/0x20
    shmem_getpage_gfp+0xa5c/0xa70
    shmem_fault+0x4f/0xa0
    __do_fault+0x71/0x5c0
    handle_pte_fault+0x97/0xae0
    handle_mm_fault+0x289/0x350
    __do_page_fault+0x18e/0x530
    do_page_fault+0x2b/0x50
    page_fault+0x28/0x30
    tracesys+0xe1/0xe6

Thanks to Johannes for pointing to truncation: free_swap_and_cache()
only does a trylock on the page, so the page lock we've held since
before confirming swap is not enough to protect against truncation.

What cleanup is needed in this case? Just delete_from_swap_cache(),
which takes care of the memcg uncharge.

Signed-off-by: Hugh Dickins <hughd at google.com>
Reported-by: Dave Jones <davej at redhat.com>
Cc: Johannes Weiner <hannes at cmpxchg.org>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Herton Ronaldo Krzesinski <herton.krzesinski at canonical.com>
---
mm/shmem.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 06d48ca..c7f7a77 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1154,8 +1154,20 @@ repeat:
 		if (!error) {
 			error = shmem_add_to_page_cache(page, mapping, index,
 						gfp, swp_to_radix_entry(swap));
-			/* We already confirmed swap, and make no allocation */
-			VM_BUG_ON(error);
+			/*
+			 * We already confirmed swap under page lock, and make
+			 * no memory allocation here, so usually no possibility
+			 * of error; but free_swap_and_cache() only trylocks a
+			 * page, so it is just possible that the entry has been
+			 * truncated or holepunched since swap was confirmed.
+			 * shmem_undo_range() will have done some of the
+			 * unaccounting, now delete_from_swap_cache() will do
+			 * the rest (including mem_cgroup_uncharge_swapcache).
+			 * Reset swap.val? No, leave it so "failed" goes back to
+			 * "repeat": reading a hole and writing should succeed.
+			 */
+			if (error)
+				delete_from_swap_cache(page);
 		}
 		if (error)
 			goto failed;
--
1.7.9.5