[3.8.y.z extended stable] Patch "fs: buffer: move allocation failure loop into the allocator" has been added to staging queue
Kamal Mostafa
kamal at canonical.com
Tue Oct 29 17:54:50 UTC 2013
This is a note to let you know that I have just added a patch titled
fs: buffer: move allocation failure loop into the allocator
to the linux-3.8.y-queue branch of the 3.8.y.z extended stable tree
which can be found at:
http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y-queue
This patch is scheduled to be released in version 3.8.13.12.
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.8.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Kamal
------
From f53728c348dfe9bcadea103a5306e934e98fc838 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes at cmpxchg.org>
Date: Wed, 16 Oct 2013 13:47:00 -0700
Subject: fs: buffer: move allocation failure loop into the allocator
commit 84235de394d9775bfaa7fa9762a59d91fef0c1fc upstream.
Buffer allocation has a very crude indefinite loop around waking the
flusher threads and performing global NOFS direct reclaim because it
cannot handle allocation failures.
The most immediate problem with this is that the allocation may fail due
to a memory cgroup limit, where flushers plus direct reclaim might not
make any progress towards resolving the situation at all: unlike the
global case, a memory cgroup may have no page cache at all, only
anonymous pages and no swap. This situation leads to a reclaim livelock
with excessive IO from waking the flushers and thrashing unrelated
filesystem cache in a tight loop.
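
For reference, the pre-patch slow path behaves roughly like the sketch
below. This is a paraphrase of __getblk_slow() and free_more_memory() in
fs/buffer.c as they looked before this change, not literal kernel code;
it is only meant to illustrate the loop described above:

    /* Paraphrased sketch of the pre-patch retry loop (not literal code) */
    for (;;) {
            struct buffer_head *bh;

            bh = __find_get_block(bdev, block, size);
            if (bh)
                    return bh;

            /*
             * Ask for more buffers; on failure, improvise: wake the
             * flusher threads and run global NOFS direct reclaim, then
             * try again.  There is no way to give up.
             */
            if (grow_buffers(bdev, block, size) == 0)
                    free_more_memory();
    }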
Use __GFP_NOFAIL allocations for buffers for now. This makes sure that
any looping happens in the page allocator, which knows how to
orchestrate kswapd, direct reclaim, and the flushers sensibly. It also
lets memory cgroups detect allocations that cannot handle failure and
ultimately bypass the limit if reclaim cannot make progress.
Reported-by: azurIt <azurit at pobox.sk>
Signed-off-by: Johannes Weiner <hannes at cmpxchg.org>
Cc: Michal Hocko <mhocko at suse.cz>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
fs/buffer.c | 14 ++++++++++++--
mm/memcontrol.c | 2 ++
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 7a75c3e..be83882 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -965,9 +965,19 @@ grow_dev_page(struct block_device *bdev, sector_t block,
 	struct buffer_head *bh;
 	sector_t end_block;
 	int ret = 0;		/* Will call free_more_memory() */
+	gfp_t gfp_mask;
 
-	page = find_or_create_page(inode->i_mapping, index,
-		(mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
+	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
+	gfp_mask |= __GFP_MOVABLE;
+	/*
+	 * XXX: __getblk_slow() can not really deal with failure and
+	 * will endlessly loop on improvised global reclaim.  Prefer
+	 * looping in the allocator rather than here, at least that
+	 * code knows what it's doing.
+	 */
+	gfp_mask |= __GFP_NOFAIL;
+
+	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
 	if (!page)
 		return ret;
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6b7ff19..b150e66f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2614,6 +2614,8 @@ done:
 	return 0;
 nomem:
 	*ptr = NULL;
+	if (gfp_mask & __GFP_NOFAIL)
+		return 0;
 	return -ENOMEM;
 bypass:
 	*ptr = root_mem_cgroup;
--
1.8.1.2