[3.19.y-ckt stable] Patch "ext4: allocate entire range in zero range" has been added to staging queue
Kamal Mostafa
kamal at canonical.com
Thu May 21 20:36:26 UTC 2015
This is a note to let you know that I have just added a patch titled
ext4: allocate entire range in zero range
to the linux-3.19.y-queue branch of the 3.19.y-ckt extended stable tree
which can be found at:
http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.19.y-queue
This patch is scheduled to be released in version 3.19.8-ckt1.
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.19.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Kamal
------
From dd0d06927a3ac4834febc827a6baf88a5cfcf001 Mon Sep 17 00:00:00 2001
From: Lukas Czerner <lczerner at redhat.com>
Date: Fri, 3 Apr 2015 00:09:13 -0400
Subject: ext4: allocate entire range in zero range
commit 0f2af21aae11972fa924374ddcf52e88347cf5a8 upstream.
Currently there is a bug in the zero range code which causes zero range
calls to allocate only the block-aligned portion of the range, while
ignoring the rest in some cases.
In some cases, namely if the end of the range is past i_size, we do
attempt to preallocate the last nonaligned block. However, this might
cause the kernel to BUG() on some carefully designed zero range requests
on setups where page size > block size.
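As an illustration only (not part of the patch; the file path, offsets and
block size below are assumptions), the kind of request described above is an
unaligned FALLOC_FL_ZERO_RANGE call on an ext4 filesystem formatted with 1k
blocks on a 4k-page machine, so that page size > block size:

/* Minimal userspace sketch of an unaligned zero range request.
 * Assumes /mnt/ext4-1k is an ext4 filesystem with 1k blocks on a
 * system with 4k pages (page size > block size).
 */
#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate(), FALLOC_FL_ZERO_RANGE (glibc) */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/ext4-1k/testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Both ends of the range fall inside a 1k block, so the range
	 * has partial (nonaligned) edge blocks. */
	if (fallocate(fd, FALLOC_FL_ZERO_RANGE, 500, 3000) < 0)
		perror("fallocate");

	close(fd);
	return 0;
}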
Fix this problem by first preallocating the entire range, including
the nonaligned edges, and converting the written extents to unwritten
in the next step. This approach will also give us the advantage of
having the range be as linearly contiguous as possible.
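The block-rounding arithmetic the fix relies on (the round_down/round_up
expressions passed to ext4_alloc_file_blocks in the diff below) can be
sketched in userspace like this; the block size and request values are
illustrative assumptions, not taken from the patch:

/* Expand an unaligned byte range outward to block boundaries so the
 * preallocation covers the partial edge blocks as well. */
#include <stdio.h>

int main(void)
{
	unsigned int blkbits = 10;               /* assumed 1k block size */
	unsigned long blocksize = 1UL << blkbits;
	unsigned long offset = 500, len = 3000;  /* hypothetical request */

	unsigned long start = offset & ~(blocksize - 1);      /* round_down */
	unsigned long end = (offset + len + blocksize - 1)
				& ~(blocksize - 1);           /* round_up */

	/* Blocks handed to the preallocation step, including both edges:
	 * here first block 0 and 4 blocks in total. */
	printf("first block %lu, nblocks %lu\n",
	       start >> blkbits, (end - start) >> blkbits);
	return 0;
}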
Signed-off-by: Lukas Czerner <lczerner at redhat.com>
Signed-off-by: Theodore Ts'o <tytso at mit.edu>
Reference: CVE-2015-0275
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
fs/ext4/extents.c | 31 +++++++++++++++++++------------
1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index bed4308..aa52242 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4803,12 +4803,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
         else
                 max_blocks -= lblk;
 
-        flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT |
-                EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
-                EXT4_EX_NOCACHE;
-        if (mode & FALLOC_FL_KEEP_SIZE)
-                flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
-
         mutex_lock(&inode->i_mutex);
 
         /*
@@ -4825,15 +4819,28 @@ static long ext4_zero_range(struct file *file, loff_t offset,
                 ret = inode_newsize_ok(inode, new_size);
                 if (ret)
                         goto out_mutex;
-                /*
-                 * If we have a partial block after EOF we have to allocate
-                 * the entire block.
-                 */
-                if (partial_end)
-                        max_blocks += 1;
         }
 
+        flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
+        if (mode & FALLOC_FL_KEEP_SIZE)
+                flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
+
+        /* Preallocate the range including the unaligned edges */
+        if (partial_begin || partial_end) {
+                ret = ext4_alloc_file_blocks(file,
+                                round_down(offset, 1 << blkbits) >> blkbits,
+                                (round_up((offset + len), 1 << blkbits) -
+                                 round_down(offset, 1 << blkbits)) >> blkbits,
+                                new_size, flags, mode);
+                if (ret)
+                        goto out_mutex;
+
+        }
+
+        /* Zero range excluding the unaligned edges */
         if (max_blocks > 0) {
+                flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
+                          EXT4_EX_NOCACHE);
 
                 /* Now release the pages and zero block aligned part of pages*/
                 truncate_pagecache_range(inode, start, end - 1);
--
1.9.1