[PATCH 0/1][SAUCY][SRU] eCryptfs 32 bit large file corruption fix

Tyler Hicks tyhicks at canonical.com
Fri Oct 25 00:23:45 UTC 2013


[SRU Justification]

Commit 24d15266bd86b7961f309a962fa3aa177a78c49f introduced a data corruption
regression on 32 bit architectures when writing past the 4 GB mark in a file.

[Impact]

32 bit users experience corruption of large files.

[Fix]

A cast to a 64-bit type is needed when shifting the page's index to compute
the file offset; without it, the shift overflows on 32 bit architectures.
Colin and I independently identified the problem. It is a simple fix that is
currently located in the eCryptfs next branch:

http://git.kernel.org/cgit/linux/kernel/git/tyhicks/ecryptfs.git/commit/?h=next&id=43b7c6c6a4e3916edd186ceb61be0c67d1e0969e

I've sent a pull request to Linus, but he has not yet had a chance to pull in
the change:

https://lkml.org/lkml/2013/10/24/424

[Test Case]

Inside of an eCryptfs mount on an i686 Ubuntu install, create a file slightly
larger than 4 GB filled with zeros (the dd invocation below writes 4096 extra
4096-byte blocks, i.e. 16 MiB past the 4 GB boundary). Then inspect the file
for non-zero bytes.

$ rm zeros
$ dd if=/dev/zero of=zeros bs=4096 count=$((4*1024*1024*1024/4096+4096))
1052672+0 records in
1052672+0 records out
4311744512 bytes (4.3 GB) copied, 226.133 s, 19.1 MB/s
$ hexdump -C zeros
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
101000000

The hexdump output should show all zeros. An unpatched kernel will show
non-zero bytes.
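For an automated pass/fail check, the manual hexdump inspection can be
replaced by counting non-zero bytes. This is a sketch assuming the file is
named `zeros` as above; the small fallback dd here is only so the snippet
runs standalone, the real test should use the 4 GB file.

```shell
# Recreate a (tiny, illustrative) test file if it is missing; the real
# test case uses the 4 GB dd command above.
[ -f zeros ] || dd if=/dev/zero of=zeros bs=4096 count=1 2>/dev/null

# Delete every zero byte and count what is left; 0 means the file is clean.
nonzero=$(tr -d '\0' < zeros | wc -c)
if [ "$nonzero" -eq 0 ]; then
	echo "PASS: file is all zeros"
else
	echo "FAIL: $nonzero non-zero bytes found"
fi
```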

Notes:

I've tested this on amd64 and i686 using the manual test mentioned above, the
eCryptfs regression test suite, and by performing kernel builds inside of the
eCryptfs mount.

Tyler
