[3.13.y.z extended stable] Patch "mm, slab: initialize object alignment on cache creation" has been added to staging queue

Kamal Mostafa kamal at canonical.com
Thu Oct 9 20:51:47 UTC 2014

This is a note to let you know that I have just added a patch titled

    mm, slab: initialize object alignment on cache creation

to the linux-3.13.y-queue branch of the 3.13.y.z extended stable tree 
which can be found at:


This patch is scheduled to be released in version

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.13.y.z tree, see



From a7defb500630beefd82f9923f3302dc823cde18a Mon Sep 17 00:00:00 2001
From: David Rientjes <rientjes at google.com>
Date: Thu, 25 Sep 2014 16:05:20 -0700
Subject: mm, slab: initialize object alignment on cache creation

commit d4a5fca592b9ab52b90bb261a90af3c8f53be011 upstream.

Since commit 4590685546a3 ("mm/sl[aou]b: Common alignment code"), the
"ralign" automatic variable in __kmem_cache_create() may be used
uninitialized.
The proper alignment defaults to BYTES_PER_WORD and can be overridden by
SLAB_RED_ZONE or the alignment specified by the caller.
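
To make the resolution order concrete, here is a minimal user-space
sketch of the logic described above (the flag and constant values are
stand-ins chosen for illustration; only the names mirror mm/slab.c):

	#include <stddef.h>
	#include <stdio.h>

	#define BYTES_PER_WORD sizeof(void *)
	#define REDZONE_ALIGN  8UL    /* stand-in; the kernel computes its own */
	#define SLAB_RED_ZONE  0x1UL  /* stand-in flag bit */

	/*
	 * Post-patch behaviour: alignment defaults to BYTES_PER_WORD at
	 * declaration, is raised to REDZONE_ALIGN under SLAB_RED_ZONE,
	 * and a larger caller-specified alignment overrides both.
	 * Pre-patch, "ralign" had no initializer, so a cache created
	 * without the relevant flags could read an indeterminate value.
	 */
	static size_t resolve_align(unsigned long flags, size_t caller_align)
	{
		size_t ralign = BYTES_PER_WORD;  /* the fix: a defined default */

		if (flags & SLAB_RED_ZONE)
			ralign = REDZONE_ALIGN;
		if (caller_align > ralign)       /* caller can only raise it */
			ralign = caller_align;
		return ralign;
	}

	int main(void)
	{
		printf("%zu\n", resolve_align(0, 0));             /* word size */
		printf("%zu\n", resolve_align(SLAB_RED_ZONE, 0)); /* redzone */
		printf("%zu\n", resolve_align(0, 64));            /* caller wins */
		return 0;
	}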

This fixes https://bugzilla.kernel.org/show_bug.cgi?id=85031

Signed-off-by: David Rientjes <rientjes at google.com>
Reported-by: Andrei Elovikov <a.elovikov at gmail.com>
Acked-by: Christoph Lameter <cl at linux.com>
Cc: Pekka Enberg <penberg at kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim at lge.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
 mm/slab.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index eb043bf..f985e8f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2139,7 +2139,8 @@ static int __init_refok setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
 int
 __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 {
-	size_t left_over, freelist_size, ralign;
+	size_t left_over, freelist_size;
+	size_t ralign = BYTES_PER_WORD;
 	gfp_t gfp;
 	int err;
 	size_t size = cachep->size;
@@ -2172,14 +2173,6 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 		size &= ~(BYTES_PER_WORD - 1);
 	}
 
-	/*
-	 * Redzoning and user store require word alignment or possibly larger.
-	 * Note this will be overridden by architecture or caller mandated
-	 * alignment if either is greater than BYTES_PER_WORD.
-	 */
-	if (flags & SLAB_STORE_USER)
-		ralign = BYTES_PER_WORD;
-
 	if (flags & SLAB_RED_ZONE) {
 		ralign = REDZONE_ALIGN;
 		/* If redzoning, ensure that the second redzone is suitably
