[SRU][Xenial][PATCH 0/6] Backport Ceph CRUSH_TUNABLES5 support
Billy Olsen
billy.olsen at canonical.com
Wed Nov 1 20:37:47 UTC 2017
From: Billy Olsen <billy.olsen at gmail.com>
BugLink: https://bugs.launchpad.net/bugs/1728739
[Impact]
Attempting to use the kernel rbd driver to map images from a
Jewel (Xenial) or Luminous (Artful & Pike Ubuntu Cloud Archive)
server causes the map to fail. This is due to Ceph's addition
of the new CRUSH_TUNABLES5 feature, which is not understood by
the 4.4 kernel client.
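On an unpatched 4.4 client the failure typically surfaces as a
feature-bit mismatch logged by libceph when the map is attempted.
An illustrative session (hedged: the monitor address and the exact
feature bit values depend on the cluster, though the missing bit
0x400000000000000 corresponds to CRUSH_TUNABLES5):
$ sudo rbd map --pool rbd test
(map fails; dmesg shows why)
$ dmesg | grep libceph
libceph: mon0 <mon-addr> feature set mismatch, my <client-bits> <
server's <server-bits>, missing 400000000000000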
[Fix]
Backport the 5 original patches (clean cherry-picks) that added
CRUSH_TUNABLES5 support in the upstream 4.5 kernel. Also backport
an additional patch that allows the client to understand the new
v7 format of the MOSDOpReply message.
- https://www.spinics.net/lists/ceph-devel/msg28421.html
- https://www.spinics.net/lists/ceph-devel/msg28458.html
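As a cluster-side alternative for clients that cannot take this
backport, the crush tunables profile can be lowered so that
chooseleaf_stable is not required (a known workaround, at the cost
of giving up the newer tunables behaviour):
$ ceph osd crush tunables hammer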
[Test Case]
1. Deploy a Jewel/Luminous Ceph Cluster with crush tunables set
to optimal.
2. Create an RBD image suitable for the kernel client:
$ rbd create --pool rbd --image-feature layering --size 1G test
3. Map the rbd device on the local server node:
$ rbd map --pool rbd test
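For reference, the tunables in step 1 can be set on the cluster
with 'ceph osd crush tunables optimal'. A sketch of the expected
result on a patched kernel follows; the device name /dev/rbd0 is
assumed and may differ:
$ rbd map --pool rbd test
/dev/rbd0
$ rbd showmapped
id pool image snap device
0  rbd  test  -    /dev/rbd0
$ rbd unmap /dev/rbd0
On an unpatched 4.4 kernel, step 3 instead fails with the feature
set mismatch shown under [Impact].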
[Regression Potential]
Minimal. The changes are limited to the kernel rbd/libceph code,
and the new code paths primarily affect clients connecting to
clusters that enable the new tunables options.
[Notes]
Only applicable to the Xenial LTS 4.4 kernel, since the code was
included upstream in the 4.5 kernel.
Ilya Dryomov (6):
crush: ensure bucket id is valid before indexing buckets array
crush: ensure take bucket value is valid
crush: add chooseleaf_stable tunable
crush: decode and initialize chooseleaf_stable
libceph: advertise support for TUNABLES5
libceph: MOSDOpReply v7 encoding
include/linux/ceph/ceph_features.h | 16 +++++++++++++++-
include/linux/crush/crush.h | 8 +++++++-
net/ceph/crush/mapper.c | 33 ++++++++++++++++++++++++++-------
net/ceph/osd_client.c | 10 ++++++++++
net/ceph/osdmap.c | 19 ++++++++++++++-----
5 files changed, 72 insertions(+), 14 deletions(-)
--
2.14.1