[Bug 1649872] Re: Ceph cluster mds failed during cephfs usage

James Page <james.page@ubuntu.com>
Wed Jul 5 14:40:17 UTC 2017


I'm unable to reproduce this on s390x either; these are the steps I took to try to reproduce it:

Bootstrap the local LXD provider using the ZFS backend.

juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd && juju config ceph-osd osd-devices=/srv/osd use-direct-io=False
juju deploy ceph-fs
juju add-relation ceph-mon ceph-osd
juju add-relation ceph-mon ceph-fs
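
Before mounting, the cluster state can be sanity-checked from one of the monitor units, roughly as follows (ceph-mon/0 is simply the first monitor unit in the model):

juju status
juju ssh ceph-mon/0 "sudo ceph -s"
juju ssh ceph-mon/0 "sudo ceph mds stat"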

I then mounted the resulting CephFS from the host system:

sudo mount -t ceph 10.189.59.86:6789:/ /mnt/cephfs -o name=admin,secret=AQDW91xZWcFvIhAAbWVm3x6xx1LBDgvW7RyP9g==

after which I was able to write files to /mnt/cephfs:

ubuntu@s4lpb:/mnt/cephfs$ ls -lrt
total 314880
-rw-r--r-- 1 root root 322437120 Jul  3 21:54 zesty-server-cloudimg-arm64.img
-rw-r--r-- 1 root root         0 Jul  5 10:36 a
-rw-r--r-- 1 root root         0 Jul  5 10:36 b

ubuntu@s4lpb:/mnt/cephfs$ df -h
Filesystem                        Size  Used Avail Use% Mounted on
[...]
10.189.59.86:6789:/                43G  6.1G   37G  15% /mnt/cephfs
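
As an aside, the admin secret can be kept off the mount command line (and out of the process list) by using a secret file; a rough equivalent, assuming ceph-common provides the mount.ceph helper on the host and /etc/ceph/admin.secret contains only the key string:

sudo sh -c 'echo AQDW91xZWcFvIhAAbWVm3x6xx1LBDgvW7RyP9g== > /etc/ceph/admin.secret'
sudo mount -t ceph 10.189.59.86:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret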


Confirming kernel version:

$ uname -a
Linux s4lpb 4.4.0-67-generic #88-Ubuntu SMP Wed Mar 8 16:39:07 UTC 2017 s390x s390x s390x GNU/Linux
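
The cephfs client in play here is the in-kernel one; which ceph module build is loaded can be double-checked with something like:

lsmod | grep ceph
modinfo ceph | grep -E '^(filename|vermagic)'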


** Changed in: ceph (Ubuntu)
       Status: Confirmed => Incomplete

** Changed in: ubuntu-z-systems
       Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1649872

Title:
  Ceph cluster mds failed during cephfs usage

Status in Ubuntu on IBM z Systems:
  Incomplete
Status in ceph package in Ubuntu:
  Incomplete

Bug description:
  Ceph cluster mds failed during cephfs usage
   
  ---uname output---
  Linux testU 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:31:26 UTC 2016 s390x s390x s390x GNU/Linux
   
  ---Additional Hardware Info---
  System Z s390x LPAR 

   
  Machine Type = Ubuntu VM on s390x LPAR 
   
  ---Debugger---
  A debugger is not configured
   
  ---Steps to Reproduce---
   On an s390x LPAR, 4 Ubuntu VMs:
  VM 1 - ceph monitor, ceph mds
  VM 2 - ceph monitor, ceph osd
  VM 3 - ceph monitor, ceph osd
  VM 4 - client for using CephFS
  I installed a Ceph cluster on the first 3 VMs and used the 4th VM as the CephFS client. I mounted the CephFS share and tried to touch a file in the mount point:
  root@testU:~# ceph osd pool ls
  rbd
  libvirt-pool
  root@testU:~# ceph osd pool create cephfs1_data 32
  pool 'cephfs1_data' created
  root@testU:~# ceph osd pool create cephfs1_metadata 32
  pool 'cephfs1_metadata' created
  root@testU:~# ceph osd pool ls
  rbd
  libvirt-pool
  cephfs1_data
  cephfs1_metadata
  root@testU:~# ceph fs new cephfs1 cephfs1_metadata cephfs1_data
  new fs with metadata pool 5 and data pool 4
  root@testU:~# ceph fs ls
  name: cephfs1, metadata pool: cephfs1_metadata, data pools: [cephfs1_data ]

  
  root@testU:~# ceph mds stat
  e37: 1/1/1 up {2:0=mon1=up:active}
  root@testU:~# ceph -s
      cluster 9f054e62-10e5-4b58-adb9-03d27a360bdc
       health HEALTH_OK
       monmap e1: 3 mons at {mon1=192.168.122.144:6789/0,osd1=192.168.122.233:6789/0,osd2=192.168.122.73:6789/0}
              election epoch 4058, quorum 0,1,2 osd2,mon1,osd1
        fsmap e37: 1/1/1 up {2:0=mon1=up:active}
       osdmap e62: 2 osds: 2 up, 2 in
              flags sortbitwise
        pgmap v2011: 256 pgs, 4 pools, 4109 MB data, 1318 objects
              12371 MB used, 18326 MB / 30698 MB avail
                   256 active+clean
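
  For reference, a minimal client-side ceph.conf consistent with the monmap above would look roughly like this (a sketch only; the fsid and monitor addresses are taken from the ceph -s output above):

  [global]
  fsid = 9f054e62-10e5-4b58-adb9-03d27a360bdc
  mon_host = 192.168.122.144,192.168.122.233,192.168.122.73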

  
  root@testU:~# ceph auth get client.admin | grep key
  exported keyring for client.admin
  	key = AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==
  root@testU:~# mount -t ceph 192.168.122.144:6789:/ /mnt/cephfs -o name=admin,secret=AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==,context="system_u:object_r:tmp_t:s0"
  root@testU:~#
  root@testU:~# mount |grep ceph
  192.168.122.144:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)

  
  root@testU:~# ls -l /mnt/cephfs/
  total 0
  root@testU:~# touch /mnt/cephfs/testfile
  [  759.865289] ceph: mds parse_reply err -5
  [  759.865293] ceph: mdsc_handle_reply got corrupt reply mds0(tid:2)
  root@testU:~# ls -l /mnt/cephfs/
  [  764.600952] ceph: mds parse_reply err -5
  [  764.600955] ceph: mdsc_handle_reply got corrupt reply mds0(tid:5)
  [  764.601343] ceph: mds parse_reply err -5
  [  764.601345] ceph: mdsc_handle_reply got corrupt reply mds0(tid:6)
  ls: reading directory '/mnt/cephfs/': Input/output error
  total 0
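
  A sketch of additional diagnostics that could help narrow this down (mds.mon1 is the active MDS shown by ceph mds stat above; the debug levels are only examples):

  # on the client: capture the full kernel-side ceph messages
  dmesg | grep -i ceph
  # on the MDS host: raise MDS/messenger debug and re-run the failing touch/ls
  ceph tell mds.mon1 injectargs '--debug-mds 20 --debug-ms 1'
  # then collect /var/log/ceph/ceph-mds.mon1.log from the MDS host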

   
  Userspace tool common name: cephfs ceph 
   
  The userspace tool has the following bit modes: 64-bit 

  Userspace rpm: -

  Userspace tool obtained from project website:  na

  -Attach ltrace and strace of userspace application.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/1649872/+subscriptions


