[Bug 1649872] Re: Ceph cluster mds failed during cephfs usage
James Page
james.page@ubuntu.com
Wed Jul 5 14:44:08 UTC 2017
For completeness:
$ juju status

Model    Controller  Cloud/Region   Version
default  localhost   lxd/localhost  2.0.2

App       Version  Status  Scale  Charm     Store       Rev  OS      Notes
ceph-fs   10.2.7   active      1  ceph-fs   jujucharms    3  ubuntu
ceph-mon  10.2.7   active      3  ceph-mon  jujucharms    9  ubuntu
ceph-osd  10.2.7   active      3  ceph-osd  jujucharms  243  ubuntu

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-fs/0*   active    idle   6        10.189.59.58           Unit is ready (1 MDS)
ceph-mon/0*  active    idle   0        10.189.59.86           Unit is ready and clustered
ceph-mon/1   active    idle   1        10.189.59.156          Unit is ready and clustered
ceph-mon/2   active    idle   2        10.189.59.246          Unit is ready and clustered
ceph-osd/0*  active    idle   3        10.189.59.93           Unit is ready (1 OSD)
ceph-osd/1   active    idle   4        10.189.59.82           Unit is ready (1 OSD)
ceph-osd/2   active    idle   5        10.189.59.204          Unit is ready (1 OSD)

Machine  State    DNS            Inst id        Series  AZ
0        started  10.189.59.86   juju-0e3568-0  xenial
1        started  10.189.59.156  juju-0e3568-1  xenial
2        started  10.189.59.246  juju-0e3568-2  xenial
3        started  10.189.59.93   juju-0e3568-3  xenial
4        started  10.189.59.82   juju-0e3568-4  xenial
5        started  10.189.59.204  juju-0e3568-5  xenial
6        started  10.189.59.58   juju-0e3568-6  xenial

Relation  Provides  Consumes  Type
mds       ceph-fs   ceph-mon  regular
mon       ceph-mon  ceph-mon  peer
mon       ceph-mon  ceph-osd  regular
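For anyone wanting to recreate this model, a rough sketch of the deployment commands (assuming an already bootstrapped lxd/localhost Juju 2.0 controller; the osd-devices path is an illustrative assumption, and charm revisions are left unpinned rather than taken from the status above):

# Sketch only: recreate the three-charm topology shown above.
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --config osd-devices=/dev/vdb   # device path is illustrative
juju deploy ceph-fs
juju add-relation ceph-osd ceph-mon
juju add-relation ceph-fs ceph-mon
juju status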
--
https://bugs.launchpad.net/bugs/1649872
Title:
Ceph cluster mds failed during cephfs usage
Status in Ubuntu on IBM z Systems:
Incomplete
Status in ceph package in Ubuntu:
Incomplete
Bug description:
Ceph cluster mds failed during cephfs usage
---uname output---
Linux testU 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:31:26 UTC 2016 s390x s390x s390x GNU/Linux
---Additional Hardware Info---
System Z s390x LPAR
Machine Type = Ubuntu VM on s390x LPAR
---Debugger---
A debugger is not configured
---Steps to Reproduce---
On an s390x LPAR, 4 Ubuntu VMs:
VM 1 - ceph monitor, ceph mds
VM 2 - ceph monitor, ceph osd
VM 3 - ceph monitor, ceph osd
VM 4 - client using cephfs
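The report does not state how the Ceph packages were installed; for reference only, one plausible way to lay down this topology with ceph-deploy (hostnames mon1/osd1/osd2 are taken from the ceph -s output further down; the vdb data disk is an illustrative assumption):

# Sketch only - the original report does not give the install method.
ceph-deploy new mon1 osd1 osd2
ceph-deploy install mon1 osd1 osd2
ceph-deploy mon create-initial
ceph-deploy osd create osd1:vdb osd2:vdb   # one OSD per OSD node, matching "2 osds" below
ceph-deploy mds create mon1                # MDS colocated with the first monitor
ceph-deploy admin mon1 osd1 osd2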
I installed the Ceph cluster on the first 3 VMs and used the 4th VM as a cephfs client. I mounted the cephfs share and tried to touch a file in the mount point:
root@testU:~# ceph osd pool ls
rbd
libvirt-pool
root@testU:~# ceph osd pool create cephfs1_data 32
pool 'cephfs1_data' created
root@testU:~# ceph osd pool create cephfs1_metadata 32
pool 'cephfs1_metadata' created
root@testU:~# ceph osd pool ls
rbd
libvirt-pool
cephfs1_data
cephfs1_metadata
root@testU:~# ceph fs new cephfs1 cephfs1_metadata cephfs1_data
new fs with metadata pool 5 and data pool 4
root@testU:~# ceph fs ls
name: cephfs1, metadata pool: cephfs1_metadata, data pools: [cephfs1_data ]
root@testU:~# ceph mds stat
e37: 1/1/1 up {2:0=mon1=up:active}
root@testU:~# ceph -s
    cluster 9f054e62-10e5-4b58-adb9-03d27a360bdc
     health HEALTH_OK
     monmap e1: 3 mons at {mon1=192.168.122.144:6789/0,osd1=192.168.122.233:6789/0,osd2=192.168.122.73:6789/0}
            election epoch 4058, quorum 0,1,2 osd2,mon1,osd1
      fsmap e37: 1/1/1 up {2:0=mon1=up:active}
     osdmap e62: 2 osds: 2 up, 2 in
            flags sortbitwise
      pgmap v2011: 256 pgs, 4 pools, 4109 MB data, 1318 objects
            12371 MB used, 18326 MB / 30698 MB avail
                 256 active+clean
root@testU:~# ceph auth get client.admin | grep key
exported keyring for client.admin
key = AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==
root@testU:~# mount -t ceph 192.168.122.144:6789:/ /mnt/cephfs -o name=admin,secret=AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==,context="system_u:object_r:tmp_t:s0"
root@testU:~#
root@testU:~# mount |grep ceph
192.168.122.144:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)
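As an aside, the kernel client can also read the key from a file via the secretfile= option, which keeps the secret out of the shell history and mount output; a minimal sketch, assuming the key above is saved to the (hypothetical) path /etc/ceph/admin.secret:

# Sketch only: equivalent mount using a secret file instead of a literal key.
echo 'AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 192.168.122.144:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret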
root@testU:~# ls -l /mnt/cephfs/
total 0
root@testU:~# touch /mnt/cephfs/testfile
[ 759.865289] ceph: mds parse_reply err -5
[ 759.865293] ceph: mdsc_handle_reply got corrupt reply mds0(tid:2)
root@testU:~# ls -l /mnt/cephfs/
[ 764.600952] ceph: mds parse_reply err -5
[ 764.600955] ceph: mdsc_handle_reply got corrupt reply mds0(tid:5)
[ 764.601343] ceph: mds parse_reply err -5
[ 764.601345] ceph: mdsc_handle_reply got corrupt reply mds0(tid:6)
ls: reading directory '/mnt/cephfs/': Input/output error
total 0
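For anyone debugging this further, raising MDS and messenger logging on the active MDS and collecting the kernel client messages should help narrow down where the reply decode fails; a sketch (mds.mon1 matches the active MDS in the fsmap above, and the log path is the Ubuntu default):

# Sketch only: gather more detail on the corrupt MDS replies.
ceph tell mds.mon1 injectargs '--debug_mds 20 --debug_ms 1'   # run on a monitor node
touch /mnt/cephfs/testfile                                    # retry the failing op on the client
dmesg | grep ceph                                             # capture the client-side errors
# then collect /var/log/ceph/ceph-mds.mon1.log from the MDS node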
Userspace tool common name: cephfs ceph
The userspace tool has the following bit modes: 64-bit
Userspace rpm: -
Userspace tool obtained from project website: na
-Attach ltrace and strace of userspace application.