[Bug 1649872] dmesg.log from client VM with ceph package version 10.2.3

bugproxy bugproxy at us.ibm.com
Mon Jan 9 13:19:29 UTC 2017


------- Comment (attachment only) From FOKIN at de.ibm.com 2016-12-19 05:20 EDT-------


** Attachment added: "dmesg.log from client VM with ceph package version 10.2.3"
   https://bugs.launchpad.net/bugs/1649872/+attachment/4801875/+files/dmesg_10_2_3.log

https://bugs.launchpad.net/bugs/1649872

Title:
  Ceph cluster mds failed during cephfs usage

Status in Ubuntu on IBM z Systems:
  New
Status in ceph package in Ubuntu:
  Incomplete

Bug description:
  Ceph cluster mds failed during cephfs usage
   
  ---uname output---
  Linux testU 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:31:26 UTC 2016 s390x s390x s390x GNU/Linux
   
  ---Additional Hardware Info---
  System Z s390x LPAR 

   
  Machine Type = Ubuntu VM on s390x LPAR 
   
  ---Debugger---
  A debugger is not configured
   
  ---Steps to Reproduce---
   Four Ubuntu VMs on an s390x LPAR (a rough deployment sketch follows the list):
  VM 1 - ceph monitor, ceph mds
  VM 2 - ceph monitor, ceph osd
  VM 3 - ceph monitor, ceph osd
  VM 4 - client for using cephfs
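
  Roughly, a cluster like this can be brought up with ceph-deploy; the sketch below assumes the hostnames mon1/osd1/osd2 seen in the monmap later in this report, and the OSD device name vdb is only illustrative:

  # run from an admin node with passwordless ssh to the three cluster VMs
  ceph-deploy new mon1 osd1 osd2             # initial cluster config with 3 monitors
  ceph-deploy install mon1 osd1 osd2         # install the ceph 10.2.x packages
  ceph-deploy mon create-initial             # deploy the monitors and gather keys
  ceph-deploy osd create osd1:vdb osd2:vdb   # one OSD per storage VM
  ceph-deploy mds create mon1                # single MDS on the first VM
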
  I installed the Ceph cluster on three of the VMs and used the fourth as a client for CephFS. After creating the filesystem I mounted the CephFS share and tried to touch a file in the mount point:
  root at testU:~# ceph osd pool ls
  rbd
  libvirt-pool
  root at testU:~# ceph osd pool create cephfs1_data 32
  pool 'cephfs1_data' created
  root at testU:~# ceph osd pool create cephfs1_metadata 32
  pool 'cephfs1_metadata' created
  root at testU:~# ceph osd pool ls
  rbd
  libvirt-pool
  cephfs1_data
  cephfs1_metadata
  root at testU:~# ceph fs new cephfs1 cephfs1_metadata cephfs1_data
  new fs with metadata pool 5 and data pool 4
  root at testU:~# ceph fs ls 
  name: cephfs1, metadata pool: cephfs1_metadata, data pools: [cephfs1_data ]

  
  root at testU:~# ceph mds stat
  e37: 1/1/1 up {2:0=mon1=up:active}
  root at testU:~# ceph -s
      cluster 9f054e62-10e5-4b58-adb9-03d27a360bdc
       health HEALTH_OK
       monmap e1: 3 mons at {mon1=192.168.122.144:6789/0,osd1=192.168.122.233:6789/0,osd2=192.168.122.73:6789/0}
              election epoch 4058, quorum 0,1,2 osd2,mon1,osd1
        fsmap e37: 1/1/1 up {2:0=mon1=up:active}
       osdmap e62: 2 osds: 2 up, 2 in
              flags sortbitwise
        pgmap v2011: 256 pgs, 4 pools, 4109 MB data, 1318 objects
              12371 MB used, 18326 MB / 30698 MB avail
                   256 active+clean

  
  root at testU:~# ceph auth get client.admin | grep key
  exported keyring for client.admin
  	key = AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==
  root at testU:~# mount -t ceph 192.168.122.144:6789:/ /mnt/cephfs -o name=admin,secret=AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==,context="system_u:object_r:tmp_t:s0"
  root at testU:~# 
  root at testU:~# mount |grep ceph
  192.168.122.144:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)
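
  (The same mount also works without putting the key on the command line: write the output of 'ceph auth get-key client.admin' to a file and pass it via the secretfile= option; the path below is only an example.)

  ceph auth get-key client.admin > /etc/ceph/admin.secret
  mount -t ceph 192.168.122.144:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret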

  
  root at testU:~# ls -l /mnt/cephfs/
  total 0
  root at testU:~# touch /mnt/cephfs/testfile
  [  759.865289] ceph: mds parse_reply err -5
  [  759.865293] ceph: mdsc_handle_reply got corrupt reply mds0(tid:2)
  root at testU:~# ls -l /mnt/cephfs/
  [  764.600952] ceph: mds parse_reply err -5
  [  764.600955] ceph: mdsc_handle_reply got corrupt reply mds0(tid:5)
  [  764.601343] ceph: mds parse_reply err -5
  [  764.601345] ceph: mdsc_handle_reply got corrupt reply mds0(tid:6)
  ls: reading directory '/mnt/cephfs/': Input/output error
  total 0
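
  To get more detail on the EIO (err -5) parse_reply failures, one option is to raise MDS debug logging (the MDS id is mon1, per the fsmap above), reproduce the touch, and collect the MDS log together with the client dmesg:

  # on the MDS node (VM 1)
  ceph tell mds.mon1 injectargs '--debug-mds 20 --debug-ms 1'
  tail -f /var/log/ceph/ceph-mds.mon1.log
  # on the client, after reproducing
  dmesg | grep ceph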

   
  Userspace tool common name: cephfs ceph 
   
  The userspace tool has the following bit modes: 64-bit 

  Userspace rpm: -

  Userspace tool obtained from project website:  na

  -Attach ltrace and strace of userspace application.



