Ceph deployment

James Page james.page at ubuntu.com
Thu Nov 19 22:14:17 UTC 2015


Hi Pshem

On Thu, Nov 19, 2015 at 10:04 PM, Pshem Kowalczyk <pshem.k at gmail.com> wrote:

> Hi,
>
> I'm trying to deploy ceph and ceph-osd, however with this config:
>
> ceph:
>  source: cloud:trusty-liberty
>  fsid: 015cc90c-8f06-11e5-be28-0050569axxxx
>  monitor-secret: AQB3QU5WiW3GEhAAVLK19SNzR46kXXXXXXX==
>  osd-devices: /dev/sdb
>  osd-reformat: 'yes'
>
> ceph-osd:
>  source: cloud:trusty-liberty
>  osd-devices: /dev/sdb
>  osd-reformat: 'yes'
>
> and a relation between ceph and ceph-osd I end up with status:
> No block devices detected using current configuration
>
> The devices are there and a closer inspection of ceph setup reveals that
> the keys are not copied onto the ceph-osd nodes and ceph is failing with:
>
> ERROR: osd init failed: (1) Operation not permitted
>

Having no keys in /etc/ceph is actually intended - the OSDs use a special
bootstrap key in /var/lib/ceph/bootstrap-osd
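As a quick sanity check on a ceph-osd unit (the keyring filename below is an assumption based on the standard Ceph layout, not something specific to the charm):

```shell
# On a ceph-osd unit: the bootstrap key lives outside /etc/ceph
ls -l /var/lib/ceph/bootstrap-osd/
# The keyring there should carry a [client.bootstrap-osd] entry
cat /var/lib/ceph/bootstrap-osd/ceph.keyring
```

If that directory is empty, the mon relation has not delivered the bootstrap key yet.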


> the only relation I have is ceph-osd:mon ceph:osd
>

That should be fine.
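For reference, that single relation is all that's needed to wire the OSDs to the mons, e.g.:

```shell
# Relate the ceph-osd application to the ceph mons
juju add-relation ceph-osd:mon ceph:osd
# Then watch the units settle
juju status ceph ceph-osd
```
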


>
> ceph -s on the mon nodes gives:
>
> # ceph -s
>     cluster 015cc90c-8f06-11e5-be28-0050569a302e
>      health HEALTH_ERR
>             464 pgs stuck inactive
>             464 pgs stuck unclean
>             no osds
>      monmap e1: 3 mons at {juju-machine-0-lxc-25=
> 10.0.11.79:6789/0,juju-machine-0-lxc-26=10.0.11.106:6789/0,juju-machine-0-lxc-27=10.0.11.107:6789/0
> }
>             election epoch 4, quorum 0,1,2
> juju-machine-0-lxc-25,juju-machine-0-lxc-26,juju-machine-0-lxc-27
>      osdmap e5: 0 osds: 0 up, 0 in
>       pgmap v6: 464 pgs, 3 pools, 0 bytes data, 0 objects
>             0 kB used, 0 kB / 0 kB avail
>                  464 creating
>

Looking at this output:

1) the mon cluster bootstrapped OK - which is good
2) you're running the ceph charm in LXC containers - which is unusual - the
ceph charm is a superset of the ceph-osd charm's function, so it's normally
run directly on hardware as well - but typically with just 3 units.
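A more typical layout would look something like this (a sketch - unit counts and placement are up to your deployment):

```shell
# Three ceph units on bare metal to form the mon cluster
juju deploy -n 3 ceph
# Scale the OSD-only units separately on additional machines
juju deploy ceph-osd
juju add-relation ceph-osd:mon ceph:osd
```
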

Could you provide the output of 'juju status' so we can see how you have
the charms laid out in the deployment? That might help.
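Something along these lines is easiest to paste back to the list (the yaml format is just a suggestion):

```shell
# Capture the full deployment layout, including containers and relations
juju status --format=yaml > juju-status.yaml
```
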

Cheers

James