[Bug 1831733] Re: ledmon incorrectly sets the status LED
Mathew Hodson
mathew.hodson at gmail.com
Sun Sep 1 03:28:10 UTC 2019
** Changed in: ledmon (Ubuntu)
Importance: Undecided => Low
--
You received this bug notification because you are a member of Ubuntu
Sponsors Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1831733
Title:
ledmon incorrectly sets the status LED
Status in OEM Priority Project:
Confirmed
Status in ledmon package in Ubuntu:
Confirmed
Bug description:
Description:
After creating a RAID volume, deleting it, and creating a second RAID
volume (using the same disks as the first volume, but fewer of them),
the status LEDs on the disks left in the container show ‘failure’.
Steps to reproduce:
1. Turn on ledmon:
# ledmon --all
2. Create RAID container:
# mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=3 /dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1 --run --force
3. Create first RAID volume:
# mdadm --create /dev/md/Volume --level=5 --chunk 64 --raid-devices=3 /dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1 --run --force
4. Stop first RAID volume:
# mdadm --stop /dev/md/Volume
5. Delete first RAID volume:
# mdadm --kill-subarray=0 /dev/md127
6. Create a second RAID volume in the same container (with fewer disks than the first RAID volume, using the same disks as in the first volume):
# mdadm --create /dev/md/Volume --level=1 --raid-devices=2 /dev/nvme5n1 /dev/nvme4n1 --run
7. Check the status LED on the container member disks which are not
part of the second RAID volume.
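The steps above can be consolidated into a single script. This is a sketch, not part of the report: it reuses the exact device names from the reproduction, the mdadm commands are destructive, and by default it only prints each command (set DRY_RUN=0 and run as root on scratch disks to actually execute them).

```shell
#!/bin/sh
# Sketch of the reproduction steps for bug 1831733.
# Destructive when executed: wipes RAID metadata on the listed NVMe devices.
# By default (DRY_RUN=1) each command is only printed, not executed.
set -e

DISKS="/dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1"

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$@"    # dry run: show the command
    else
        "$@"         # real run: execute it (requires root)
    fi
}

run ledmon --all                                                # 1. start ledmon
run mdadm --create /dev/md/imsm0 --metadata=imsm \
    --raid-devices=3 $DISKS --run --force                       # 2. IMSM container
run mdadm --create /dev/md/Volume --level=5 --chunk 64 \
    --raid-devices=3 $DISKS --run --force                       # 3. first volume (RAID5)
run mdadm --stop /dev/md/Volume                                 # 4. stop first volume
run mdadm --kill-subarray=0 /dev/md127                          # 5. delete first volume
run mdadm --create /dev/md/Volume --level=1 \
    --raid-devices=2 /dev/nvme5n1 /dev/nvme4n1 --run            # 6. second volume (RAID1)
# 7. now inspect the status LED on /dev/nvme2n1, the container member
#    that is not part of the second volume
```

In dry-run mode the script is safe to run anywhere; it simply lists the commands from steps 1 to 6 in order.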
Expected results:
Disks from the container which are not in the second volume should have the ‘normal’ status LED.
Actual results:
Disks from the container which are not in the second volume have the ‘failure’ status LED.
To manage notifications about this bug go to:
https://bugs.launchpad.net/oem-priority/+bug/1831733/+subscriptions