[Bug 1831733] Re: ledmon incorrectly sets the status LED
Pawel Baldysiak
1831733 at bugs.launchpad.net
Fri Mar 13 07:52:51 UTC 2020
Hi,
Are there any plans to fix this in bionic, the release for which this issue was originally reported?
Thanks
Pawel
--
You received this bug notification because you are a member of Ubuntu
Sponsors Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1831733
Title:
ledmon incorrectly sets the status LED
Status in OEM Priority Project:
Confirmed
Status in ledmon package in Ubuntu:
Fix Released
Bug description:
Description:
After creating a RAID volume, deleting it, and creating a second RAID
volume on the same disks as the first volume but with fewer of them,
the status LEDs on the disks left in the container show ‘failure’.
Steps to reproduce:
1. Turn on ledmon:
# ledmon --all
2. Create RAID container:
# mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=3 /dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1 --run --force
3. Create first RAID volume:
# mdadm --create /dev/md/Volume --level=5 --chunk 64 --raid-devices=3 /dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1 --run --force
4. Stop first RAID volume:
# mdadm --stop /dev/md/Volume
5. Delete first RAID volume:
# mdadm --kill-subarray=0 /dev/md127
6. Create a second RAID volume in the same container (with fewer disks than the first RAID, using the same disks as in the first volume):
# mdadm --create /dev/md/Volume --level=1 --raid-devices=2 /dev/nvme5n1 /dev/nvme4n1 --run
7. Verify the status LED on the container member disks which are not
part of the second RAID volume (see the sketch after this list).
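For reference, a minimal way to read the fault LED state back from software, assuming the backplane slots are exposed through a SES enclosure (this sysfs path is an assumption and differs per platform; on NVMe/VMD setups the LEDs may not be visible this way at all):
# grep . /sys/class/enclosure/*/*/fault
A value of 1 means the fault LED is asserted for that slot; 0 means it is off.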
Expected results:
Disks from the container which are not in the second volume should have a ‘normal’ status LED.
Actual results:
Disks from the container which are not in the second volume have a ‘failure’ status LED.
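As a temporary workaround (not a fix), the affected LEDs can usually be cleared by hand with ledctl, using the standard IBPI ‘normal’ pattern; the device below is one of the disks from the reproduction steps:
# ledctl normal=/dev/nvme2n1
Note that a running ledmon takes priority over ledctl, so it may reassert the wrong state on its next scan until ledmon itself is fixed.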
To manage notifications about this bug go to:
https://bugs.launchpad.net/oem-priority/+bug/1831733/+subscriptions