[Bug 1914911] Update Released
Łukasz Zemczak
1914911 at bugs.launchpad.net
Thu May 6 09:00:38 UTC 2021
The verification of the Stable Release Update for ceph has completed
successfully and the package is now being released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report. In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1914911
Title:
[SRU] bluefs doesn't compact log file
Status in Ubuntu Cloud Archive:
Invalid
Status in Ubuntu Cloud Archive queens series:
Fix Committed
Status in ceph package in Ubuntu:
Invalid
Status in ceph source package in Bionic:
Fix Released
Bug description:
[Impact]
For a certain type of workload, bluefs might never compact its log
file, which causes the bluefs log file to slowly grow to a huge size
(sometimes bigger than 1 TB on a 1.5 TB device).
The bluefs perf counters show more details when this issue happens:
e.g.
"bluefs": {
"gift_bytes": 811748818944,
"reclaim_bytes": 0,
"db_total_bytes": 888564350976,
"db_used_bytes": 867311747072,
"wal_total_bytes": 0,
"wal_used_bytes": 0,
"slow_total_bytes": 0,
"slow_used_bytes": 0,
"num_files": 11,
"log_bytes": 866545131520,
"log_compactions": 0,
"logged_bytes": 866542977024,
"files_written_wal": 2,
"files_written_sst": 3,
"bytes_written_wal": 32424281934,
"bytes_written_sst": 25382201
}
This bug could eventually cause an osd to crash and fail to restart, as it can't get through the bluefs replay phase during boot.
We might see the following log when trying to restart the osd:
bluefs mount failed to replay log: (5) Input/output error
As we can see, log_compactions is 0, which means the log was never
compacted, and the log file size (log_bytes) is already 800+ GB. After
compaction, the log file size would be reduced to around 1 GB.
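The snippet below is a minimal, hypothetical sketch (not part of this bug report) of how an operator could spot this condition by reading the same bluefs perf counters through the admin socket; the OSD ids and the 10 GiB alert threshold are assumptions and would need adjusting per cluster.

#!/usr/bin/env python3
# Hypothetical detection helper (not part of the SRU): flag OSDs whose
# bluefs log has never been compacted and has grown large. It must run on
# the OSD host, since it reads the admin socket via "ceph daemon".
import json
import subprocess

SUSPICIOUS_LOG_BYTES = 10 * 1024 ** 3  # 10 GiB; an arbitrary alert threshold


def bluefs_counters(osd_id):
    """Return the 'bluefs' section of an OSD's perf counters."""
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
    return json.loads(out).get("bluefs", {})


def check_osd(osd_id):
    c = bluefs_counters(osd_id)
    if c.get("log_compactions", 0) == 0 and c.get("log_bytes", 0) > SUSPICIOUS_LOG_BYTES:
        print(f"osd.{osd_id}: bluefs log is {c['log_bytes']} bytes and has "
              "never been compacted")


if __name__ == "__main__":
    # Assumed example: OSDs 0-2 run locally; adjust the ids for your host.
    for osd in range(3):
        check_osd(osd)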
[Test Case]
Deploy a test ceph cluster (Luminous 12.2.13, which has the bug) and
drive I/O. The compaction doesn't get triggered often when most of the
I/O is reads. So fill up the cluster initially with lots of writes and
then switch to heavy reads (no writes). The problem should then occur.
Smaller-sized OSDs are OK, as we're only interested in filling up the
OSD and growing the bluefs log.
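As a rough illustration of this test case, the sketch below (not from the original report) drives a write-then-read workload with rados bench and polls the bluefs counters on one OSD; the pool name "srutest", the OSD id and the run lengths are assumptions and would need adjusting for a real cluster.

#!/usr/bin/env python3
# Hypothetical reproduction driver (not from the bug report): fill a test
# pool with writes, then run read-only load while watching the bluefs log.
# The pool name, OSD id and run lengths below are assumptions.
import json
import subprocess
import time

POOL = "srutest"  # assumed throwaway test pool
OSD_ID = 0        # assumed OSD to watch; must be local for "ceph daemon"


def bluefs_log_state(osd_id):
    """Return (log_bytes, log_compactions) for one OSD."""
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
    b = json.loads(out).get("bluefs", {})
    return b.get("log_bytes", 0), b.get("log_compactions", 0)


# Phase 1: heavy writes to fill up the OSDs, keeping the objects around.
subprocess.check_call(
    ["rados", "bench", "-p", POOL, "600", "write", "--no-cleanup"])

# Phase 2: read-only load. With the bug, log_bytes keeps growing while
# log_compactions stays at 0.
for _ in range(10):
    subprocess.check_call(["rados", "bench", "-p", POOL, "300", "seq"])
    log_bytes, compactions = bluefs_log_state(OSD_ID)
    print(f"bluefs log_bytes={log_bytes} log_compactions={compactions}")
    time.sleep(5)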
[Where problems could occur]
This fix has been part of all upstream releases since Mimic, so it has had quite a lot of run time upstream.
The changes ensure that compaction happens more often, which by itself should not cause any problems; I can't see any real risk of regression.
[Other Info]
- It's only needed for Luminous (Bionic); all newer releases already include this fix.
- Upstream master PR: https://github.com/ceph/ceph/pull/17354
- Upstream Luminous PR: https://github.com/ceph/ceph/pull/34876/files
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1914911/+subscriptions