[Bug 1946211] Re: [SRU] "radosgw-admin bucket limit check" has duplicate entries if bucket count exceeds 1000 (max_entries)
Łukasz Zemczak
1946211@bugs.launchpad.net
Thu Dec 16 10:20:30 UTC 2021
Hey! Sadly, it looks like the riscv64 build FTBFS for the new version.
Since this arch is very flaky right now, I'll retry the build and see if
it succeeds. If it doesn't, someone will have to take a look at it
before we can release the package to focal-updates.
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1946211
Title:
[SRU] "radosgw-admin bucket limit check" has duplicate entries if
bucket count exceeds 1000 (max_entries)
Status in Ubuntu Cloud Archive:
Fix Released
Status in Ubuntu Cloud Archive ussuri series:
New
Status in ceph package in Ubuntu:
Fix Released
Status in ceph source package in Focal:
Fix Committed
Bug description:
The "radosgw-admin bucket limit check" command has a bug in octopus.
Since we do not clear the bucket list in RGWRadosUser::list_buckets()
before asking for the next "max_entries", they are appended to the
existing list and we end up counting the first ones again. This causes
duplicated entries in the output of "ragodgw-admin bucket limit check"
This bug is triggered if bucket count exceeds 1000 (default
max_entries).
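For illustration, here is a minimal, self-contained C++ sketch of the
failure mode (not the actual RGW code: the store, the marker semantics,
and the caller loop are simplified, hypothetical stand-ins). The output
vector is reused across pages and never cleared, so every later call
re-returns the earlier pages and the caller over-counts, mirroring the
mismatched counts shown below:

// Minimal sketch (hypothetical names); build with: g++ -std=c++17 sketch.cc
#include <iostream>
#include <string>
#include <vector>

// Stand-in for the bucket index; the real code pages through RADOS.
static const std::vector<std::string> store = {"b1", "b2", "b3", "b4", "b5"};

// Analogue of RGWRadosUser::list_buckets(): fills one page of up to
// 'max' names into 'buckets' starting at 'marker', but never clears
// 'buckets' first, so repeated calls accumulate all earlier pages.
static bool list_buckets(size_t marker, size_t max,
                         std::vector<std::string>& buckets) {
  // buckets.clear();  // <-- the one-line upstream fix
  for (size_t i = marker; i < store.size() && i < marker + max; ++i)
    buckets.push_back(store[i]);
  return marker + max < store.size();  // true while more pages remain
}

int main() {
  std::vector<std::string> buckets;  // reused across pages, as in RGW
  size_t marker = 0, max = 2, counted = 0;
  bool more = true;
  while (more) {
    more = list_buckets(marker, max, buckets);
    counted += buckets.size();  // caller counts the whole list each page
    marker += max;
  }
  std::cout << counted << " counted vs " << store.size() << " actual\n";
  // Prints "11 counted vs 5 actual"; with the clear() it prints 5.
}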
------
$ dpkg -l | grep ceph
ii ceph 15.2.12-0ubuntu0.20.04.1 amd64 distributed storage and file system
ii ceph-base 15.2.12-0ubuntu0.20.04.1 amd64 common ceph daemon libraries and management tools
ii ceph-common 15.2.12-0ubuntu0.20.04.1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 15.2.12-0ubuntu0.20.04.1 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 15.2.12-0ubuntu0.20.04.1 amd64 manager for the ceph distributed file system
ii ceph-mgr-modules-core 15.2.12-0ubuntu0.20.04.1 all ceph manager modules which are always enabled
ii ceph-mon 15.2.12-0ubuntu0.20.04.1 amd64 monitor server for the ceph storage system
ii ceph-osd 15.2.12-0ubuntu0.20.04.1 amd64 OSD server for the ceph storage system
ii libcephfs2 15.2.12-0ubuntu0.20.04.1 amd64 Ceph distributed file system client library
ii python3-ceph-argparse 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 15.2.12-0ubuntu0.20.04.1 all Python 3 utility libraries for Ceph
ii python3-cephfs 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 libraries for the Ceph libcephfs library
$ sudo radosgw-admin bucket list | jq .[] | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
20572
$ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | select(.bucket=="bucket_1095")'
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
------------------------------------------------------------------------------
Fix proposed through https://github.com/ceph/ceph/pull/43381
diff --git a/src/rgw/rgw_sal.cc b/src/rgw/rgw_sal.cc
index 2b7a313ed91..65880a4757f 100644
--- a/src/rgw/rgw_sal.cc
+++ b/src/rgw/rgw_sal.cc
@@ -35,6 +35,7 @@ int RGWRadosUser::list_buckets(const string& marker, const string& end_marker,
   RGWUserBuckets ulist;
   bool is_truncated = false;
   int ret;

+  buckets.clear();
   ret = store->ctl()->user->list_buckets(info.user_id, marker, end_marker, max,
                                          need_stats, &ulist, &is_truncated);
------------------------------------------------------------------------------
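With the clear() at the top of list_buckets(), each call returns only
the current page rather than the current page plus everything fetched
before it, so the limit check counts every bucket exactly once.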
Tested and verified that the fix works:
$ sudo dpkg -l | grep ceph
ii ceph 15.2.14-0ubuntu0.20.04.3 amd64 distributed storage and file system
ii ceph-base 15.2.14-0ubuntu0.20.04.3 amd64 common ceph daemon libraries and management tools
ii ceph-common 15.2.14-0ubuntu0.20.04.3 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 15.2.14-0ubuntu0.20.04.3 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 15.2.14-0ubuntu0.20.04.3 amd64 manager for the ceph distributed file system
ii ceph-mgr-modules-core 15.2.14-0ubuntu0.20.04.3 all ceph manager modules which are always enabled
ii ceph-mon 15.2.14-0ubuntu0.20.04.3 amd64 monitor server for the ceph storage system
ii ceph-osd 15.2.14-0ubuntu0.20.04.3 amd64 OSD server for the ceph storage system
ii libcephfs2 15.2.14-0ubuntu0.20.04.3 amd64 Ceph distributed file system client library
ii python3-ceph-argparse 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 15.2.14-0ubuntu0.20.04.3 all Python 3 utility libraries for Ceph
ii python3-cephfs 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 libraries for the Ceph libcephfs library
ubuntu@crush-ceph-rgw01:~$ sudo apt-cache policy ceph
ceph:
Installed: 15.2.14-0ubuntu0.20.04.3
Candidate: 15.2.14-0ubuntu0.20.04.3
$ sudo radosgw-admin bucket list | jq .[] | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | select(.bucket=="bucket_1095")'
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
----------
[Impact]
Duplicated bucket name entries appear in customers' output when they
script the `radosgw-admin bucket limit check` command.
To reproduce:
Create more than 1000 buckets (the default value of max_entries) in a
cluster, then run 'radosgw-admin bucket limit check'. On Octopus,
duplicated entries appear in the output. For example,
$ sudo radosgw-admin bucket list | jq .[] | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
20572
[Test case]
Create more than 1000 buckets in a cluster, then run the 'radosgw-
admin bucket limit check' command. There should be no duplicated
entries in the output. Below is the correct output, where the two
counts match.
$ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
5572
$ sudo radosgw-admin bucket list | jq .[] | wc -l
5572
[Where problems could occur]
The duplicate entries could end up causing admins or even scripts to
assume that there are more buckets than there really are.
[Other Info]
- The patch was provided by Nikhil Kshirsagar (attached here)
- Upstream tracker: https://tracker.ceph.com/issues/52813
- Upstream PR: https://github.com/ceph/ceph/pull/43381
- Patched into Octopus upstream release.
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946211/+subscriptions