[Bug 1854129] [NEW] regression: recent eoan patch killed ceph osd pool create

Harry Coin hgcoin at gmail.com
Wed Nov 27 05:30:48 UTC 2019


Public bug reported:

After applying the recent normal eoan upgrades to an otherwise vanilla
system (4 OSD hosts x 6 rotating disks/host, the usual mons, mgrs, and
mds): the first time after a completely cold ceph cluster start (waiting
for health OK, otherwise idle), the command to create a ceph pool hangs,
but creates the pool. A second create-pool attempt hangs forever and
does not create the pool. I'm betting it has to do with the python3.7
patch just shipped, but that's just a guess; I haven't tried to create a
pool in a while.
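
To confirm the pool state while the CLI hangs, one can ask librados
directly from a second shell instead of going through the hanging ceph
wrapper. A minimal sketch using the python3-rados bindings that ship
alongside ceph-common, assuming the default conffile path; the pool name
matches the session below:

import rados  # python3-rados, the librados bindings used by /usr/bin/ceph

# Sanity check only, not part of the failing path: ask librados whether
# the hung create actually made the pool, bypassing the CLI wrapper.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
print(cluster.pool_exists('qfblockdevsnoc2'))  # True after the first attempt
cluster.shutdown()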

root@sysmon1:/etc/ceph# ceph --verbose osd pool create qfblockdevsnoc2 32
parsed_args: Namespace(admin_socket=None, block=False, cephconf=None, client_id=None, client_name=None, cluster=None, cluster_timeout=None, completion=False, help=False, input_file=None, output_file=None, output_format=None, period=1, setgroup=None, setuser=None, status=False, verbose=True, version=False, watch=False, watch_channel='cluster', watch_debug=False, watch_error=False, watch_info=False, watch_sec=False, watch_warn=False), childargs: ['osd', 'pool', 'create', 'qfblockdevsnoc2', '32']
cmd000: pg stat
cmd001: pg getmap
cmd002: pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
cmd003: pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}
cmd004: pg dump_pools_json
cmd005: pg ls-by-pool <poolstr> {<states> [<states>...]}
cmd006: pg ls-by-primary <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
cmd007: pg ls-by-osd <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
cmd008: pg ls {<int>} {<states> [<states>...]}
cmd009: pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}
cmd010: pg debug unfound_objects_exist|degraded_pgs_exist
cmd011: pg scrub <pgid>
cmd012: pg deep-scrub <pgid>
cmd013: pg repair <pgid>
cmd014: pg force-recovery <pgid> [<pgid>...]
cmd015: pg force-backfill <pgid> [<pgid>...]
cmd016: pg cancel-force-recovery <pgid> [<pgid>...]
cmd017: pg cancel-force-backfill <pgid> [<pgid>...]
cmd018: osd perf
cmd019: osd df {plain|tree} {class|name} {<filter>}
cmd020: osd blocked-by
cmd021: osd pool stats {<poolname>}
cmd022: osd pool scrub <poolname> [<poolname>...]
cmd023: osd pool deep-scrub <poolname> [<poolname>...]
cmd024: osd pool repair <poolname> [<poolname>...]
cmd025: osd pool force-recovery <poolname> [<poolname>...]
cmd026: osd pool force-backfill <poolname> [<poolname>...]
cmd027: osd pool cancel-force-recovery <poolname> [<poolname>...]
cmd028: osd pool cancel-force-backfill <poolname> [<poolname>...]
cmd029: osd reweight-by-utilization {<int>} {<float>} {<int>} {--no-increasing}
cmd030: osd test-reweight-by-utilization {<int>} {<float>} {<int>} {--no-increasing}
cmd031: osd reweight-by-pg {<int>} {<float>} {<int>} {<poolname> [<poolname>...]}
cmd032: osd test-reweight-by-pg {<int>} {<float>} {<int>} {<poolname> [<poolname>...]}
cmd033: osd destroy <osdname (id|osd.id)> {--force} {--yes-i-really-mean-it}
cmd034: osd purge <osdname (id|osd.id)> {--force} {--yes-i-really-mean-it}
cmd035: osd safe-to-destroy <ids> [<ids>...]
cmd036: osd ok-to-stop <ids> [<ids>...]
cmd037: osd scrub <who>
cmd038: osd deep-scrub <who>
cmd039: osd repair <who>
cmd040: service dump
cmd041: service status
cmd042: config show <who> {<key>}
cmd043: config show-with-defaults <who>
cmd044: device ls
cmd045: device info <devid>
cmd046: device ls-by-daemon <who>
cmd047: device ls-by-host <host>
cmd048: device set-life-expectancy <devid> <from> {<to>}
cmd049: device rm-life-expectancy <devid>
cmd050: balancer status
cmd051: balancer mode none|crush-compat|upmap
cmd052: balancer on
cmd053: balancer off
cmd054: balancer pool ls
cmd055: balancer pool add <pools> [<pools>...]
cmd056: balancer pool rm <pools> [<pools>...]
cmd057: balancer eval {<option>}
cmd058: balancer eval-verbose {<option>}
cmd059: balancer optimize <plan> {<pools> [<pools>...]}
cmd060: balancer show <plan>
cmd061: balancer rm <plan>
cmd062: balancer reset
cmd063: balancer dump <plan>
cmd064: balancer ls
cmd065: balancer execute <plan>
cmd066: crash info <id>
cmd067: crash ls
cmd068: crash post
cmd069: crash prune <keep>
cmd070: crash rm <id>
cmd071: crash stat
cmd072: crash json_report <hours>
cmd073: dashboard set-jwt-token-ttl <int>
cmd074: dashboard get-jwt-token-ttl
cmd075: dashboard create-self-signed-cert
cmd076: dashboard grafana dashboards update
cmd077: dashboard get-alertmanager-api-host
cmd078: dashboard set-alertmanager-api-host <value>
cmd079: dashboard reset-alertmanager-api-host
cmd080: dashboard get-audit-api-enabled
cmd081: dashboard set-audit-api-enabled <value>
cmd082: dashboard reset-audit-api-enabled
cmd083: dashboard get-audit-api-log-payload
cmd084: dashboard set-audit-api-log-payload <value>
cmd085: dashboard reset-audit-api-log-payload
cmd086: dashboard get-enable-browsable-api
cmd087: dashboard set-enable-browsable-api <value>
cmd088: dashboard reset-enable-browsable-api
cmd089: dashboard get-ganesha-clusters-rados-pool-namespace
cmd090: dashboard set-ganesha-clusters-rados-pool-namespace <value>
cmd091: dashboard reset-ganesha-clusters-rados-pool-namespace
cmd092: dashboard get-grafana-api-password
cmd093: dashboard set-grafana-api-password <value>
cmd094: dashboard reset-grafana-api-password
cmd095: dashboard get-grafana-api-url
cmd096: dashboard set-grafana-api-url <value>
cmd097: dashboard reset-grafana-api-url
cmd098: dashboard get-grafana-api-username
cmd099: dashboard set-grafana-api-username <value>
cmd100: dashboard reset-grafana-api-username
cmd101: dashboard get-grafana-update-dashboards
cmd102: dashboard set-grafana-update-dashboards <value>
cmd103: dashboard reset-grafana-update-dashboards
cmd104: dashboard get-iscsi-api-ssl-verification
cmd105: dashboard set-iscsi-api-ssl-verification <value>
cmd106: dashboard reset-iscsi-api-ssl-verification
cmd107: dashboard get-prometheus-api-host
cmd108: dashboard set-prometheus-api-host <value>
cmd109: dashboard reset-prometheus-api-host
cmd110: dashboard get-rest-requests-timeout
cmd111: dashboard set-rest-requests-timeout <int>
cmd112: dashboard reset-rest-requests-timeout
cmd113: dashboard get-rgw-api-access-key
cmd114: dashboard set-rgw-api-access-key <value>
cmd115: dashboard reset-rgw-api-access-key
cmd116: dashboard get-rgw-api-admin-resource
cmd117: dashboard set-rgw-api-admin-resource <value>
cmd118: dashboard reset-rgw-api-admin-resource
cmd119: dashboard get-rgw-api-host
cmd120: dashboard set-rgw-api-host <value>
cmd121: dashboard reset-rgw-api-host
cmd122: dashboard get-rgw-api-port
cmd123: dashboard set-rgw-api-port <int>
cmd124: dashboard reset-rgw-api-port
cmd125: dashboard get-rgw-api-scheme
cmd126: dashboard set-rgw-api-scheme <value>
cmd127: dashboard reset-rgw-api-scheme
cmd128: dashboard get-rgw-api-secret-key
cmd129: dashboard set-rgw-api-secret-key <value>
cmd130: dashboard reset-rgw-api-secret-key
cmd131: dashboard get-rgw-api-ssl-verify
cmd132: dashboard set-rgw-api-ssl-verify <value>
cmd133: dashboard reset-rgw-api-ssl-verify
cmd134: dashboard get-rgw-api-user-id
cmd135: dashboard set-rgw-api-user-id <value>
cmd136: dashboard reset-rgw-api-user-id
cmd137: dashboard sso enable saml2
cmd138: dashboard sso disable
cmd139: dashboard sso status
cmd140: dashboard sso show saml2
cmd141: dashboard sso setup saml2 <ceph_dashboard_base_url> <idp_metadata> {<idp_username_attribute>} {<idp_entity_id>} {<sp_x_509_cert>} {<sp_private_key>}
cmd142: dashboard set-login-credentials <username> <password>
cmd143: dashboard ac-role-show {<rolename>}
cmd144: dashboard ac-role-create <rolename> {<description>}
cmd145: dashboard ac-role-delete <rolename>
cmd146: dashboard ac-role-add-scope-perms <rolename> <scopename> <permissions> [<permissions>...]
cmd147: dashboard ac-role-del-scope-perms <rolename> <scopename>
cmd148: dashboard ac-user-show {<username>}
cmd149: dashboard ac-user-create <username> {<password>} {<rolename>} {<name>} {<email>}
cmd150: dashboard ac-user-delete <username>
cmd151: dashboard ac-user-set-roles <username> <roles> [<roles>...]
cmd152: dashboard ac-user-add-roles <username> <roles> [<roles>...]
cmd153: dashboard ac-user-del-roles <username> <roles> [<roles>...]
cmd154: dashboard ac-user-set-password <username> <password>
cmd155: dashboard ac-user-set-info <username> <name> <email>
cmd156: dashboard iscsi-gateway-list
cmd157: dashboard iscsi-gateway-add <service_url>
cmd158: dashboard iscsi-gateway-rm <name>
cmd159: dashboard feature enable|disable|status {rbd|mirroring|iscsi|cephfs|rgw [rbd|mirroring|iscsi|cephfs|rgw...]}
cmd160: deepsea config-set <key> <value>
cmd161: deepsea config-show
cmd162: device query-daemon-health-metrics <who>
cmd163: device scrape-daemon-health-metrics <who>
cmd164: device scrape-health-metrics {<devid>}
cmd165: device get-health-metrics <devid> {<sample>}
cmd166: device check-health
cmd167: device monitoring on
cmd168: device monitoring off
cmd169: device predict-life-expectancy <devid>
cmd170: device show-prediction-config
cmd171: device set-cloud-prediction-config <server> <user> <password> <certfile> {<port>}
cmd172: device debug metrics-forced
cmd173: device debug smart-forced
cmd174: diskprediction_cloud status
cmd175: influx config-set <key> <value>
cmd176: influx config-show
cmd177: influx send
cmd178: insights
cmd179: insights prune-health <hours>
cmd180: iostat
cmd181: orchestrator host add <host>
cmd182: orchestrator host rm <host>
cmd183: orchestrator host ls
cmd184: orchestrator device ls {<host> [<host>...]} {json|plain} {--refresh}
cmd185: orchestrator service ls {<host>} {mon|mgr|osd|mds|nfs|rgw|rbd-mirror} {<svc_id>} {json|plain}
cmd186: orchestrator osd create {<svc_arg>}
cmd187: orchestrator osd rm <svc_id> [<svc_id>...]
cmd188: orchestrator mds add <svc_arg>
cmd189: orchestrator rgw add <svc_arg>
cmd190: orchestrator nfs add <svc_arg> <pool> {<namespace>}
cmd191: orchestrator mds rm <svc_id>
cmd192: orchestrator rgw rm <svc_id>
cmd193: orchestrator nfs rm <svc_id>
cmd194: orchestrator nfs update <svc_id> <int>
cmd195: orchestrator service start|stop|reload <svc_type> <svc_name>
cmd196: orchestrator service-instance start|stop|reload <svc_type> <svc_id>
cmd197: orchestrator mgr update <int> {<hosts> [<hosts>...]}
cmd198: orchestrator mon update <int> {<hosts> [<hosts>...]}
cmd199: orchestrator set backend <module_name>
cmd200: orchestrator status
cmd201: osd pool autoscale-status
cmd202: progress
cmd203: progress json
cmd204: progress clear
cmd205: prometheus file_sd_config
cmd206: rbd perf image stats {<pool_spec>} {write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency}
cmd207: rbd perf image counters {<pool_spec>} {write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency}
cmd208: restful create-key <key_name>
cmd209: restful delete-key <key_name>
cmd210: restful list-keys
cmd211: restful create-self-signed-cert
cmd212: restful restart
cmd213: mgr self-test run
cmd214: mgr self-test background start <workload>
cmd215: mgr self-test background stop
cmd216: mgr self-test config get <key>
cmd217: mgr self-test config get_localized <key>
cmd218: mgr self-test remote
cmd219: mgr self-test module <module>
cmd220: mgr self-test health set <checks>
cmd221: mgr self-test health clear {<checks> [<checks>...]}
cmd222: mgr self-test insights_set_now_offset <hours>
cmd223: mgr self-test cluster-log <channel> <priority> <message>
cmd224: ssh set-ssh-config
cmd225: ssh clear-ssh-config
cmd226: fs status {<fs>}
cmd227: osd status {<bucket>}
cmd228: telegraf config-set <key> <value>
cmd229: telegraf config-show
cmd230: telegraf send
cmd231: telemetry status
cmd232: telemetry send
cmd233: telemetry show
cmd234: telemetry on
cmd235: telemetry off
cmd236: fs volume ls
cmd237: fs volume create <name> {<size>}
cmd238: fs volume rm <vol_name>
cmd239: fs subvolumegroup create <vol_name> <group_name> {<pool_layout>} {<mode>}
cmd240: fs subvolumegroup rm <vol_name> <group_name> {--force}
cmd241: fs subvolume create <vol_name> <sub_name> {<int>} {<group_name>} {<pool_layout>} {<mode>}
cmd242: fs subvolume rm <vol_name> <sub_name> {<group_name>} {--force}
cmd243: fs subvolume getpath <vol_name> <sub_name> {<group_name>}
cmd244: fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>
cmd245: fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> {--force}
cmd246: fs subvolume snapshot create <vol_name> <sub_name> <snap_name> {<group_name>}
cmd247: fs subvolume snapshot rm <vol_name> <sub_name> <snap_name> {<group_name>} {--force}
cmd248: zabbix config-set <key> <value>
cmd249: zabbix config-show
cmd250: zabbix send
cmd251: pg map <pgid>
cmd252: pg repeer <pgid>
cmd253: osd last-stat-seq <osdname (id|osd.id)>
cmd254: auth export {<entity>}
cmd255: auth get <entity>
cmd256: auth get-key <entity>
cmd257: auth print-key <entity>
cmd258: auth print_key <entity>
cmd259: auth list
cmd260: auth ls
cmd261: auth import
cmd262: auth add <entity> {<caps> [<caps>...]}
cmd263: auth get-or-create-key <entity> {<caps> [<caps>...]}
cmd264: auth get-or-create <entity> {<caps> [<caps>...]}
cmd265: fs authorize <filesystem> <entity> <caps> [<caps>...]
cmd266: auth caps <entity> <caps> [<caps>...]
cmd267: auth del <entity>
cmd268: auth rm <entity>
cmd269: compact
cmd270: scrub
cmd271: fsid
cmd272: log <logtext> [<logtext>...]
cmd273: log last {<int[1-]>} {debug|info|sec|warn|error} {*|cluster|audit}
cmd274: injectargs <injected_args> [<injected_args>...]
cmd275: status
cmd276: health {detail}
cmd277: time-sync-status
cmd278: df {detail}
cmd279: report {<tags> [<tags>...]}
cmd280: features
cmd281: quorum_status
cmd282: mon ok-to-stop <ids> [<ids>...]
cmd283: mon ok-to-add-offline
cmd284: mon ok-to-rm <id>
cmd285: mon_status
cmd286: sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
cmd287: heap dump|start_profiler|stop_profiler|release|stats
cmd288: quorum enter|exit
cmd289: tell <name (type.id)> <args> [<args>...]
cmd290: version
cmd291: node ls {all|osd|mon|mds|mgr}
cmd292: mon compact
cmd293: mon scrub
cmd294: mon sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
cmd295: mon metadata {<id>}
cmd296: mon count-metadata <property>
cmd297: mon versions
cmd298: versions
cmd299: mds stat
cmd300: mds dump {<int[0-]>}
cmd301: fs dump {<int[0-]>}
cmd302: mds getmap {<int[0-]>}
cmd303: mds metadata {<who>}
cmd304: mds count-metadata <property>
cmd305: mds versions
cmd306: mds tell <who> <args> [<args>...]
cmd307: mds compat show
cmd308: mds stop <role>
cmd309: mds deactivate <role>
cmd310: mds ok-to-stop <ids> [<ids>...]
cmd311: mds set_max_mds <int[0-]>
cmd312: mds set max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags <val> {--yes-i-really-mean-it}
cmd313: mds freeze <role_or_gid> <val>
cmd314: mds set_state <int[0-]> <int[0-20]>
cmd315: mds fail <role_or_gid>
cmd316: mds repaired <role>
cmd317: mds rm <int[0-]>
cmd318: mds rmfailed <role> {--yes-i-really-mean-it}
cmd319: mds cluster_down
cmd320: mds cluster_up
cmd321: mds compat rm_compat <int[0-]>
cmd322: mds compat rm_incompat <int[0-]>
cmd323: mds add_data_pool <pool>
cmd324: mds rm_data_pool <pool>
cmd325: mds remove_data_pool <pool>
cmd326: mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it}
cmd327: fs new <fs_name> <metadata> <data> {--force} {--allow-dangerous-metadata-overlay}
cmd328: fs fail <fs_name>
cmd329: fs rm <fs_name> {--yes-i-really-mean-it}
cmd330: fs reset <fs_name> {--yes-i-really-mean-it}
cmd331: fs ls
cmd332: fs get <fs_name>
cmd333: fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client <val> {--yes-i-really-mean-it}
cmd334: fs flag set enable_multiple <val> {--yes-i-really-mean-it}
cmd335: fs add_data_pool <fs_name> <pool>
cmd336: fs rm_data_pool <fs_name> <pool>
cmd337: fs set_default <fs_name>
cmd338: fs set-default <fs_name>
cmd339: mon dump {<int[0-]>}
cmd340: mon stat
cmd341: mon getmap {<int[0-]>}
cmd342: mon add <name> <IPaddr[:port]>
cmd343: mon rm <name>
cmd344: mon remove <name>
cmd345: mon feature ls {--with-value}
cmd346: mon feature set <feature_name> {--yes-i-really-mean-it}
cmd347: mon set-rank <name> <int>
cmd348: mon set-addrs <name> <addrs>
cmd349: mon enable-msgr2
cmd350: osd stat
cmd351: osd dump {<int[0-]>}
cmd352: osd tree {<int[0-]>} {up|down|in|out|destroyed [up|down|in|out|destroyed...]}
cmd353: osd tree-from {<int[0-]>} <bucket> {up|down|in|out|destroyed [up|down|in|out|destroyed...]}
cmd354: osd ls {<int[0-]>}
cmd355: osd getmap {<int[0-]>}
cmd356: osd getcrushmap {<int[0-]>}
cmd357: osd getmaxosd
cmd358: osd ls-tree {<int[0-]>} <name>
cmd359: osd find <osdname (id|osd.id)>
cmd360: osd metadata {<osdname (id|osd.id)>}
cmd361: osd count-metadata <property>
cmd362: osd versions
cmd363: osd numa-status
cmd364: osd map <poolname> <objectname> {<nspace>}
cmd365: osd lspools
cmd366: osd crush rule list
cmd367: osd crush rule ls
cmd368: osd crush rule ls-by-class <class>
cmd369: osd crush rule dump {<name>}
cmd370: osd crush dump
cmd371: osd setcrushmap {<int>}
cmd372: osd crush set {<int>}
cmd373: osd crush add-bucket <name> <type> {<args> [<args>...]}
cmd374: osd crush rename-bucket <srcname> <dstname>
cmd375: osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
cmd376: osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
cmd377: osd crush set-all-straw-buckets-to-straw2
cmd378: osd crush class create <class>
cmd379: osd crush class rm <class>
cmd380: osd crush set-device-class <class> <ids> [<ids>...]
cmd381: osd crush rm-device-class <ids> [<ids>...]
cmd382: osd crush class rename <srcname> <dstname>
cmd383: osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
cmd384: osd crush move <name> <args> [<args>...]
cmd385: osd crush swap-bucket <source> <dest> {--yes-i-really-mean-it}
cmd386: osd crush link <name> <args> [<args>...]
cmd387: osd crush rm <name> {<ancestor>}
cmd388: osd crush remove <name> {<ancestor>}
cmd389: osd crush unlink <name> {<ancestor>}
cmd390: osd crush reweight-all
cmd391: osd crush reweight <name> <float[0.0-]>
cmd392: osd crush reweight-subtree <name> <float[0.0-]>
cmd393: osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default
cmd394: osd crush set-tunable straw_calc_version <int>
cmd395: osd crush get-tunable straw_calc_version
cmd396: osd crush show-tunables
cmd397: osd crush rule create-simple <name> <root> <type> {firstn|indep}
cmd398: osd crush rule create-replicated <name> <root> <type> {<class>}
cmd399: osd crush rule create-erasure <name> {<profile>}
cmd400: osd crush rule rm <name>
cmd401: osd crush rule rename <srcname> <dstname>
cmd402: osd crush tree {--show-shadow}
cmd403: osd crush ls <node>
cmd404: osd crush class ls
cmd405: osd crush class ls-osd <class>
cmd406: osd crush get-device-class <ids> [<ids>...]
cmd407: osd crush weight-set ls
cmd408: osd crush weight-set dump
cmd409: osd crush weight-set create-compat
cmd410: osd crush weight-set create <poolname> flat|positional
cmd411: osd crush weight-set rm <poolname>
cmd412: osd crush weight-set rm-compat
cmd413: osd crush weight-set reweight <poolname> <item> <float[0.0-]> [<float[0.0-]>...]
cmd414: osd crush weight-set reweight-compat <item> <float[0.0-]> [<float[0.0-]>...]
cmd415: osd setmaxosd <int[0-]>
cmd416: osd set-full-ratio <float[0.0-1.0]>
cmd417: osd set-backfillfull-ratio <float[0.0-1.0]>
cmd418: osd set-nearfull-ratio <float[0.0-1.0]>
cmd419: osd get-require-min-compat-client
cmd420: osd set-require-min-compat-client <version> {--yes-i-really-mean-it}
cmd421: osd pause
cmd422: osd unpause
cmd423: osd erasure-code-profile set <name> {<profile> [<profile>...]} {--force}
cmd424: osd erasure-code-profile get <name>
cmd425: osd erasure-code-profile rm <name>
cmd426: osd erasure-code-profile ls
cmd427: osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|pglog_hardlimit {--yes-i-really-mean-it}
cmd428: osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim
cmd429: osd require-osd-release luminous|mimic|nautilus {--yes-i-really-mean-it}
cmd430: osd down <ids> [<ids>...]
cmd431: osd out <ids> [<ids>...]
cmd432: osd in <ids> [<ids>...]
cmd433: osd rm <ids> [<ids>...]
cmd434: osd add-noup <ids> [<ids>...]
cmd435: osd add-nodown <ids> [<ids>...]
cmd436: osd add-noin <ids> [<ids>...]
cmd437: osd add-noout <ids> [<ids>...]
cmd438: osd rm-noup <ids> [<ids>...]
cmd439: osd rm-nodown <ids> [<ids>...]
cmd440: osd rm-noin <ids> [<ids>...]
cmd441: osd rm-noout <ids> [<ids>...]
cmd442: osd set-group <flags> <who> [<who>...]
cmd443: osd unset-group <flags> <who> [<who>...]
cmd444: osd reweight <osdname (id|osd.id)> <float[0.0-1.0]>
cmd445: osd reweightn <weights>
cmd446: osd force-create-pg <pgid> {--yes-i-really-mean-it}
cmd447: osd pg-temp <pgid> {<osdname (id|osd.id)> [<osdname (id|osd.id)>...]}
cmd448: osd pg-upmap <pgid> <osdname (id|osd.id)> [<osdname (id|osd.id)>...]
cmd449: osd rm-pg-upmap <pgid>
cmd450: osd pg-upmap-items <pgid> <osdname (id|osd.id)> [<osdname (id|osd.id)>...]
cmd451: osd rm-pg-upmap-items <pgid>
cmd452: osd primary-temp <pgid> <osdname (id|osd.id)>
cmd453: osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
cmd454: osd destroy-actual <osdname (id|osd.id)> {--yes-i-really-mean-it}
cmd455: osd purge-new <osdname (id|osd.id)> {--yes-i-really-mean-it}
cmd456: osd purge-actual <osdname (id|osd.id)> {--yes-i-really-mean-it}
cmd457: osd lost <osdname (id|osd.id)> {--yes-i-really-mean-it}
cmd458: osd create {<uuid>} {<osdname (id|osd.id)>}
cmd459: osd new <uuid> {<osdname (id|osd.id)>}
cmd460: osd blacklist add|rm <EntityAddr> {<float[0.0-]>}
cmd461: osd blacklist ls
cmd462: osd blacklist clear
cmd463: osd pool mksnap <poolname> <snap>
cmd464: osd pool rmsnap <poolname> <snap>
cmd465: osd pool ls {detail}
cmd466: osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>} {<int>} {<int[0-]>} {<int[0-]>} {<float[0.0-1.0]>}
cmd467: osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it} {--yes-i-really-really-mean-it-not-faking}
cmd468: osd pool rm <poolname> {<poolname>} {--yes-i-really-really-mean-it} {--yes-i-really-really-mean-it-not-faking}
cmd469: osd pool rename <poolname> <poolname>
cmd470: osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_size_ratio
cmd471: osd pool set <poolname> size|min_size|pg_num|pgp_num|pgp_num_actual|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_size_ratio <val> {--yes-i-really-mean-it}
cmd472: osd pool set-quota <poolname> max_objects|max_bytes <val>
cmd473: osd pool get-quota <poolname>
cmd474: osd pool application enable <poolname> <app> {--yes-i-really-mean-it}
cmd475: osd pool application disable <poolname> <app> {--yes-i-really-mean-it}
cmd476: osd pool application set <poolname> <app> <key> <value>
cmd477: osd pool application rm <poolname> <app> <key>
cmd478: osd pool application get {<poolname>} {<app>} {<key>}
cmd479: osd utilization
cmd480: osd tier add <poolname> <poolname> {--force-nonempty}
cmd481: osd tier rm <poolname> <poolname>
cmd482: osd tier remove <poolname> <poolname>
cmd483: osd tier cache-mode <poolname> none|writeback|forward|readonly|readforward|proxy|readproxy {--yes-i-really-mean-it}
cmd484: osd tier set-overlay <poolname> <poolname>
cmd485: osd tier rm-overlay <poolname>
cmd486: osd tier remove-overlay <poolname>
cmd487: osd tier add-cache <poolname> <poolname> <int[0-]>
cmd488: config-key get <key>
cmd489: config-key set <key> {<val>}
cmd490: config-key put <key> {<val>}
cmd491: config-key del <key>
cmd492: config-key rm <key>
cmd493: config-key exists <key>
cmd494: config-key list
cmd495: config-key ls
cmd496: config-key dump {<key>}
cmd497: mgr dump {<int[0-]>}
cmd498: mgr fail <who>
cmd499: mgr module ls
cmd500: mgr services
cmd501: mgr module enable <module> {--force}
cmd502: mgr module disable <module>
cmd503: mgr metadata {<who>}
cmd504: mgr count-metadata <property>
cmd505: mgr versions
cmd506: config set <who> <name> <value> {--force}
cmd507: config rm <who> <name>
cmd508: config get <who> {<key>}
cmd509: config dump
cmd510: config help <key>
cmd511: config ls
cmd512: config assimilate-conf
cmd513: config log {<int>}
cmd514: config reset <int>
cmd515: config generate-minimal-conf
cmd516: smart {<devid>}
validate_command: osd pool create qfblockdevsnoc2 32
better match: 0.5 > 0: pg stat 
better match: 0.5 > 0.5: pg getmap
better match: 0.5 > 0.5: pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
better match: 0.5 > 0.5: pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}
better match: 0.5 > 0.5: pg dump_pools_json
better match: 0.5 > 0.5: pg ls-by-pool <poolstr> {<states> [<states>...]}
better match: 0.5 > 0.5: pg ls-by-primary <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
better match: 0.5 > 0.5: pg ls-by-osd <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
better match: 0.5 > 0.5: pg ls {<int>} {<states> [<states>...]}
better match: 0.5 > 0.5: pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}
better match: 0.5 > 0.5: pg debug unfound_objects_exist|degraded_pgs_exist
better match: 0.5 > 0.5: pg scrub <pgid>
better match: 0.5 > 0.5: pg deep-scrub <pgid>
better match: 0.5 > 0.5: pg repair <pgid>
better match: 0.5 > 0.5: pg force-recovery <pgid> [<pgid>...]
better match: 0.5 > 0.5: pg force-backfill <pgid> [<pgid>...]
better match: 0.5 > 0.5: pg cancel-force-recovery <pgid> [<pgid>...]
better match: 0.5 > 0.5: pg cancel-force-backfill <pgid> [<pgid>...]
better match: 1.5 > 0.5: osd perf
better match: 1.5 > 1.5: osd df {plain|tree} {class|name} {<filter>}
better match: 1.5 > 1.5: osd blocked-by
better match: 2.5 > 1.5: osd pool stats {<poolname>}
better match: 2.5 > 2.5: osd pool scrub <poolname> [<poolname>...]
better match: 2.5 > 2.5: osd pool deep-scrub <poolname> [<poolname>...]
better match: 2.5 > 2.5: osd pool repair <poolname> [<poolname>...]
better match: 2.5 > 2.5: osd pool force-recovery <poolname> [<poolname>...]
better match: 2.5 > 2.5: osd pool force-backfill <poolname> [<poolname>...]
better match: 2.5 > 2.5: osd pool cancel-force-recovery <poolname> [<poolname>...]
better match: 2.5 > 2.5: osd pool cancel-force-backfill <poolname> [<poolname>...]
better match: 2.5 > 2.5: osd pool autoscale-status
better match: 2.5 > 2.5: osd pool mksnap <poolname> <snap>
better match: 2.5 > 2.5: osd pool rmsnap <poolname> <snap>
better match: 2.5 > 2.5: osd pool ls {detail}
better match: 5.5 > 2.5: osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>} {<int>} {<int[0-]>} {<int[0-]>} {<float[0.0-1.0]>}
bestcmds_sorted:
[{'flags': 0,
  'help': 'create pool',
  'module': 'osd',
  'perm': 'rw',
  'sig': [argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=osd),
          argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=pool),
          argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=create),
          argdesc(<class 'ceph_argparse.CephPoolname'>, req=True, name=pool, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephInt'>, req=True, name=pg_num, n=1, numseen=0, range=0),
          argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=pgp_num, n=1, numseen=0, range=0),
          argdesc(<class 'ceph_argparse.CephChoices'>, req=False, name=pool_type, n=1, numseen=0, strings=replicated|erasure),
          argdesc(<class 'ceph_argparse.CephString'>, req=False, name=erasure_code_profile, n=1, numseen=0, goodchars=[A-Za-z0-9-_.]),
          argdesc(<class 'ceph_argparse.CephString'>, req=False, name=rule, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=expected_num_objects, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=size, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=pg_num_min, n=1, numseen=0, range=0),
          argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=target_size_bytes, n=1, numseen=0, range=0),
          argdesc(<class 'ceph_argparse.CephFloat'>, req=False, name=target_size_ratio, n=1, numseen=0, range=0|1)]}]
Submitting command:  {'prefix': 'osd pool create', 'pool': 'qfblockdevsnoc2', 'pg_num': 32}
... minutes pass here, with no output of any sort ...
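
For reference, the call that never returns is ceph_argparse's
run_in_thread() (it shows up in the ^C traceback further below). A
minimal sketch of the shape I believe it has, reconstructed from the
traceback rather than the actual ceph_argparse source:

import threading

# Sketch only: the mon_command call runs in a worker thread while the
# main thread blocks in Thread.join(). If the monitor never answers,
# the join keeps waiting and the CLI prints nothing, matching the
# silent hang above.
def run_in_thread(func, *args, timeout=None):
    result = []
    t = threading.Thread(target=lambda: result.append(func(*args)))
    t.daemon = True
    t.start()
    t.join(timeout=timeout)  # ^C while blocked here raises KeyboardInterrupt
    return result[0] if result else None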

tail of /var/log/ceph/ceph-mon.xxx.log:
2019-11-26 23:24:50.474 7f45164f9700  0 log_channel(cluster) log [INF] : mon.nocsupport1 calling monitor election
2019-11-26 23:24:50.478 7f45164f9700  1 mon.nocsupport1@0(electing).elector(37321) init, last seen epoch 37321, mid-election, bumping
2019-11-26 23:24:50.654 7f45164f9700 -1 mon.nocsupport1@0(electing) e9 failed to get devid for : fallback method has serial ''but no model
2019-11-26 23:24:50.742 7f45164f9700  0 log_channel(cluster) log [INF] : mon.nocsupport1 calling monitor election
2019-11-26 23:24:50.742 7f45164f9700  1 mon.nocsupport1@0(electing).elector(37325) init, last seen epoch 37325, mid-election, bumping
2019-11-26 23:24:50.906 7f45164f9700 -1 mon.nocsupport1@0(electing) e9 failed to get devid for : fallback method has serial ''but no model
2019-11-26 23:24:51.054 7f45164f9700  0 log_channel(cluster) log [INF] : mon.nocsupport1 is new leader, mons nocsupport1,nocsupport2,nocsupport4,nocsupport3,sysmon1 in quorum (ranks 0,1,2,3,4)
2019-11-26 23:24:51.290 7f45164f9700  0 mon.nocsupport1@0(leader) e9 handle_command mon_command({"prefix": "osd pool create", "pool": "qfblockdevsnoc2", "pg_num": 32} v 0) v1
2019-11-26 23:24:51.290 7f45164f9700  0 log_channel(audit) log [INF] : from='client.? ' entity='' cmd=[{"prefix": "osd pool create", "pool": "qfblockdevsnoc2", "pg_num": 32}]: dispatch
2019-11-26 23:24:51.658 7f4512cf2700  0 log_channel(cluster) log [DBG] : monmap e9: 5 mons at {nocsupport1=[v2:[fc00:1002:c7::b1]:3300/0,v1:[fc00:1002:c7::b1]:6789/0],nocsupport2=[v2:[fc00:1002:c7::b2]:3300/0,v1:[fc00:1002:c7::b2]:6789/0],nocsupport3=[v2:[fc00:1002:c7::b3]:3300/0,v1:[fc00:1002:c7::b3]:6789/0],nocsupport4=[v2:[fc00:1002:c7::b4]:3300/0,v1:[fc00:1002:c7::b4]:6789/0],sysmon1=[v2:[fc00:1002:c7::152]:3300/0,v1:[fc00:1002:c7::152]:6789/0]}
2019-11-26 23:24:51.662 7f4512cf2700  0 log_channel(cluster) log [DBG] : fsmap qflibraryfs:1 {0=nocsupport3=up:active} 3 up:standby
2019-11-26 23:24:51.662 7f4512cf2700  0 log_channel(cluster) log [DBG] : osdmap e47268: 24 total, 24 up, 24 in
2019-11-26 23:24:51.682 7f4512cf2700  0 log_channel(cluster) log [DBG] : mgrmap e1379: nocsupport3(active, since 8h), standbys: nocsupport1, nocsupport4, nocsupport2
2019-11-26 23:24:51.686 7f4512cf2700  0 log_channel(cluster) log [WRN] : overall HEALTH_WARN Degraded data redundancy: 363722/20360571 objects degraded (1.786%), 83 pgs degraded, 83 pgs undersized

When I ^C after minutes of waiting:

^CInterrupted
Traceback (most recent call last):
  File "/usr/bin/ceph", line 573, in do_command
    argdict=valid_dict, inbuf=inbuf)
  File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1459, in json_command
    inbuf, timeout, verbose)
  File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1329, in send_command_retry
    return send_command(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1389, in send_command
    cluster.mon_command, cmd, inbuf, timeout=timeout)
  File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1311, in run_in_thread
    t.join(timeout=timeout)
  File "/usr/lib/python3.7/threading.py", line 1048, in join
    self._wait_for_tstate_lock(timeout=max(timeout, 0))
  File "/usr/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/ceph", line 1263, in <module>
    retval = main()
  File "/usr/bin/ceph", line 1194, in main
    verbose)
  File "/usr/bin/ceph", line 619, in new_style_command
    ret, outbuf, outs = do_command(parsed_args, target, cmdargs, sigdict, inbuf, verbose)
  File "/usr/bin/ceph", line 593, in do_command
    return ret, '', ''
UnboundLocalError: local variable 'ret' referenced before assignment
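
The UnboundLocalError itself looks like a secondary bug in
/usr/bin/ceph's ^C handling. A minimal sketch of the pattern, my
reconstruction from the traceback rather than the actual source; the
json_command stub here just stands in for the mon round trip that
never returns:

import time

def json_command():
    time.sleep(3600)  # stand-in for the mon_command call that never returns
    return 0, '', ''

def do_command():
    try:
        ret, outbuf, outs = json_command()  # ^C raises KeyboardInterrupt here
    except KeyboardInterrupt:
        print('Interrupted')
        # 'ret' was never bound because the assignment was interrupted,
        # so this return raises UnboundLocalError while the
        # KeyboardInterrupt is still being handled, as in the traceback.
        return ret, '', ''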
root@sysmon1:/etc/ceph# uname -a
Linux sysmon1.1.quietfountain.com 5.3.0-23-generic #25-Ubuntu SMP Tue Nov 12 09:22:33 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
root@sysmon1:/etc/ceph# apt list ceph-common
Listing... Done
ceph-common/eoan,now 14.2.2-0ubuntu3 amd64 [installed,automatic]

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1854129

Title:
  regression: recent eoan patch killed ceph osd pool create

Status in ceph package in Ubuntu:
  New

Bug description:
  After applying the recent normal eoan upgrades to an otherwise vanilla
  system (4 OSD hosts x 6 rotating disks/host, the usual mons, mgrs, and
  mds): the first time after a completely cold ceph cluster start
  (waiting for health OK, otherwise idle), the command to create a ceph
  pool hangs, but creates the pool. A second create-pool attempt hangs
  forever and does not create the pool. I'm betting it has to do with
  the python3.7 patch just shipped, but that's just a guess; I haven't
  tried to create a pool in a while.

  cmd477: osd pool application rm <poolname> <app> <key>
  cmd478: osd pool application get {<poolname>} {<app>} {<key>}
  cmd479: osd utilization
  cmd480: osd tier add <poolname> <poolname> {--force-nonempty}
  cmd481: osd tier rm <poolname> <poolname>
  cmd482: osd tier remove <poolname> <poolname>
  cmd483: osd tier cache-mode <poolname> none|writeback|forward|readonly|readforward|proxy|readproxy {--yes-i-really-mean-it}
  cmd484: osd tier set-overlay <poolname> <poolname>
  cmd485: osd tier rm-overlay <poolname>
  cmd486: osd tier remove-overlay <poolname>
  cmd487: osd tier add-cache <poolname> <poolname> <int[0-]>
  cmd488: config-key get <key>
  cmd489: config-key set <key> {<val>}
  cmd490: config-key put <key> {<val>}
  cmd491: config-key del <key>
  cmd492: config-key rm <key>
  cmd493: config-key exists <key>
  cmd494: config-key list
  cmd495: config-key ls
  cmd496: config-key dump {<key>}
  cmd497: mgr dump {<int[0-]>}
  cmd498: mgr fail <who>
  cmd499: mgr module ls
  cmd500: mgr services
  cmd501: mgr module enable <module> {--force}
  cmd502: mgr module disable <module>
  cmd503: mgr metadata {<who>}
  cmd504: mgr count-metadata <property>
  cmd505: mgr versions
  cmd506: config set <who> <name> <value> {--force}
  cmd507: config rm <who> <name>
  cmd508: config get <who> {<key>}
  cmd509: config dump
  cmd510: config help <key>
  cmd511: config ls
  cmd512: config assimilate-conf
  cmd513: config log {<int>}
  cmd514: config reset <int>
  cmd515: config generate-minimal-conf
  cmd516: smart {<devid>}
  validate_command: osd pool create qfblockdevsnoc2 32
  better match: 0.5 > 0: pg stat 
  better match: 0.5 > 0.5: pg getmap
  better match: 0.5 > 0.5: pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
  better match: 0.5 > 0.5: pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}
  better match: 0.5 > 0.5: pg dump_pools_json
  better match: 0.5 > 0.5: pg ls-by-pool <poolstr> {<states> [<states>...]}
  better match: 0.5 > 0.5: pg ls-by-primary <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
  better match: 0.5 > 0.5: pg ls-by-osd <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
  better match: 0.5 > 0.5: pg ls {<int>} {<states> [<states>...]}
  better match: 0.5 > 0.5: pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}
  better match: 0.5 > 0.5: pg debug unfound_objects_exist|degraded_pgs_exist
  better match: 0.5 > 0.5: pg scrub <pgid>
  better match: 0.5 > 0.5: pg deep-scrub <pgid>
  better match: 0.5 > 0.5: pg repair <pgid>
  better match: 0.5 > 0.5: pg force-recovery <pgid> [<pgid>...]
  better match: 0.5 > 0.5: pg force-backfill <pgid> [<pgid>...]
  better match: 0.5 > 0.5: pg cancel-force-recovery <pgid> [<pgid>...]
  better match: 0.5 > 0.5: pg cancel-force-backfill <pgid> [<pgid>...]
  better match: 1.5 > 0.5: osd perf
  better match: 1.5 > 1.5: osd df {plain|tree} {class|name} {<filter>}
  better match: 1.5 > 1.5: osd blocked-by
  better match: 2.5 > 1.5: osd pool stats {<poolname>}
  better match: 2.5 > 2.5: osd pool scrub <poolname> [<poolname>...]
  better match: 2.5 > 2.5: osd pool deep-scrub <poolname> [<poolname>...]
  better match: 2.5 > 2.5: osd pool repair <poolname> [<poolname>...]
  better match: 2.5 > 2.5: osd pool force-recovery <poolname> [<poolname>...]
  better match: 2.5 > 2.5: osd pool force-backfill <poolname> [<poolname>...]
  better match: 2.5 > 2.5: osd pool cancel-force-recovery <poolname> [<poolname>...]
  better match: 2.5 > 2.5: osd pool cancel-force-backfill <poolname> [<poolname>...]
  better match: 2.5 > 2.5: osd pool autoscale-status
  better match: 2.5 > 2.5: osd pool mksnap <poolname> <snap>
  better match: 2.5 > 2.5: osd pool rmsnap <poolname> <snap>
  better match: 2.5 > 2.5: osd pool ls {detail}
  better match: 5.5 > 2.5: osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>} {<int>} {<int[0-]>} {<int[0-]>} {<float[0.0-1.0]>}
  bestcmds_sorted:
  [{'flags': 0,
  'help': 'create pool',
  'module': 'osd',
  'perm': 'rw',
  'sig': [argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=osd),
  argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=pool),
  argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=create),
  argdesc(<class 'ceph_argparse.CephPoolname'>, req=True, name=pool, n=1, numseen=0),
  argdesc(<class 'ceph_argparse.CephInt'>, req=True, name=pg_num, n=1, numseen=0, range=0),
  argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=pgp_num, n=1, numseen=0, range=0),
  argdesc(<class 'ceph_argparse.CephChoices'>, req=False, name=pool_type, n=1, numseen=0, strings=replicated|erasure),
  argdesc(<class 'ceph_argparse.CephString'>, req=False, name=erasure_code_profile, n=1, numseen=0, goodchars=[A-Za-z0-9-_.]),
  argdesc(<class 'ceph_argparse.CephString'>, req=False, name=rule, n=1, numseen=0),
  argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=expected_num_objects, n=1, numseen=0),
  argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=size, n=1, numseen=0),
  argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=pg_num_min, n=1, numseen=0, range=0),
  argdesc(<class 'ceph_argparse.CephInt'>, req=False, name=target_size_bytes, n=1, numseen=0, range=0),
  argdesc(<class 'ceph_argparse.CephFloat'>, req=False, name=target_size_ratio, n=1, numseen=0, range=0|1)]}]
  Submitting command:  {'prefix': 'osd pool create', 'pool': 'qfblockdevsnoc2', 'pg_num': 32}
  ... minutes pass here, with no output of any sort ...
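  For what it's worth, the hang can be probed without the /usr/bin/ceph
  wrapper by driving the same mon command through the python3-rados
  bindings. A minimal sketch (my own, not from the ceph tooling; it assumes
  /etc/ceph/ceph.conf and a readable admin keyring, and reuses the pool
  name from above):

  import json
  import rados

  # Reproduce the hanging mon command without the ceph CLI wrapper.
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()

  cmd = json.dumps({'prefix': 'osd pool create',
                    'pool': 'qfblockdevsnoc2',
                    'pg_num': 32})

  # timeout=30 makes mon_command return instead of blocking forever,
  # which helps tell a mon-side stall apart from a client-side one.
  ret, outbuf, outs = cluster.mon_command(cmd, b'', timeout=30)
  print('ret=%d outs=%r' % (ret, outs))
  cluster.shutdown()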

  tail of /var/log/ceph/ceph-mon.xxx.log:
  2019-11-26 23:24:50.474 7f45164f9700  0 log_channel(cluster) log [INF] : mon.nocsupport1 calling monitor election
  2019-11-26 23:24:50.478 7f45164f9700  1 mon.nocsupport1 at 0(electing).elector(37321) init, last seen epoch 37321, mid-election, bumping
  2019-11-26 23:24:50.654 7f45164f9700 -1 mon.nocsupport1 at 0(electing) e9 failed to get devid for : fallback method has serial ''but no model
  2019-11-26 23:24:50.742 7f45164f9700  0 log_channel(cluster) log [INF] : mon.nocsupport1 calling monitor election
  2019-11-26 23:24:50.742 7f45164f9700  1 mon.nocsupport1 at 0(electing).elector(37325) init, last seen epoch 37325, mid-election, bumping
  2019-11-26 23:24:50.906 7f45164f9700 -1 mon.nocsupport1 at 0(electing) e9 failed to get devid for : fallback method has serial ''but no model
  2019-11-26 23:24:51.054 7f45164f9700  0 log_channel(cluster) log [INF] : mon.nocsupport1 is new leader, mons nocsupport1,nocsupport2,nocsupport4,nocsupport3,sysmon1 in quorum (ranks 0,1,2,3,4)
  2019-11-26 23:24:51.290 7f45164f9700  0 mon.nocsupport1 at 0(leader) e9 handle_command mon_command({"prefix": "osd pool create", "pool": "qfblockdevsnoc2", "pg_num": 32} v 0) v1
  2019-11-26 23:24:51.290 7f45164f9700  0 log_channel(audit) log [INF] : from='client.? ' entity='' cmd=[{"prefix": "osd pool create", "pool": "qfblockdevsnoc2", "pg_num": 32}]: dispatch
  2019-11-26 23:24:51.658 7f4512cf2700  0 log_channel(cluster) log [DBG] : monmap e9: 5 mons at {nocsupport1=[v2:[fc00:1002:c7::b1]:3300/0,v1:[fc00:1002:c7::b1]:6789/0],nocsupport2=[v2:[fc00:1002:c7::b2]:3300/0,v1:[fc00:1002:c7::b2]:6789/0],nocsupport3=[v2:[fc00:1002:c7::b3]:3300/0,v1:[fc00:1002:c7::b3]:6789/0],nocsupport4=[v2:[fc00:1002:c7::b4]:3300/0,v1:[fc00:1002:c7::b4]:6789/0],sysmon1=[v2:[fc00:1002:c7::152]:3300/0,v1:[fc00:1002:c7::152]:6789/0]}
  2019-11-26 23:24:51.662 7f4512cf2700  0 log_channel(cluster) log [DBG] : fsmap qflibraryfs:1 {0=nocsupport3=up:active} 3 up:standby
  2019-11-26 23:24:51.662 7f4512cf2700  0 log_channel(cluster) log [DBG] : osdmap e47268: 24 total, 24 up, 24 in
  2019-11-26 23:24:51.682 7f4512cf2700  0 log_channel(cluster) log [DBG] : mgrmap e1379: nocsupport3(active, since 8h), standbys: nocsupport1, nocsupport4, nocsupport2
  2019-11-26 23:24:51.686 7f4512cf2700  0 log_channel(cluster) log [WRN] : overall HEALTH_WARN Degraded data redundancy: 363722/20360571 objects degraded (1.786%), 83 pgs degraded, 83 pgs undersized

  When I press ^C after minutes of waiting:

  ^CInterrupted
  Traceback (most recent call last):
    File "/usr/bin/ceph", line 573, in do_command
      argdict=valid_dict, inbuf=inbuf)
    File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1459, in json_command
      inbuf, timeout, verbose)
    File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1329, in send_command_retry
      return send_command(*args, **kwargs)
    File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1389, in send_command
      cluster.mon_command, cmd, inbuf, timeout=timeout)
    File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1311, in run_in_thread
      t.join(timeout=timeout)
    File "/usr/lib/python3.7/threading.py", line 1048, in join
      self._wait_for_tstate_lock(timeout=max(timeout, 0))
    File "/usr/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
      elif lock.acquire(block, timeout):
  KeyboardInterrupt

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/usr/bin/ceph", line 1263, in <module>
      retval = main()
    File "/usr/bin/ceph", line 1194, in main
      verbose)
    File "/usr/bin/ceph", line 619, in new_style_command
      ret, outbuf, outs = do_command(parsed_args, target, cmdargs, sigdict, inbuf, verbose)
    File "/usr/bin/ceph", line 593, in do_command
      return ret, '', ''
  UnboundLocalError: local variable 'ret' referenced before assignment
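
  Separately, the UnboundLocalError at the end looks like a small bug in
  /usr/bin/ceph's ^C handling rather than the hang itself: the interrupt
  arrives while json_command() is still blocked, before 'ret' is ever
  assigned, and the cleanup path at line 593 then returns the unbound
  name. A toy illustration of that shape (not the actual ceph source,
  just the pattern the chained traceback implies; press ^C while it
  sleeps to reproduce):

  import time

  def json_command():
      # Stand-in for ceph_argparse.json_command(); blocks the way the
      # real mon_command call did here.
      time.sleep(600)
      return 0, b'', ''

  def do_command():
      try:
          # ^C lands inside json_command(), before 'ret' is ever bound.
          ret, outbuf, outs = json_command()
      except KeyboardInterrupt:
          print('Interrupted')
          # Mirrors /usr/bin/ceph line 593: returning a never-assigned
          # name raises UnboundLocalError, chained onto the
          # KeyboardInterrupt ("During handling of the above exception,
          # another exception occurred").
          return ret, '', ''

  if __name__ == '__main__':
      do_command()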
  root at sysmon1:/etc/ceph# uname -a
  Linux sysmon1.1.quietfountain.com 5.3.0-23-generic #25-Ubuntu SMP Tue Nov 12 09:22:33 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  root at sysmon1:/etc/ceph# apt list ceph-common
  Listing... Done
  ceph-common/eoan,now 14.2.2-0ubuntu3 amd64 [installed,automatic]
