v17.2.8 (open, 65% complete): 72 issues (47 closed, 25 open)

Related issues:
- Bug #63918: 17.2.7 ceph-volume errors out if no valid s
- bluestore - Bug #64444: No Valid allocation info on disk (empty file)
- ceph-volume - Bug #64560: ceph-volume: when create osd, vgcreate stderr failed to find PV
- Orchestrator - Bug #64424: Ceph orch unsuitable for stateless / RAM-booted hosts
- RADOS - Bug #64562: Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
- RADOS - Bug #64997: There is always an osd process that takes up high cpu
- rgw - Bug #61710: quincy/pacific: PUT requests during reshard of versioned bucket fail with 404 and leave behind dark data
- rgw - Bug #64527: Radosgw 504 timeouts & Garbage collection is frozen
v18.2.3 (open, 75% complete): 74 issues (56 closed, 18 open)

Related issues:
- Bug #64295: Ceph exporter does not produce usable RGW metrics in k8 envs
- CephFS - Bug #64441: reef: qa: add upgrade testing from minor release of reef (v18.2.[01]) to reef HEAD
- Messengers - Documentation #65537: RDMA support
- Dashboard - Bug #65698: mgr/dashboard: RBD snapshots cloned (format v2) and then deleted causes Not Found/404 in the source RBD Image
- Orchestrator - Bug #65691: cephadm: set "osd - profile rbd" for nvmeof service
- Orchestrator - Bug #65739: Cephadm adopt doesn't support "--no-cgroups-split" flag
- rgw - Bug #64999: Slow RGW multisite sync due to "304 Not Modified" responses on primary zone
- rgw - Bug #65794: Ceph Reef RGW error response fails to be parsed during awscli create-bucket
- rgw - Feature #65050: Add alternative way for providing user name/password for Kafka endpoint authentication
- rgw - Feature #65178: Change default stripe size to 5MB to match multipart uploads
v19.1.0 (open, 84% complete): 69 issues (58 closed, 11 open)

Related issues:
- bluestore - Bug #63308: crash: ZonedAllocator::ZonedAllocator
- CephFS - Bug #55725: MDS allows a (kernel) client to exceed the xattrs key/value limits
- CephFS - Bug #56067: Cephfs data loss with root_squash enabled
- CephFS - Bug #64490: mds: some request errors come from errno.h rather than fs_types.h
- CephFS - Bug #64503: client: log message when unmount call is received
- CephFS - Bug #64748: reef: snaptest-git-ceph.sh failure
- CephFS - Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- Dashboard - Feature #64890: mgr/dashboard: update NVMe-oF API
- Dashboard - Feature #65268: mgr/dashboard: update NVMe-oF API "listener add" sync
- Orchestrator - Bug #65387: cephadm: Unable to use gather-facts without podman/docker installed
- Orchestrator - Bug #65703: qa/suites/fs/upgrade: Command failed ... ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1
- RADOS - Bug #62338: osd: choose_async_recovery_ec may select an acting set < min_size
- RADOS - Bug #65591: Pool MAX_AVAIL goes UP when an OSD is marked down+in
- rgw - Bug #65243: RGW/s3select: several issues, s3select related, some caused a crash.
- rgw - Feature #64190: support lifecycle NewerNoncurrentVersions in NoncurrentVersionExpiration
- rgw - Feature #65131: perf counters for CreateMultipartUpload, AbortMultipartUpload, CompleteMultipartUpload
v20.0.0 (open, T release, 13% complete): 157 issues (18 closed, 139 open)

Related issues:
- Bug #63494: all: daemonizing may release CephContext::_fork_watchers_lock when it's already unlocked
- Bug #65612: qa: logrotate fails when state file is already locked
- Feature #65747: common/admin_socket: support saving json output to a file local to the daemon
- bluestore - Bug #52513: BlueStore.cc: 12391: ceph_abort_msg("unexpected error") on operation 15
- bluestore - Bug #58274: BlueStore::collection_list becomes extremely slow due to unbounded rocksdb iteration
- bluestore - Bug #64511: kv/RocksDBStore: rocksdb_cf_compact_on_deletion has no effect on the default column family
- bluestore - Bug #64533: BlueFS: l_bluefs_log_compactions is counted twice in sync log compaction
- bluestore - Bug #65678: Cannot use BtreeAllocator for bluestore or bluefs
- CephFS - Bug #40159: mds: openfiletable prefetching large amounts of inodes lead to mds start failure
- CephFS - Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
- CephFS - Bug #48562: qa: scrub - object missing on disk; some files may be lost
- CephFS - Bug #50821: qa: untar_snap_rm failure during mds thrashing
- CephFS - Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
- CephFS - Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
- CephFS - Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- CephFS - Bug #62123: mds: detect out-of-order locking
- CephFS - Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- CephFS - Bug #62664: ceph-fuse: failed to remount for kernel dentry trimming; quitting!
- CephFS - Bug #63866: mount command returning misleading error message
- CephFS - Bug #63931: qa: test_mirroring_init_failure_with_recovery failure
- CephFS - Bug #64008: mds: CInode::item_caps used in two different lists
- CephFS - Bug #64064: mds config `mds_log_max_segments` throws error for value -1
- CephFS - Bug #64198: mds: Fcb caps issued to clients when filelock is xlocked
- CephFS - Bug #64477: pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with duplicated session uuid 'ganesha-nfs.foo.XXX' denied
- CephFS - Bug #64486: qa: enhance labeled perf counters test for cephfs-mirror
- CephFS - Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- CephFS - Bug #64537: mds: lower the log level when rejecting a session reclaim request
- CephFS - Bug #64542: Difference in error code returned while removing system xattrs using removexattr()
- CephFS - Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- CephFS - Bug #64572: workunits/fsx.sh failure
- CephFS - Bug #64602: tools/cephfs: cephfs-journal-tool does not recover dentries with alternate_name
- CephFS - Bug #64616: selinux denials with centos9.stream
- CephFS - Bug #64641: qa: Add multifs root_squash testcase
- CephFS - Bug #64685: mds: disable defer_client_eviction_on_laggy_osds by default
- CephFS - Bug #64700: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
- CephFS - Bug #64707: suites/fsstress.sh hangs on one client - test times out
- CephFS - Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
- CephFS - Bug #64717: MDS stuck in replay/resolve use
- CephFS - Bug #64729: mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
- CephFS - Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- CephFS - Bug #64746: qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to health ignorelist.
- CephFS - Bug #64747: postgresql pkg install failure
- CephFS - Bug #64751: cephfs-mirror coredumped when acquiring pthread mutex
- CephFS - Bug #64752: cephfs-mirror: valgrind report leaks
- CephFS - Bug #64761: cephfs-mirror: add throttling to mirror daemon ops
- CephFS - Bug #64912: make check: QuiesceDbTest.MultiRankRecovery Failed
- CephFS - Bug #64947: qa: fix continued use of log-whitelist
- CephFS - Bug #64985: qa: mgr logs do not include client debugging
- CephFS - Bug #64986: qa: "cluster [WRN] Health detail: HEALTH_WARN 1 filesystem is online with fewer MDS than max_mds" in cluster log
- CephFS - Bug #64987: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
- CephFS - Bug #64988: qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
- CephFS - Bug #65001: mds: ceph-mds might silently ignore client_session(request_close, ...) message
- CephFS - Bug #65018: PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
- CephFS - Bug #65019: qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
- CephFS - Bug #65020: qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
- CephFS - Bug #65021: qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
- CephFS - Bug #65022: qa: test_max_items_per_obj open procs not fully cleaned up
- CephFS - Bug #65039: mds: standby-replay segmentation fault in md_log_replay
- CephFS - Bug #65043: Unable to set timestamp to value > UINT32_MAX
- CephFS - Bug #65073: pybind/mgr/stats/fs: log exceptions to cluster log
- CephFS - Bug #65094: mds STATE_STARTING won't add root ino for root rank and not correctly handle when fails at STATE_STARTING
- CephFS - Bug #65116: squid: kclient: "ld: final link failed: Resource temporarily unavailable"
- CephFS - Bug #65136: QA failure: test_fscrypt_dummy_encryption_with_quick_group
- CephFS - Bug #65157: cephfs-mirror: set layout.pool_name xattr of destination subvol correctly
- CephFS - Bug #65171: Provide metrics support for the Replication Start/End Notifications
- CephFS - Bug #65182: mds: quiesce_inode op waiting on remote auth pins is not killed correctly during quiesce timeout/expiration
- CephFS - Bug #65224: mds: fs subvolume rm fails
- CephFS - Bug #65225: ceph_assert on dn->get_projected_linkage()->is_remote
- CephFS - Bug #65246: qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
- CephFS - Bug #65260: mds: Reduce log level for messages when mds is stopping
- CephFS - Bug #65262: qa/cephfs: kernel_untar_build.sh failed due to build error
- CephFS - Bug #65265: qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
- CephFS - Bug #65271: qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log
- CephFS - Bug #65276: MDS daemon is using 50% CPU when idle
- CephFS - Bug #65301: fs:upgrade still uses centos_8* distro
- CephFS - Bug #65308: qa: fs was offline but also unexpectedly degraded
- CephFS - Bug #65309: qa: dbench.sh failed with "ERROR: handle 10318 was not found"
- CephFS - Bug #65314: valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)
- CephFS - Bug #65342: mds: quiesce_counter decay rate initialized from wrong config
- CephFS - Bug #65350: mgr/snap_schedule: restore yearly spec from uppercase Y to lowercase y
- CephFS - Bug #65372: qa: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}
- CephFS - Bug #65388: The MDS_SLOW_REQUEST warning is flapping even though the slow requests don't go away
- CephFS - Bug #65389: The ceph_readdir function in libcephfs returns incorrect d_reclen value
- CephFS - Bug #65472: mds: avoid recalling Fb when quiescing file
- CephFS - Bug #65496: mds: ceph.dir.subvolume and ceph.quiesce.blocked is not properly replicated
- CephFS - Bug #65508: qa: lockup not long enough for test_quiesce_authpin_wait
- CephFS - Bug #65518: mds: regular file inode flags are not replicated by the policylock
- CephFS - Bug #65545: Quiesce may fail randomly with EBADF due to the same root submitted to the MDCache multiple times under the same quiesce request
- CephFS - Bug #65564: Test failure: test_snap_schedule_subvol_and_group_arguments_08 (tasks.cephfs.test_snap_schedules.TestSnapSchedulesSubvolAndGroupArguments)
- CephFS - Bug #65572: Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi155 with status 1
- CephFS - Bug #65580: mds/client: add dummy client feature to test client eviction
- CephFS - Bug #65595: mds: missing policylock acquisition for quiesce
- CephFS - Bug #65603: mds: quiesce timeout due to a freezing directory
- CephFS - Bug #65606: workload fails due to slow ops, assert in logs mds/Locker.cc: 551 FAILED ceph_assert(!lock->is_waiter_for(SimpleLock::WAIT_WR) || lock->is_waiter_for(SimpleLock::WAIT_XLOCK))
- CephFS - Bug #65614: client: resends request to same MDS it just received a forward from if it does not have an open session with the target
- CephFS - Bug #65616: pybind/mgr/snap_schedule: 1m scheduled snaps not reliably executed (RuntimeError: The following counters failed to be set on mds daemons: {'mds_server.req_rmsnap_latency.avgcount'})
- CephFS - Bug #65618: qa: fsstress: cannot execute binary file: Exec format error
- CephFS - Bug #65647: Evicted kernel client may get stuck after reconnect
- CephFS - Bug #65658: mds: MetricAggregator::ms_can_fast_dispatch2 acquires locks
- CephFS - Bug #65660: mds: drop client metrics during recovery
- CephFS - Bug #65669: QuiesceDB responds with a misleading error to a quiesce-await of a terminated set.
- CephFS - Bug #65700: qa: Health detail: HEALTH_WARN Degraded data redundancy: 40/348 objects degraded (11.494%), 9 pgs degraded" in cluster log
- CephFS - Bug #65701: qa: quiesce cache/ops dump not world readable
- CephFS - Bug #65704: mds+valgrind: beacon thread blocked for 60+ seconds
- CephFS - Bug #65705: qa: snaptest-multiple-capsnaps.sh failure
- CephFS - Bug #65716: mds: dir merge can't progress due to fragment nested pins, blocking the quiesce_path and causing a quiesce timeout
- CephFS - Bug #65733: mds: upgrade to MDS enforcing CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK with client having root_squash in any MDS cap causes eviction for all file systems the client has caps for
- CephFS - Fix #63432: qa: run TestSnapshots.test_kill_mdstable for all mount types
- CephFS - Fix #64984: qa: probabilistically ignore PG_AVAILABILITY/PG_DEGRADED
- CephFS - Fix #65408: qa: under valgrind, restart valgrind/mds when MDS exits with 0
- CephFS - Fix #65579: mds: use _exit for QA killpoints rather than SIGABRT
- CephFS - Fix #65617: qa: increase debugging for snap_schedule
- CephFS - Feature #57481: mds: enhance scrub to fragment/merge dirfrags
- CephFS - Feature #61334: cephfs-mirror: use snapdiff api for efficient tree traversal
- CephFS - Feature #63374: mds: add asok command to kill/respond to request
- CephFS - Feature #63663: mds,client: add crash-consistent snapshot support
- CephFS - Feature #63664: mds: add quiesce protocol for halting I/O on subvolumes
- CephFS - Feature #63665: mds: QuiesceDb to manage subvolume quiesce state
- CephFS - Feature #63666: mds: QuiesceAgent to execute quiesce operations on an MDS rank
- CephFS - Feature #63668: pybind/mgr/volumes: add quiesce protocol API
- CephFS - Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- CephFS - Feature #64507: pybind/mgr/snap_schedule: support crash-consistent snapshots
- CephFS - Feature #64531: mds,mgr: identify metadata heavy workloads
- CephFS - Feature #65503: mgr/stats, cephfs-top: provide per volume/sub-volume based performance metrics to monitor / troubleshoot performance issues
- CephFS - Feature #65637: mds: continue sending heartbeats during recovery when MDS journal is large
- CephFS - Cleanup #65689: mds: move specialized cleanup for fragment_dir to MDCache::request_cleanup
- CephFS - Cleanup #65690: mds: move specialized cleanup for export_dir to MDCache::request_cleanup
- CephFS - Tasks #63707: mds: AdminSocket command to control the QuiesceDbManager
- CephFS - Tasks #63708: mds: MDS message transport for inter-rank QuiesceDbManager communications
- cephsqlite - Bug #65494: ceph-mgr critical error: "Module 'devicehealth' has failed: table Device already exists"
- Linux kernel client - Bug #64471: kernel: upgrades from quincy/v18.2.[01]/reef to main|squid fail with kernel oops
- mgr - Bug #52846: octopus: mgr fails and freezes while doing pg dump
- mgr - Bug #64799: mgr: update cluster state for new maps from the mons before notifying modules
- Dashboard - Bug #49124: mgr/dashboard: NFS settings aren't updated after modifying them when working with Rook orchestrator
- Orchestrator - Bug #65263: upgrade stalls after upgrading one ceph-mgr daemon
- Orchestrator - Bug #65546: quincy|reef: qa/suites/upgrade/pacific-x: failure to pull image causes dead jobs
- Orchestrator - Bug #65657: doc: lack of clarity for explicit placement analogue in yaml spec
- Orchestrator - Feature #65338: Add --continue-on-error for `cephadm bootstrap`
- RADOS - Bug #23565: Inactive PGs don't seem to cause HEALTH_ERR
- RADOS - Bug #47813: osd op age is 4294967296
- RADOS - Bug #64968: mon: MON_DOWN warnings when mons are first booting
- RADOS - Bug #64972: qa: "ceph tell 4.3a deep-scrub" command not found
- RADOS - Feature #54525: osd/mon: log memory usage during tick
- nvme-of - Fix #64821: cephadm - make changes to ceph-nvmeof.conf template
- nvme-of - Feature #64777: mon: add NVMe-oF gateway monitor and HA
- nvme-of - Feature #65259: cephadm - make changes to ceph-nvmeof.conf template
- nvme-of - Feature #65566: Change some default values for OMAP lock parameters in nvmeof conf file
- rgw - Bug #46702: rgw: lc: lifecycle rule with more than one prefix in RGWPutLC::execute() should throw error
- rgw - Bug #49615: can't get mdlog when rgw_run_sync_thread = false
- rgw - Bug #50261: rgw: system users can't issue role policy related ops without explicit user policy
- rgw - Bug #63428: RGW: multipart get wrong storage class metadata
- rgw - Bug #64875: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- rgw - Bug #65216: rgw: only accept valid ipv4 from host header
- rgw - Bug #65277: rgw: update options yaml file so LDAP uri isn't an invalid example
- rgw - Bug #65337: rgw: Segmentation fault in rgw::notify::Manager during realm reload
- rgw - Feature #65769: rgw: make incomplete multipart upload part of bucket check efficient
- rgw - Documentation #49649: add information on the system objects holding notifications