Ceph
v20.0.0 (open) - T release - 9% complete
304 issues (21 closed, 283 open)

Time tracking
  Estimated time: 0:00 hours
  Spent time:     0:00 hours
Issues by tracker (closed/total):
  Bug            13/228
  Fix             0/7
  Feature         5/54
  Cleanup         0/4
  Tasks           2/3
  Documentation   1/8
Related issues
RADOS - Bug #23565: Inactive PGs don't seem to cause HEALTH_ERR
CephFS - Bug #23723: qa: incorporate smallfile workload
CephFS - Bug #40159: mds: openfiletable prefetching large amounts of inodes lead to mds start failure
CephFS - Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
rgw - Bug #46702: rgw: lc: lifecycle rule with more than one prefix in RGWPutLC::execute() should throw error
RADOS - Bug #47813: osd op age is 4294967296
CephFS - Bug #48562: qa: scrub - object missing on disk; some files may be lost
Dashboard - Bug #49124: mgr/dashboard: NFS settings aren't updated after modifying them when working with Rook orchestrator
rgw - Bug #49615: can't get mdlog when rgw_run_sync_thread = false
rgw - Bug #50261: rgw: system users can't issue role policy related ops without explicit user policy
CephFS - Bug #50821: qa: untar_snap_rm failure during mds thrashing
CephFS - Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
CephFS - Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
bluestore - Bug #52513: BlueStore.cc: 12391: ceph_abort_msg("unexpected error") on operation 15
CephFS - Bug #52581: Dangling fs snapshots on data pool after change of directory layout
mgr - Bug #52846: octopus: mgr fails and freezes while doing pg dump
CephFS - Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
CephFS - Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
CephFS - Bug #55464: cephfs: mds/client error when client stale reconnect
CephFS - Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
CephFS - Bug #57682: client: ERROR: test_reconnect_after_blocklisted
CephFS - Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
bluestore - Bug #58274: BlueStore::collection_list becomes extremely slow due to unbounded rocksdb iteration
CephFS - Bug #58878: mds: FAILED ceph_assert(trim_to > trimming_pos)
CephFS - Bug #58938: qa: xfstests-dev's generic test suite has 7 failures with kclient
CephFS - Bug #58945: qa: xfstests-dev's generic test suite has 20 failures with fuse client
CephFS - Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
CephFS - Bug #59119: mds: segmentation fault during replay of snaptable updates
CephFS - Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
CephFS - Bug #59169: Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
CephFS - Bug #59301: pacific (?): test_full_fsync: RuntimeError: Timed out waiting for MDS daemons to become healthy
CephFS - Bug #59394: ACLs not fully supported.
CephFS - Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
CephFS - Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output error)"
CephFS - Bug #59688: mds: idempotence issue in client request
CephFS - Bug #61279: qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
CephFS - Bug #61357: cephfs-data-scan: parallelize cleanup step
CephFS - Bug #61407: mds: abort on CInode::verify_dirfrags
CephFS - Bug #61790: cephfs client to mds comms remain silent after reconnect
CephFS - Bug #61791: snaptest-git-ceph.sh test timed out (job dead)
CephFS - Bug #61831: qa: test_mirroring_init_failure_with_recovery failure
CephFS - Bug #61945: LibCephFS.DelegTimeout failure
CephFS - Bug #61978: cephfs-mirror: support fan out setups
CephFS - Bug #61982: Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
CephFS - Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
CephFS - Bug #62123: mds: detect out-of-order locking
CephFS - Bug #62126: test failure: suites/blogbench.sh stops running
CephFS - Bug #62158: mds: quick suspend or abort metadata migration
CephFS - Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
CephFS - Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.TestMirroring)
CephFS - Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
CephFS - Bug #62344: tools/cephfs_mirror: mirror daemon logs reports initialisation failure for fs already deleted post test case execution
CephFS - Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
CephFS - Bug #62484: qa: ffsb.sh test failure
CephFS - Bug #62485: quincy (?): pybind/mgr/volumes: subvolume rm timeout
CephFS - Bug #62511: src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
CephFS - Bug #62648: pybind/mgr/volumes: volume rm freeze waiting for async job on fs to complete
CephFS - Bug #62653: qa: unimplemented fcntl command: 1036 with fsstress
CephFS - Bug #62664: ceph-fuse: failed to remount for kernel dentry trimming; quitting!
CephFS - Bug #62673: cephfs subvolume resize does not accept 'unit'
CephFS - Bug #62720: mds: identify selinux relabelling and generate health warning
CephFS - Bug #62764: qa: use stdin-killer for kclient mounts
CephFS - Bug #62847: mds: blogbench requests stuck (5mds+scrub+snaps-flush)
CephFS - Bug #63089: qa: tasks/mirror times out
CephFS - Bug #63104: qa: add libcephfs tests for async calls
CephFS - Bug #63120: mgr/nfs: support providing export ID while creating exports using 'nfs export create cephfs'
CephFS - Bug #63212: qa: failed to download ior.tbz2
CephFS - Bug #63233: mon|client|mds: valgrind reports possible leaks in the MDS
rgw - Bug #63428: RGW: multipart get wrong storage class metadata
CephFS - Bug #63461: Long delays when two threads modify the same directory
CephFS - Bug #63471: client: error code inconsistency when accessing a mount of a deleted dir
CephFS - Bug #63473: fsstressh.sh fails with errno 124
Bug #63494: all: daemonizing may release CephContext::_fork_watchers_lock when its already unlocked
CephFS - Bug #63519: ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
CephFS - Bug #63634: [RFC] limit iov structures to 1024 while performing async I/O
CephFS - Bug #63697: client: zero byte sync write fails
CephFS - Bug #63700: qa: test_cd_with_args failure
CephFS - Bug #63726: cephfs-shell: support bootstrapping via monitor address
CephFS - Bug #63764: Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
rgw - Bug #63791: RGW: a subuser with no permission can still list buckets and create buckets
CephFS - Bug #63830: MDS fails to start
CephFS - Bug #63866: mount command returning misleading error message
CephFS - Bug #63896: client: contiguous read fails for non-contiguous write (in async I/O api)
CephFS - Bug #63931: qa: test_mirroring_init_failure_with_recovery failure
CephFS - Bug #63949: leak in mds.c detected by valgrind during CephFS QA run
CephFS - Bug #63999: mgr/snap_schedule: clean up schedule timers on volume delete
CephFS - Bug #64008: mds: CInode::item_caps used in two different lists
CephFS - Bug #64011: qa: Command failed qa/workunits/suites/pjd.sh
CephFS - Bug #64015: fscrypt.sh - lsb_release command may not exist
CephFS - Bug #64064: mds config `mds_log_max_segments` throws error for value -1
CephFS - Bug #64149: valgrind+mds/client: gracefully shutdown the mds during valgrind tests
CephFS - Bug #64198: mds: Fcb caps issued to clients when filelock is xlocked
CephFS - Bug #64298: CephFS metadata pool has large OMAP objects corresponding to strays
CephFS - Bug #64348: mds: possible memory leak in up:rejoin when opening cap inodes (from OFT)
CephFS - Bug #64389: client: check if pools are full when mounting
CephFS - Bug #64390: client: async I/O stalls if the data pool gets full
Linux kernel client - Bug #64471: kernel: upgrades from quincy/v18.2.[01]/reef to main|squid fail with kernel oops
CephFS - Bug #64477: pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with duplicated session uuid 'ganesha-nfs.foo.XXX' denied
CephFS - Bug #64486: qa: enhance labeled perf counters test for cephfs-mirror
CephFS - Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
bluestore - Bug #64511: kv/RocksDBStore: rocksdb_cf_compact_on_deletion has no effect on the default column family
bluestore - Bug #64533: BlueFS: l_bluefs_log_compactions is counted twice in sync log compaction
CephFS - Bug #64537: mds: lower the log level when rejecting a session reclaim request
CephFS - Bug #64542: Difference in error code returned while removing system xattrs using removexattr()
CephFS - Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
CephFS - Bug #64572: workunits/fsx.sh failure
CephFS - Bug #64602: tools/cephfs: cephfs-journal-tool does not recover dentries with alternate_name
CephFS - Bug #64616: selinux denials with centos9.stream
CephFS - Bug #64641: qa: Add multifs root_squash testcase
CephFS - Bug #64685: mds: disable defer_client_eviction_on_laggy_osds by default
CephFS - Bug #64700: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
CephFS - Bug #64707: suites/fsstress.sh hangs on one client - test times out
CephFS - Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
CephFS - Bug #64717: MDS stuck in replay/resolve use
CephFS - Bug #64729: mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
CephFS - Bug #64730: fs/misc/multiple_rsync.sh workunit times out
CephFS - Bug #64746: qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to health ignorelist.
CephFS - Bug #64747: postgresql pkg install failure
CephFS - Bug #64751: cephfs-mirror coredumped when acquiring pthread mutex
CephFS - Bug #64752: cephfs-mirror: valgrind report leaks
CephFS - Bug #64761: cephfs-mirror: add throttling to mirror daemon ops
mgr - Bug #64799: mgr: update cluster state for new maps from the mons before notifying modules
rgw - Bug #64875: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
CephFS - Bug #64912: make check: QuiesceDbTest.MultiRankRecovery Failed
CephFS - Bug #64947: qa: fix continued use of log-whitelist
RADOS - Bug #64968: mon: MON_DOWN warnings when mons are first booting
RADOS - Bug #64972: qa: "ceph tell 4.3a deep-scrub" command not found
CephFS - Bug #64985: qa: mgr logs do not include client debugging
CephFS - Bug #64986: qa: "cluster [WRN] Health detail: HEALTH_WARN 1 filesystem is online with fewer MDS than max_mds" in cluster log
CephFS - Bug #64987: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
CephFS - Bug #64988: qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
CephFS - Bug #65001: mds: ceph-mds might silently ignore client_session(request_close, ...) message
CephFS - Bug #65018: PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
CephFS - Bug #65019: qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
CephFS - Bug #65020: qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
CephFS - Bug #65021: qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
CephFS - Bug #65022: qa: test_max_items_per_obj open procs not fully cleaned up
CephFS - Bug #65039: mds: standby-replay segmentation fault in md_log_replay
CephFS - Bug #65043: Unable to set timestamp to value > UINT32_MAX
CephFS - Bug #65073: pybind/mgr/stats/fs: log exceptions to cluster log
CephFS - Bug #65094: mds STATE_STARTING won't add root ino for root rank and not correctly handle when fails at STATE_STARTING
CephFS - Bug #65116: squid: kclient: "ld: final link failed: Resource temporarily unavailable"
CephFS - Bug #65136: QA failure: test_fscrypt_dummy_encryption_with_quick_group
CephFS - Bug #65157: cephfs-mirror: set layout.pool_name xattr of destination subvol correctly
CephFS - Bug #65171: Provide metrics support for the Replication Start/End Notifications
CephFS - Bug #65182: mds: quiesce_inode op waiting on remote auth pins is not killed correctly during quiesce timeout/expiration
rgw - Bug #65216: rgw: only accept valid ipv4 from host header
CephFS - Bug #65224: mds: fs subvolume rm fails
CephFS - Bug #65225: ceph_assert on dn->get_projected_linkage()->is_remote
CephFS - Bug #65246: qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
CephFS - Bug #65260: mds: Reduce log level for messages when mds is stopping
CephFS - Bug #65262: qa/cephfs: kernel_untar_build.sh failed due to build error
Orchestrator - Bug #65263: upgrade stalls after upgrading one ceph-mgr daemon
CephFS - Bug #65265: qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
CephFS - Bug #65271: qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log
CephFS - Bug #65276: MDS daemon is using 50% CPU when idle
rgw - Bug #65277: rgw: update options yaml file so LDAP uri isn't an invalid example
CephFS - Bug #65301: fs:upgrade still uses centos_8* distro
CephFS - Bug #65308: qa: fs was offline but also unexpectedly degraded
CephFS - Bug #65309: qa: dbench.sh failed with "ERROR: handle 10318 was not found"
CephFS - Bug #65314: valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)
CephFS - Bug #65342: mds: quiesce_counter decay rate initialized from wrong config
CephFS - Bug #65345: cephfs_mirror: increment sync_failures when sync_perms() and sync_snaps() fails
CephFS - Bug #65350: mgr/snap_schedule: restore yearly spec from uppercase Y to lowercase y
CephFS - Bug #65372: qa: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}
CephFS - Bug #65388: The MDS_SLOW_REQUEST warning is flapping even though the slow requests don't go away
CephFS - Bug #65389: The ceph_readdir function in libcephfs returns incorrect d_reclen value
CephFS - Bug #65472: mds: avoid recalling Fb when quiescing file
cephsqlite - Bug #65494: ceph-mgr critical error: "Module 'devicehealth' has failed: table Device already exists"
CephFS - Bug #65496: mds: ceph.dir.subvolume and ceph.quiesce.blocked is not properly replicated
CephFS - Bug #65508: qa: lockup not long enough to for test_quiesce_authpin_wait
CephFS - Bug #65518: mds: regular file inode flags are not replicated by the policylock
CephFS - Bug #65536: mds: after the unresponsive client was evicted the blocked slow requests were not successfully cleaned up
CephFS - Bug #65545: Quiesce may fail randomly with EBADF due to the same root submitted to the MDCache multiple times under the same quiesce request
Orchestrator - Bug #65546: quincy|reef: qa/suites/upgrade/pacific-x: failure to pull image causes dead jobs
CephFS - Bug #65564: Test failure: test_snap_schedule_subvol_and_group_arguments_08 (tasks.cephfs.test_snap_schedules.TestSnapSchedulesSubvolAndGroupArguments)
CephFS - Bug #65572: Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi155 with status 1
CephFS - Bug #65580: mds/client: add dummy client feature to test client eviction
CephFS - Bug #65595: mds: missing policylock acquisition for quiesce
CephFS - Bug #65603: mds: quiesce timeout due to a freezing directory
CephFS - Bug #65604: dbench.sh workload times out after 3h when run with-quiescer
CephFS - Bug #65606: workload fails due to slow ops, assert in logs mds/Locker.cc: 551 FAILED ceph_assert(!lock->is_waiter_for(SimpleLock::WAIT_WR) || lock->is_waiter_for(SimpleLock::WAIT_XLOCK))
Bug #65612: qa: logrotate fails when state file is already locked
CephFS - Bug #65614: client: resends request to same MDS it just received a forward from if it does not have an open session with the target
CephFS - Bug #65616: pybind/mgr/snap_schedule: 1m scheduled snaps not reliably executed (RuntimeError: The following counters failed to be set on mds daemons: {'mds_server.req_rmsnap_latency.avgcount'})
CephFS - Bug #65618: qa: fsstress: cannot execute binary file: Exec format error
CephFS - Bug #65647: Evicted kernel client may get stuck after reconnect
Orchestrator - Bug #65657: doc: lack of clarity for explicit placement analogue in yaml spec
CephFS - Bug #65658: mds: MetricAggregator::ms_can_fast_dispatch2 acquires locks
CephFS - Bug #65660: mds: drop client metrics during recovery
CephFS - Bug #65669: QuiesceDB responds with a misleading error to a quiesce-await of a terminated set.
bluestore - Bug #65678: Cannot use BtreeAllocator for blustore or bluefs
CephFS - Bug #65700: qa: Health detail: HEALTH_WARN Degraded data redundancy: 40/348 objects degraded (11.494%), 9 pgs degraded" in cluster log
CephFS - Bug #65701: qa: quiesce cache/ops dump not world readable
CephFS - Bug #65704: mds+valgrind: beacon thread blocked for 60+ seconds
CephFS - Bug #65705: qa: snaptest-multiple-capsnaps.sh failure
CephFS - Bug #65716: mds: dir merge can't progress due to fragment nested pins, blocking the quiesce_path and causing a quiesce timeout
CephFS - Bug #65733: mds: upgrade to MDS enforcing CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK with client having root_squash in any MDS cap causes eviction for all file systems the client has caps for
CephFS - Bug #65766: qa: perm denied for runing find on cephtest dir
CephFS - Bug #65782: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks)
CephFS - Bug #65801: mgr/snap_schedule: restrict retention spec multiplier set
CephFS - Bug #65802: Quiesce and rename aren't properly syncrhonized
CephFS - Bug #65803: mds: some asok commands wait with asok thread blocked
Bug #65805: common/StackStringStream: update pointer to newly allocated memory in overflow()
CephFS - Bug #65820: qa/tasks/fwd_scrub: Traceback in teuthology.log for normal exit condition
CephFS - Bug #65823: qa/tasks/quiescer: dump ops in parallel
CephFS - Bug #65829: qa: qa/suites/fs/functional/subvol_versions/ multiplies all jobs in fs:function by 2
CephFS - Bug #65837: qa: dead job from waiting to unmount client on deliberately damaged fs
CephFS - Bug #65841: qa: dead job from `tasks.cephfs.test_admin.TestFSFail.test_with_health_warn_oversize_cache`
CephFS - Bug #65846: mds: "invalid message type: 501"
CephFS - Bug #65851: MDS Squid Metadata Performance Regression
CephFS - Bug #65858: ceph.in: make `ceph tell mds.<fsname>:<rank> help` give help output
rgw - Bug #65866: reef: cannot build arrow with CMAKE_BUILD_TYPE=Debug
CephFS - Bug #65895: mgr/snap_schedule: correctly fetch mds_max_snaps_per_dir from mds
CephFS - Bug #65971: read operation hung in Client::get_caps(Same case as issue 65455)
CephFS - Bug #65976: qa/cephfs: mon.a (mon.0) 1025 : cluster [WRN] application not enabled on pool 'cephfs_data_ec'" in cluster log
CephFS - Bug #65977: Quiesce times out while the ops dump shows all existing quiesce ops as complete;
CephFS - Bug #66003: mds: session reclaim could miss blocklisting an old session
Bug #66005: pybind/mgr: allow disabling always on modules (volumes, etc..)
CephFS - Bug #66009: qa: `fs volume ls` command times out waiting for fs to come online
CephFS - Bug #66014: mds: Beacon code can deadlock messenger
CephFS - Bug #66029: qa: enable debug logs for fs:cephadm:multivolume subsuite
CephFS - Bug #66030: dbench.sh fails with Bad file descriptor (fs:cephadm:multivolume)
CephFS - Bug #66031: qa: add human readable FS_DEGRADED to ignore list
CephFS - Bug #66048: mon.smithi001 (mon.0) 332 : cluster [WRN] osd.1 (root=default,host=smithi001) is down" in cluster log
CephFS - Bug #66049: qa, tasks/nfs: client.15263 isn't responding to mclientcaps(revoke), ino 0x1 pending pAsLsXs issued pAsLsXsFs
CephFS - Bug #66098: qa: add DaemonWatchdog to cephadm.py
CephFS - Bug #66099: qa: fwd_scrub does not stop during unwind
CephFS - Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
CephFS - Fix #63432: qa: run TestSnapshots.test_kill_mdstable for all mount types
nvme-of - Fix #64821: cephadm - make changes to ceph-nvmeof.conf template
CephFS - Fix #64984: qa: probabilistically ignore PG_AVAILABILITY/PG_DEGRADED
CephFS - Fix #65408: qa: under valgrind, restart valgrind/mds when MDS exits with 0
CephFS - Fix #65579: mds: use _exit for QA killpoints rather than SIGABRT
CephFS - Fix #65617: qa: increase debugging for snap_schedule
CephFS - Feature #7320: qa: thrash directory fragmentation
CephFS - Feature #10679: Add support for the chattr +i command (immutable file)
CephFS - Feature #18154: qa: enable mds thrash exports tests
CephFS - Feature #41824: mds: aggregate subtree authorities for display in `fs top`
CephFS - Feature #44279: client: provide asok commands to getattr an inode with desired caps
CephFS - Feature #45021: client: new asok commands for diagnosing cap handling issues
CephFS - Feature #47264: "fs authorize" subcommand should work for multiple FSs too
CephFS - Feature #48509: mds: dmClock based subvolume QoS scheduler
CephFS - Feature #48704: mds: recall caps proportional to the number issued
RADOS - Feature #54525: osd/mon: log memory usage during tick
CephFS - Feature #55214: mds: add asok/tell command to clear stale omap entries
CephFS - Feature #55414: mds: asok interface to cleanup permanently damaged inodes
CephFS - Feature #56428: add command "fs deauthorize"
CephFS - Feature #56442: mds: build asok command to dump stray files and associated caps
CephFS - Feature #56489: qa: test mgr plugins with standby mgr failover
CephFS - Feature #57481: mds: enhance scrub to fragment/merge dirfrags
CephFS - Feature #58193: mds: remove stray directory indexes since stray directory can fragment
CephFS - Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
CephFS - Feature #61334: cephfs-mirror: use snapdiff api for efficient tree traversal
CephFS - Feature #61777: mds: add ceph.dir.bal.mask vxattr
CephFS - Feature #61863: mds: issue a health warning with estimated time to complete replay
CephFS - Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
CephFS - Feature #61904: pybind/mgr/volumes: add more introspection for clones
CephFS - Feature #61905: pybind/mgr/volumes: add more introspection for recursive unlink threads
CephFS - Feature #62157: mds: working set size tracker
CephFS - Feature #62207: Report cephfs-nfs service on ceph -s
CephFS - Feature #62668: qa: use teuthology scripts to test dozens of clients
CephFS - Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
CephFS - Feature #62849: mds/FSMap: add field indicating the birth time of the epoch
CephFS - Feature #62856: cephfs: persist an audit log in CephFS
CephFS - Feature #63191: tools/cephfs: provide an estimate completion time for offline tools
CephFS - Feature #63374: mds: add asok command to kill/respond to request
CephFS - Feature #63544: mgr/volumes: bulk delete canceled clones
CephFS - Feature #63663: mds,client: add crash-consistent snapshot support
CephFS - Feature #63664: mds: add quiesce protocol for halting I/O on subvolumes
CephFS - Feature #63665: mds: QuiesceDb to manage subvolume quiesce state
CephFS - Feature #63666: mds: QuiesceAgent to execute quiesce operations on an MDS rank
CephFS - Feature #63668: pybind/mgr/volumes: add quiesce protocol API
CephFS - Feature #63669: qa: add teuthology tests for quiescing a group of subvolumes
CephFS - Feature #63670: mds,client: add light-weight quiesce protocol
CephFS - Feature #63928: cephfs_mirror: Enable support for cephfs_mirror in consolidation/archive configurations
CephFS - Feature #64101: tools/cephfs: toolify updating mdlog journal pointers to a sane value
CephFS - Feature #64506: qa: update fs:upgrade to test from reef/squid to main
CephFS - Feature #64507: pybind/mgr/snap_schedule: support crash-consistent snapshots
CephFS - Feature #64531: mds,mgr: identify metadata heavy workloads
nvme-of - Feature #64777: mon: add NVMe-oF gateway monitor and HA
nvme-of - Feature #65259: cephadm - make changes to ceph-nvmeof.conf template
Orchestrator - Feature #65338: Add --continue-on-error for `cephadm bootstrap`
CephFS - Feature #65503: mgr/stats, cephfs-top: provide per volume/sub-volume based performance metrics to monitor / troubleshoot performance issues
nvme-of - Feature #65566: Change some default values for OMAP lock parameters in nvmeof conf file
CephFS - Feature #65637: mds: continue sending heartbeats during recovery when MDS journal is large
Feature #65747: common/admin_socket: support saving json output to a file local to the daemon
rgw - Feature #65769: rgw: make incomplete multipart upload part of bucket check efficient
rgw - Feature #66104: rgw: add shard reduction ability to dynamic resharding
CephFS - Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
CephFS - Cleanup #61482: mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` interfaces
CephFS - Cleanup #65689: mds: move specialized cleanup for fragment_dir to MDCache::request_cleanup
CephFS - Cleanup #65690: mds: move specialized cleanup for export_dir to MDCache::request_cleanup
CephFS - Tasks #62159: qa: evaluate mds_partitioner
CephFS - Tasks #63707: mds: AdminSocket command to control the QuiesceDbManager
CephFS - Tasks #63708: mds: MDS message transport for inter-rank QuiesceDbManager communications
rgw - Documentation #49649: add information on the system objects holding notifications
CephFS - Documentation #61375: doc: cephfs-data-scan should discuss multiple data support
CephFS - Documentation #61377: doc: add suggested use-cases for random emphemeral pinning
CephFS - Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
CephFS - Documentation #62605: cephfs-journal-tool: update parts of code that need mandatory --rank
CephFS - Documentation #63885: doc: add dedicated section discussing a "damaged" rank
CephFS - Documentation #64483: doc: document labelled perf metrics for mds/cephfs-mirror
CephFS - Documentation #65881: Refer to the disaster recovery and backup consistency as the primary rationale for the subvolume quiesce