Ceph FS / RBD

❗Installing conda and pip packages on any CephFS (shared) filesystem is strictly prohibited!

Ceph filesystems data use:

- Credit: Ceph data usage
- General Ceph Grafana dashboard

Currently available storageClasses:

| StorageClass | Filesystem Type | Region | AccessModes | Restrictions | Storage Type |
|---|---|---|---|---|---|
| rook-cephfs | CephFS | US West | ReadWriteMany | | Spinning drives with NVME meta |
| rook-cephfs-central | CephFS | US Central | ReadWriteMany | | Spinning drives with NVME meta |
| rook-cephfs-east | CephFS | US East | ReadWriteMany | | Mixed |
| rook-cephfs-south-east | CephFS | US South East | ReadWriteMany | | Spinning drives with NVME meta |
| rook-cephfs-pacific | CephFS | Hawaii+Guam | ReadWriteMany | | Spinning drives with NVME meta |
| rook-cephfs-haosu | CephFS | US West (local) | ReadWriteMany | Hao Su and Ravi cluster | NVME |
| rook-cephfs-tide | CephFS | US West (local) | ReadWriteMany | SDSU Tide cluster | Spinning drives with NVME meta |
| rook-ceph-block | RBD | US West | ReadWriteOnce | | Spinning drives with NVME meta |
| rook-ceph-block-east | RBD | US East | ReadWriteOnce | | Mixed |
| rook-ceph-block-south-east | RBD | US South East | ReadWriteOnce | | Spinning drives with NVME meta |
| rook-ceph-block-pacific | RBD | Hawaii+Guam | ReadWriteOnce | | Spinning drives with NVME meta |
| rook-ceph-block-tide | RBD | US West (local) | ReadWriteOnce | SDSU Tide cluster | Spinning drives with NVME meta |
| rook-ceph-block-central (*default*) | RBD | US Central | ReadWriteOnce | | Spinning drives with NVME meta |

Ceph shared filesystem (CephFS) is the primary way of storing data in Nautilus and allows mounting the same volume from multiple pods in parallel (ReadWriteMany).
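
A minimal sketch of a PersistentVolumeClaim for a shared CephFS volume (the claim name and size are placeholders; any of the CephFS storageClasses from the table above can be substituted):

```yaml
# Hypothetical example: request a CephFS volume that multiple pods can mount at once.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: examplevol                  # placeholder name
spec:
  storageClassName: rook-cephfs     # US West CephFS (see table above)
  accessModes:
    - ReadWriteMany                 # shared access from multiple pods
  resources:
    requests:
      storage: 50Gi                 # placeholder size
```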

Ceph block storage attaches RBD (RADOS Block Device) volumes to a single pod at a time (ReadWriteOnce). It provides the fastest access to the data and is preferred for smaller datasets (below 500GB) and for any dataset that does not need shared access from multiple pods.
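
A minimal sketch of an RBD claim (again with placeholder name and size); because rook-ceph-block-central is the default storageClass, omitting storageClassName would also yield an RBD volume in US Central:

```yaml
# Hypothetical example: request a block (RBD) volume that attaches to a single pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exampleblock                   # placeholder name
spec:
  storageClassName: rook-ceph-block    # US West RBD (see table above)
  accessModes:
    - ReadWriteOnce                    # attached to one pod at a time
  resources:
    requests:
      storage: 100Gi                   # placeholder size; RBD is preferred for datasets below 500GB
```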