Explains how to remove disks using either the Control System or the CLI.
When you remove a disk from the filesystem, the other disks in its storage pool are automatically removed from the filesystem as well and are no longer in use (they are available but offline). Their disk storage goes to 0%, and they are eligible to be added back to the filesystem to build a new storage pool. You can either replace the disk and re-add it along with the other disks that were in the storage pool, or re-add the other disks alone if you do not plan to replace the removed disk. See Adding Disks to filesystem for more information.
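As a quick illustration, the sequence below sketches this behavior from the CLI; the hostname (node1) and disk names are placeholders for illustration, not values from this document:

    # Remove one disk from the filesystem; the other disks in its
    # storage pool go offline automatically.
    maprcli disk remove -host node1 -disks /dev/sdd

    # List the node's disks; the former storage-pool members now show
    # as available but offline, with disk storage at 0%.
    maprcli disk list -host node1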
Run the maprcli disk remove command without the -force 1 option first and examine the warning messages to make sure you are not removing the disk that contains Container ID 1. To safely remove such a disk, perform a CLDB failover to make one of the other CLDB nodes the primary CLDB, then remove the disk as normal with the addition of the -force 1 option.
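A sketch of that sequence, assuming node1 is the current primary CLDB node and /dev/sdd is the disk holding Container ID 1 (both placeholders); restarting the CLDB service is one common way to trigger a failover, but confirm the failover procedure for your release first:

    # Dry run: without -force 1, the command warns instead of removing
    # a disk that holds Container ID 1.
    maprcli disk remove -host node1 -disks /dev/sdd

    # Trigger a CLDB failover by restarting the CLDB service on the
    # current primary, so another CLDB node takes over.
    maprcli node services -cldb restart -nodes node1

    # Now remove the disk with -force 1.
    maprcli disk remove -host node1 -disks /dev/sdd -force 1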
Run the /opt/mapr/server/fsck utility before removing or replacing disks. Using the /opt/mapr/server/fsck utility with the -r flag to repair a filesystem risks data loss. Call data-fabric support before using /opt/mapr/server/fsck -r.
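For example, a read-only check might look like the following; the -d flag for selecting a device is an assumption here (option names can vary by release), so check the utility's help output on your version:

    # Check the disk for errors without repairing anything.
    /opt/mapr/server/fsck -d /dev/sdd

    # Repair mode (-r) risks data loss; call data-fabric support
    # before running it.
    # /opt/mapr/server/fsck -d /dev/sdd -r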
Complete the following steps to remove disks using the Control System:
When you replace a failed disk, add it back to the filesystem along with the other disks from the same storage pool that were removed with it. Adding only the replacement disk to the filesystem results in a non-optimal storage pool layout, which can lead to degraded performance.
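For instance, assuming /dev/sdd is the replacement disk and /dev/sde and /dev/sdf were the other disks in its storage pool (placeholder names), re-add them together in a single command:

    # Re-add the replacement disk along with the other disks from the
    # same storage pool so the cluster rebuilds a full storage pool.
    maprcli disk add -host node1 -disks /dev/sdd,/dev/sde,/dev/sdf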
Once you add the disks to the filesystem, the cluster automatically allocates properly sized storage pools. For example, if you add ten disks, HPE Ezmeral Data Fabric allocates two storage pools of three disks each and two storage pools of two disks each.
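A sketch of that example with placeholder device names; the mrconfig sp list step for verifying the layout runs locally on the node and is an assumption worth confirming against your release's documentation:

    # Add ten disks in one command; the cluster groups them into
    # storage pools automatically (here, two pools of three disks
    # and two pools of two disks).
    maprcli disk add -host node1 -disks /dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl

    # On the node, verify the resulting storage-pool layout.
    /opt/mapr/server/mrconfig sp list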