The Application Persistent Storage tab enables the Platform Administrator to manage
the external persistent storage pool used when migrating containers between hosts in
HPE Ezmeral Container Platform deployments that implement EPIC only.
The Application Persistent Storage tab of the
System Settings screen (see The System Settings Screen)
enables Platform Administrators to connect to an external storage pool that will be
used to store crucial container folders and enable migrating containers between
hosts. You may create, expand, and shrink this external storage resource just like
any other storage resource.
This topic applies to HPE Ezmeral Container Platform deployments that
implement EPIC only.
All of the following criteria must be met in order to migrate containers from host to
host:
- You must configure an external storage resource for use.
- The configured external storage resource must be mapped on this tab for use as a
persistent storage pool.
- One or more flavors must be created that include a specified amount of
persistent storage, in GB. See Creating
a New Flavor.
- The selected external persistent storage resource must have enough free space to
support the flavor-defined per-container persistent storage allocation times the
number of containers using that flavor. For example, if Flavor_A specifies 20GB
of persistent storage and there are 25 containers using that flavor while
Flavor_B specifies 30GB of persistent storage and there are 15 containers using
that flavor, then you must specify a persistent storage pool that has at least
950GB of free space, which is (20*25)+(30*15)=500+450=950.
- Containers must be created using a flavor with an allocated persistent storage
amount. See Creating a New
Cluster, Creating a
New Training Cluster, Creating a New Notebook
Cluster, or Creating a New Deployment Cluster.
- The deployment must have enough resources to support the migration, including
all placement constraints.
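The sizing criterion above reduces to simple arithmetic. The sketch below reuses the Flavor_A/Flavor_B numbers from the example; substitute your own flavor sizes and container counts:

```shell
# Required free space = sum over flavors of (per-container GB x container count).
# Numbers are from the example above: Flavor_A = 20GB x 25, Flavor_B = 30GB x 15.
flavor_a=$(( 20 * 25 ))
flavor_b=$(( 30 * 15 ))
echo "Required free space: $(( flavor_a + flavor_b ))GB"   # prints 950GB
```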
To map an external storage pool for use as persistent storage:
- Use the Select Type pull-down menu to select the
filesystem used by the external resource, and then enter the appropriate
parameters based on your selection. The available options are:
- None: Select this option to disable persistent
storage.
- Local MapR: This option is only available for
on-premises deployments. See Local MapR.
- CEPH RBD: This option is only available for
on-premises deployments. See CEPH
RBD.
- NFS: This option is only available for
on-premises deployments. See NFS.
- ScaleIO: This option is only available for
on-premises deployments. See ScaleIO.
CAUTION:
Changing the persistent storage pool settings will
cause containers that are currently using persistent storage to become
ineligible for migration.
- Click Submit to save your changes.
Local MapR
If you selected Local MapR using the Select Type pull-down menu,
then you may either accept the defaults or edit as needed:
- Mount Path: /opt/bluedata/mnt
- Volume Path: /hcp/pers_stor
- Num replicas: 1
CEPH RBD
If you selected CEPH RBD using the Select Type pull-down menu, then
enter the following information:
- Monitors: Comma-separated list of the IP addresses or
hostnames of the CEPH RBD monitors.
- Pool Name: Name of the CEPH RBD pool that was created for
use by HPE Ezmeral Container Platform.
- Client Name: Client name, if CEPHX authentication has been enabled on the
persistent storage pool.
- Client Key: Client key associated with the client name, if CEPHX
authentication has been enabled on the persistent storage pool.
See Working with CEPH Pools and Users
for additional information about basic CEPH operations.
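As an illustration of the CEPH-side preparation, a pool and CEPHX client might be created as follows. This is a sketch, not a required procedure: the pool name hcp_pers, client name hcp, and placement-group count are placeholders, and the commands assume admin access on a CEPH node:

```shell
# Hypothetical CEPH-side setup; "hcp_pers" and "client.hcp" are placeholder
# names, not values the platform requires. Run on a CEPH admin node.
ceph osd pool create hcp_pers 128            # pool with 128 placement groups
ceph auth get-or-create client.hcp \
    mon 'profile rbd' osd 'profile rbd pool=hcp_pers'
ceph auth get-key client.hcp                 # value for the Client Key field
```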
NFS
If you selected NFS using the Select Type pull-down menu, then enter
the following information:
- Server: IP address or hostname of the NFS server.
- Share: NFS share.
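Before saving, it can be worth confirming from a platform host that the share is exported and mountable. A minimal sketch, where nfs.example.com and /exports/hcp are placeholder values for your Server and Share entries:

```shell
# Hypothetical pre-check; replace nfs.example.com and /exports/hcp with the
# Server and Share values you plan to enter on the tab.
showmount -e nfs.example.com                     # the share should be listed
sudo mount -t nfs nfs.example.com:/exports/hcp /mnt && sudo umount /mnt
```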
ScaleIO
Before selecting ScaleIO using the Select
Type pull-down menu, verify that the ScaleIO client (SDC) is
properly installed on all HPE Ezmeral Container Platform hosts, as
described in the EMC ScaleIO Installation Guide (link
opens an external website in a new browser tab/window).
If any host is running RHEL/CentOS 7.5, then apply the following workaround on the
affected hosts:
- Edit /etc/default/grub, appending nokaslr to the kernel options inside
  the quotes of the GRUB_CMDLINE_LINUX entry.
- Execute the command grub2-mkconfig -o /boot/grub2/grub.cfg on each
  affected host.
- Reboot the affected hosts.
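The workaround can be scripted. The sed expression below is first exercised against a sample file so it can be verified safely; the GRUB_CMDLINE_LINUX contents shown are an assumed example, not your actual kernel options:

```shell
# Verify the edit against a sample copy before touching /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > /tmp/grub.sample
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 nokaslr"/' /tmp/grub.sample
cat /tmp/grub.sample   # prints the line with " nokaslr" appended inside the quotes
# Apply the same edit to /etc/default/grub itself, then regenerate and reboot:
#   sudo sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 nokaslr"/' /etc/default/grub
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
#   sudo reboot
```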
Once you have verified proper configuration, enter the following information in the
Application Persistent Storage tab:
- Gateway: IP address or hostname of the ScaleIO gateway.
- Port: Port number on the ScaleIO gateway.
- Protection Domain: ScaleIO protection domain.
- Storage Pool: Name of the storage pool.
- Storage Mode: Use the pull-down menu to select either Thin
Provisioned or Thick Provisioned, as appropriate.
- MDM Username: ScaleIO MDM username.
- MDM Password: ScaleIO MDM password.
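One quick way to confirm the SDC is present on a host is to check for its kernel module. This is a sketch under the assumption that the SDC ships as the scini module and service on RHEL/CentOS hosts:

```shell
# Hypothetical per-host check: the ScaleIO SDC loads the "scini" kernel
# module; if it is absent, install the SDC before mapping the pool.
lsmod | grep -q scini && echo "SDC module loaded" || echo "SDC missing"
systemctl is-active scini 2>/dev/null   # service state, if systemd-managed
```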