Describes how to manually set up cross-cluster NFS access.
HPE Ezmeral Data Fabric-NFS offers many usability and interoperability advantages, and makes big data radically easier and less expensive to use. In a secure environment, however, you must configure NFS carefully, because the NFS protocol is inherently insecure. Running the NFS server on any cluster node can expose the filesystem as world readable and writable to any machine that knows the IP address of that node and has access to the network, regardless of permissions, passwords, and other security mechanisms. At a minimum, configure iptables firewall rules on all cluster nodes that run the NFS server to restrict incoming NFS traffic to authorized client IP addresses.
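For example, the firewall restriction might be sketched as follows. The client subnet 10.10.30.0/24 is a placeholder, and only the standard NFS ports (portmapper 111 and NFS 2049) are shown; your deployment may use additional service ports, so check which ports your NFS gateway actually listens on before applying rules like these:

```shell
# Allow NFS traffic (portmapper 111, NFS 2049) only from the authorized
# client subnet. The subnet 10.10.30.0/24 is a placeholder.
iptables -A INPUT -p tcp -s 10.10.30.0/24 --dport 111  -j ACCEPT
iptables -A INPUT -p udp -s 10.10.30.0/24 --dport 111  -j ACCEPT
iptables -A INPUT -p tcp -s 10.10.30.0/24 --dport 2049 -j ACCEPT
iptables -A INPUT -p udp -s 10.10.30.0/24 --dport 2049 -j ACCEPT

# Drop NFS traffic from all other sources.
iptables -A INPUT -p tcp --dport 111  -j DROP
iptables -A INPUT -p udp --dport 111  -j DROP
iptables -A INPUT -p tcp --dport 2049 -j DROP
iptables -A INPUT -p udp --dport 2049 -j DROP
```

Rules added this way do not persist across reboots; use your distribution's mechanism (for example, an iptables-save/restore service) to make them permanent.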
Configuring cross-cluster NFS access might expose the entire filesystem of the other cluster to be world readable and writable as well. Therefore, automated configuration for cross-cluster NFS access is not available in the configure-crosscluster.sh utility. Manually configure cross-cluster NFS access only if you are fully aware of the security risks and have taken appropriate steps to mitigate them by securing both your NFS gateway and incoming client traffic.
With the first method, you configure cross-cluster NFS security for the NFS gateway on one cluster. The NFS client then mounts the mapr filesystem once from that NFS gateway and can access the file systems of both clusters.
With the second method, cross-cluster NFS configuration is not needed. The NFS client mounts the HPE Ezmeral Data Fabric filesystem individually for each cluster. This method requires an NFS gateway running on each cluster, and the client performs one NFS mount for each filesystem to be accessed.
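The two methods differ only in the number of mounts the client performs. A sketch, assuming placeholder gateway hostnames (nfsgw-clusterA, nfsgw-clusterB) and local mount points:

```shell
# Method 1: a single mount through one cross-cluster-configured gateway;
# the file systems of both clusters are reachable under this mount point.
mount -t nfs nfsgw-clusterA:/mapr /mapr

# Method 2: one mount per cluster, each through that cluster's own gateway.
mount -t nfs nfsgw-clusterA:/mapr /mnt/clusterA
mount -t nfs nfsgw-clusterB:/mapr /mnt/clusterB
```

The mount points must exist before mounting, and the commands require root privileges on the client.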
The following procedure describes how to set up NFS for the first method: