Describes how to copy files from an HDFS cluster to a data-fabric cluster using NFS for the HPE Ezmeral Data Fabric.
If NFS for the HPE Ezmeral Data
Fabric is installed on the data-fabric cluster, you can mount the data-fabric cluster to the HDFS cluster and then copy files from one
cluster to the other using hadoop distcp. If you do not have NFS for the HPE Ezmeral Data Fabric
installed and a mount point configured, see Accessing Data with NFS v3 and Managing the HPE Ezmeral Data Fabric NFS Service.
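
Before you mount the cluster, you can optionally confirm that the data-fabric NFS server is exporting the expected mount point. The following is a minimal check run from a node on the HDFS cluster, assuming standard Linux NFS client utilities and the example server address (10.10.100.175) and mount point (/hdfsmount) used later in this procedure:

   # List the exports advertised by the data-fabric NFS server
   showmount -e 10.10.100.175

   # Create the local mount point if it does not already exist
   sudo mkdir -p /hdfsmount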
In this procedure:

- <Data Fabric NFS Server> - the IP address or hostname of the NFS server in the data-fabric cluster
- <maprfs_nfs_mount> - the NFS export mount point configured on the data-fabric cluster; the default is /mapr
- <hdfs_nfs_mount> - the NFS for the HPE Ezmeral Data Fabric mount point configured on the HDFS cluster
- <NameNode> - the IP address or hostname of the NameNode in the HDFS cluster
- <NameNode Port> - the port on the NameNode in the HDFS cluster
- <HDFS path> - the path to the HDFS directory from which you plan to copy data
- <MapR filesystem path> - the path in the data-fabric cluster to which you plan to copy HDFS data
To copy the data, complete the following steps:

1. On the HDFS cluster, mount the data-fabric cluster:

   mount <Data Fabric NFS Server>:/<maprfs_nfs_mount> /<hdfs_nfs_mount>

   Example:

   mount 10.10.100.175:/mapr /hdfsmount

2. Copy the data from HDFS to the data-fabric cluster with hadoop distcp:

   hadoop distcp hdfs://<NameNode>:<NameNode Port>/<HDFS path> file:///<hdfs_nfs_mount>/<MapR filesystem path>

   Example:

   hadoop distcp hdfs://nn1:8020/user/sara/file.txt file:///hdfsmount/user/sara
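
   If you rerun a copy, hadoop distcp accepts standard options such as -update (skip files that already exist at the destination with a matching size and checksum) and -p (preserve file attributes such as owner, group, and permissions). These are general DistCp options rather than anything specific to NFS; a minimal sketch using the example paths above:

   # Incremental re-copy that preserves file attributes
   hadoop distcp -update -p hdfs://nn1:8020/user/sara file:///hdfsmount/user/sara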
3. Verify that the files were copied to the data-fabric cluster:

   hadoop fs -ls /<MapR filesystem path>

   Example:

   hadoop fs -ls /user/sara
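
Once you have verified the copy, you can unmount the data-fabric cluster from the HDFS cluster if the mount point is no longer needed. A minimal cleanup step, assuming the example mount point /hdfsmount:

   # Detach the data-fabric NFS mount from the HDFS cluster node
   sudo umount /hdfsmount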