filesystem exposes the HDFS API; if you already have a client program built to use
libhdfs, you do not have to relink your program just to access the
MapR filesystem. However, relinking to the MapR-specific shared library, libMapRClient.so, gives you better performance on the MapR filesystem, because libMapRClient.so, unlike libhdfs.so, does not make any Java calls to access the filesystem.
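For example, a client written against the standard hdfs.h C API compiles unchanged for either library; only the link step differs (-lhdfs for the Java-based client, -lMapRClient for the native MapR client). The minimal sketch below is illustrative only, not the bundled hdfs_read.c sample, and it assumes a readable file at the hypothetical path /tmp/testfile.txt on the cluster.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>   /* O_RDONLY */
#include "hdfs.h"    /* libhdfs C API header, found under HADOOP_HOME/include */

int main(void) {
    /* "default" connects to the filesystem configured for this client */
    hdfsFS fs = hdfsConnect("default", 0);
    if (fs == NULL) {
        fprintf(stderr, "hdfsConnect failed\n");
        return EXIT_FAILURE;
    }

    /* Hypothetical test file; substitute a path that exists on your cluster */
    const char *path = "/tmp/testfile.txt";
    hdfsFile file = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
    if (file == NULL) {
        fprintf(stderr, "hdfsOpenFile failed for %s\n", path);
        hdfsDisconnect(fs);
        return EXIT_FAILURE;
    }

    /* Read the first block of the file and report how much was read */
    char buf[4096];
    tSize n = hdfsRead(fs, file, buf, sizeof(buf));
    printf("read %d bytes from %s\n", (int)n, path);

    hdfsCloseFile(fs, file);
    hdfsDisconnect(fs);
    return EXIT_SUCCESS;
}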
The script below sets the necessary environment variables and compiles one of the sample applications. Use this script as an example for building and launching your own applications.
When you set HADOOP_HOME for your own client applications, set it to the path for the
version of libhdfs that your application uses. The path is:
MAPR_HOME/hadoop/hadoop-2.x/
Also, replace the source-file path in the gcc command with the path to your own application.
This script assumes that MAPR_HOME is set to the default value of
/opt/mapr.
#!/bin/bash
#Setup environment
export HADOOP_HOME=/opt/mapr/hadoop/hadoop-2.7.0/
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HADOOP_HOME}/lib/native/
export LD_RUN_PATH=${LD_RUN_PATH}:/opt/mapr/lib
GCC_OPTS="-Wl,--allow-shlib-undefined -I. -I${HADOOP_HOME}/include/"
#Compile and Link
gcc ${GCC_OPTS} src/c++/libhdfs/hdfs_read.c -o hdfs_read -L/opt/mapr/lib -lMapRClient
#Launch the application
./hdfs_read
libMapRClient is statically linked to the following third-party libraries:
libcryptopp.a (v5.6.2)
libprotobuf-lite.a (v2.5.0)