Configuring the Object Store 2.1.0 Client

Describes how to configure the client and provides client configuration and operation examples.

MEP 7.1.0 and later support Object Store 2.1.0. The Object Store (MinIO) client 2.1.0 is located at /opt/mapr/objectstore-client/objectstore-client-2.1.0/util/mc.

Before you perform the tasks described in the following sections, configure Object Store by completing the tasks in Configuring Object Store with S3-Compatible API.

Add an S3-Compatible Service

To add an S3-compatible service, run the mc alias set command, as shown:
mc alias set ALIAS URL ADMIN_ACCESS_KEY ADMIN_SECRET_KEY
Example
mc alias set myminio http://localhost:9000 minioadmin minioadmin
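To confirm that the alias was registered, you can list the configured aliases. This optional check uses the standard mc alias list command:
mc alias list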

Create a User

You can create a user with or without a UID and GID. If you create a user without a UID and GID, the UID and GID of the Object Store process are used.
  • Creating a user without a UID and GID
    Run the mc admin user add command, as shown:
    mc admin user add ALIAS USERNAME PASSWORD
    Example
    mc admin user add myminio test qwerty78
  • Creating a user with a UID and GID
    Run the mc admin user add command with the UID and GID, as shown:
    mc admin user add ALIAS USERNAME PASSWORD UID GID
    Example
    mc admin user add myminio test qwerty78 1000 1000
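To confirm that a user was created, you can list the users known to the alias or display a specific user. These optional checks use the standard mc admin user list and mc admin user info commands; myminio and test are the values from the preceding examples:
mc admin user list myminio
mc admin user info myminio test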

For more information, see MinIO Admin Complete Guide.

Examples

After you configure the client, you can perform bucket and object operations in S3 through Java, Python, Hadoop, and Spark.

The examples in this section use built-in users; specifically, they use the default admin user, minioadmin.

The examples demonstrate how to perform the following tasks:
  • list buckets
  • create a bucket
  • delete a bucket
  • check that a bucket exists
  • list files
  • upload a file
  • delete a file
  • check that a file exists
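For quick verification from the command line, the same operations can also be performed with the mc client. The following is a minimal sketch that assumes the myminio alias from the earlier examples; the bucket name mybucket and the file testfile.txt are placeholders:
mc ls myminio                            # list buckets
mc mb myminio/mybucket                   # create a bucket
mc stat myminio/mybucket                 # check that a bucket exists
mc ls myminio/mybucket                   # list files
mc cp testfile.txt myminio/mybucket      # upload a file
mc stat myminio/mybucket/testfile.txt    # check that a file exists
mc rm myminio/mybucket/testfile.txt      # delete a file
mc rb myminio/mybucket                   # delete a bucket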
Java Example
See Java Examples.
Python Example
See Python Examples.
Hadoop Example
For Hadoop, provide the accessKey, secretKey, host, and port, as shown:
-Dfs.s3a.access.key=ACCESS_KEY -Dfs.s3a.secret.key=PASSWORD -Dfs.s3a.endpoint=http(s)://HOST:PORT -Dfs.s3a.path.style.access=true -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
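For example, these properties can be passed as generic options to a Hadoop filesystem command. The following is a minimal sketch that assumes the default admin credentials and local endpoint from the earlier examples; the bucket name mybucket is a placeholder:
hadoop fs -Dfs.s3a.access.key=minioadmin -Dfs.s3a.secret.key=minioadmin -Dfs.s3a.endpoint=http://localhost:9000 -Dfs.s3a.path.style.access=true -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem -ls s3a://mybucket/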
Spark Example
Because Spark uses the Hadoop libraries to work with S3, you must provide the accessKey, secretKey, host, and port, as shown:
--conf spark.hadoop.fs.s3a.access.key=ACCESS_KEY --conf spark.hadoop.fs.s3a.secret.key=PASSWORD --conf spark.hadoop.fs.s3a.endpoint=http(s)://HOST:PORT --conf spark.hadoop.fs.s3a.path.style.access=true --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
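For example, the configuration can be supplied to spark-submit. The following is a minimal sketch that assumes the default admin credentials and local endpoint from the earlier examples; myapp.jar is a placeholder for your Spark application:
spark-submit --conf spark.hadoop.fs.s3a.access.key=minioadmin --conf spark.hadoop.fs.s3a.secret.key=minioadmin --conf spark.hadoop.fs.s3a.endpoint=http://localhost:9000 --conf spark.hadoop.fs.s3a.path.style.access=true --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem myapp.jar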