Object Store with S3-Compatible API

The Object Store with S3-Compatible API provides an S3 gateway to data in HPE Ezmeral Data Fabric. As of MEP 6.0.0, the Object Store with S3-Compatible API is included in MEP repositories.

The Object Store with S3-Compatible API stores data generated through multiple data protocols, such as NFS, POSIX, S3, and HDFS. The Object Store stores data objects in buckets in the form of files and folders: files correspond to data objects, and folders correspond to buckets, the logical containers that group data objects. A data object can be of any data type, but it must have a unique name in the S3-compatible API call.

Data in the Object Store is accessible through S3 API requests. The Object Store manages all inbound S3 API requests to store data in, or retrieve data from, an HPE Ezmeral Data Fabric cluster. The S3 API requires metadata to fulfill requests; this metadata is generated the first time the S3 API accesses a file.
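Because files and objects are two views of the same data, you can write through one protocol and read through another. The following sketch illustrates this; the mount path, endpoint, credentials, and bucket name are placeholders, not values from this documentation. It writes a file through a POSIX mount and then reads it back as an object through the S3-compatible API, which generates the object metadata on that first access:

    import boto3

    # Placeholder values; substitute your cluster's endpoint, credentials,
    # mount path, and bucket name.
    ENDPOINT = "https://objectstore.example.com:9000"

    # Write a file through the POSIX (FUSE) mount. The folder corresponds
    # to a bucket and the file to a data object.
    with open("/mapr/my.cluster.com/objectstore/mybucket/report.csv", "w") as f:
        f.write("id,value\n1,42\n")

    # Read the same data back through the S3-compatible API. The Object
    # Store generates the object's metadata on this first S3 access.
    s3 = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )
    body = s3.get_object(Bucket="mybucket", Key="report.csv")["Body"].read()
    print(body.decode())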

You can use the S3-compatible API to create, list, or delete a bucket. You can also use it to get, put, list, or delete a data object within a bucket.
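For example, a minimal round trip through these operations with a generic S3 client such as boto3 might look like the following sketch (the endpoint, credentials, and names are placeholders):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.com:9000",  # placeholder
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    s3.create_bucket(Bucket="demo-bucket")                       # create a bucket
    s3.put_object(Bucket="demo-bucket", Key="hello.txt",
                  Body=b"hello object store")                    # put an object
    print(s3.list_objects_v2(Bucket="demo-bucket")["Contents"])  # list objects
    obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")   # get an object
    print(obj["Body"].read().decode())
    s3.delete_object(Bucket="demo-bucket", Key="hello.txt")      # delete the object
    s3.delete_bucket(Bucket="demo-bucket")                       # delete the bucket
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])     # list buckets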

The Object Store also supports object notification through HPE Ezmeral Data Fabric Event Store. See Using the HPE Ezmeral Data Fabric Event Store for S3 Bucket Event Notifications.
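Notification subscriptions are attached through the standard S3 notification API. The sketch below is illustrative only: the target ARN format for the HPE Ezmeral Data Fabric Event Store is defined in the page referenced above, and the value shown here is a hypothetical MinIO-style ARN.

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com:9000",  # placeholder
                      aws_access_key_id="YOUR_ACCESS_KEY",
                      aws_secret_access_key="YOUR_SECRET_KEY")

    # Subscribe "demo-bucket" to object-created events. The QueueArn value
    # is hypothetical; use the ARN format documented for your Event Store.
    s3.put_bucket_notification_configuration(
        Bucket="demo-bucket",
        NotificationConfiguration={
            "QueueConfigurations": [
                {
                    "QueueArn": "arn:minio:sqs::1:kafka",  # hypothetical ARN
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )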

The following image depicts an inbound S3 API request from a web application in the cloud to the HPE Ezmeral Data Fabric:

How Object Tiering Differs from the Object Store

The Object Tiering feature uses its own outbound S3 API to store archived data directly in the cloud. For more information about object tiering, see Data Tiering.

The following image depicts the outbound S3 API request to archive data in the cloud:

S3 Deployment Mode

The Object Store supports only the Amazon S3 standalone deployment mode, because each Object Store instance can interact with only one bucket or set of buckets at a time.

When you use the S3 API in standalone mode, each Object Store instance must have its own back-end directory in the data-fabric file system. You can either map a volume mount point to the directory or use the directory path itself. An S3 instance has exclusive use of its allocated directory or volume in the data-fabric file system and serves an exclusive set of buckets from it.

If you need to migrate buckets to another S3 instance, you can move or copy the buckets to another directory or volume. See AWS CLI. If a bucket does not exist, an application can create it through any of the S3 gateways; however, the new bucket is served only through that gateway.
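Because a bucket is served only by the gateway that owns its back-end directory or volume, one way to migrate is to copy objects between gateways at the S3 level. The following is a sketch under assumed values: both endpoints, the credentials, and the bucket name are placeholders.

    import boto3

    # Placeholder clients for the source and destination S3 gateways.
    src = boto3.client("s3", endpoint_url="https://gateway-a.example.com:9000",
                       aws_access_key_id="KEY_A", aws_secret_access_key="SECRET_A")
    dst = boto3.client("s3", endpoint_url="https://gateway-b.example.com:9000",
                       aws_access_key_id="KEY_B", aws_secret_access_key="SECRET_B")

    # The new bucket is created through, and served by, gateway B.
    dst.create_bucket(Bucket="mybucket")

    # Copy every object from the source bucket to the destination bucket.
    # Each object is read fully into memory, so this sketch suits small objects.
    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="mybucket"):
        for item in page.get("Contents", []):
            body = src.get_object(Bucket="mybucket", Key=item["Key"])["Body"].read()
            dst.put_object(Bucket="mybucket", Key=item["Key"], Body=body)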

The following deployment scenario shows one Object Store per cluster that supports multiple applications and multiple buckets with bucket sharing.

This scenario is useful when you want an application to access multiple buckets without knowing about bucket locations beforehand. The single Object Store instance serves all requests without the need to partition any buckets.

The following deployment scenario shows two instances of the Object Store in a cluster that supports multiple applications and buckets with bucket sharing. Note that bucket sharing across S3 instances is not supported.

Authorization to Access Data

By default, the Object Store provides a two-tier authorization model that starts with an S3 bucket-policy check at the S3 REST API level, followed by a file-permissions check on the HPE Ezmeral Data Fabric file system.

The following image shows the two tiers of authorization:

When an Object Store instance receives a request from a tenant to access a bucket or object, it first checks for bucket policies that reference that tenant. If the bucket policy does not grant the tenant access, the request fails and no other checks are performed.

If the bucket policy grants the tenant access, the data-fabric file system performs the next check using the UID and GID credentials mapped to the tenant.
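For the first tier, bucket policies are attached through the standard S3 policy API. A minimal sketch follows; the endpoint, credentials, bucket name, tenant principal, and granted actions are illustrative, and how principals map to tenants is covered in the configuration page referenced below.

    import json
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com:9000",  # placeholder
                      aws_access_key_id="YOUR_ACCESS_KEY",
                      aws_secret_access_key="YOUR_SECRET_KEY")

    # Hypothetical policy granting one tenant read and write access to a bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/tenant1"]},  # hypothetical principal
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::demo-bucket/*"],
        }],
    }
    s3.put_bucket_policy(Bucket="demo-bucket", Policy=json.dumps(policy))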

Configuring the MapR Object Store with S3-Compatible API describes how to modify the type of authorization, configure tenants and credentials, and secure data.

High Availability (HA)

Object Store 2.0.0 and later supports running several instances on different nodes with the same mount folder. If several instances use the same mount folder on different nodes, FUSE caching must be disabled. For more information, see Limitations of the Object Store with S3-Compatible API.

To implement HA, you must configure a load balancer in front of the Object Store instances and make the corresponding changes in the /opt/mapr/objectstore-client/objectstore-client-<version>/conf/minio.json file on all instances.

If the multipart upload feature is used, the load balancer must hash the request source and always send requests from each unique host to the same node.
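This affinity matters because a multipart upload is a stateful, multi-request sequence: the upload ID returned by the first call must be presented in every later call, presumably against state held by the instance that issued it. A sketch of the sequence (placeholder endpoint and credentials):

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://lb.example.com:9000",  # placeholder load balancer
                      aws_access_key_id="YOUR_ACCESS_KEY",
                      aws_secret_access_key="YOUR_SECRET_KEY")

    # All three calls below must reach the same Object Store instance,
    # which is why the load balancer needs source-based affinity.
    mpu = s3.create_multipart_upload(Bucket="demo-bucket", Key="big.bin")
    part = s3.upload_part(Bucket="demo-bucket", Key="big.bin",
                          UploadId=mpu["UploadId"], PartNumber=1,
                          Body=b"x" * (5 * 1024 * 1024))
    s3.complete_multipart_upload(
        Bucket="demo-bucket", Key="big.bin", UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
    )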

After any administrative change to the Object Store configuration using the MinIO client (for example, adding new users, groups, policies, or notifications), you must restart all instances manually to avoid inconsistent behavior across instances.