Big Data Server Storage Ecosystem Solutions

Choosing the right infrastructure for your data-driven organization

Accelerate your time to data value

Data generation and collection are growing exponentially. Customers across industries are analyzing and mining these growing collections of Big Data to unlock insights that allow them to streamline operations and reduce costs, target products and services more effectively to the customers who need them, and build the next generation of offerings to satisfy unmet needs ahead of the competition.

The HPE Big Data Server Storage Ecosystem combines the solution expertise of ecosystem partners with HPE experience, HPE Apollo systems, and HPE ProLiant servers, enabling customers to get maximum value from their data through proven, tested, complete solutions.

HPE has optimized Apollo server storage systems for the new technologies that are driving the Big Data revolution: Hadoop- and NoSQL-based analytics, object storage solutions, and the convergence of High Performance Computing (HPC) and Big Data.

Object Storage

Low-cost-per-GB, density-optimized solutions for object storage at scale: collaboration, content distribution, content repositories, active archives, backup repositories, cold storage, and everything in between.

Why do we care?

Object Storage Solutions are:

  • Ideal for – Big Data files of all types that are created and accessed but rarely modified
  • Scalable – virtually limitless capacity, reaching hundreds of petabytes and beyond; performance scales nearly linearly as the cluster grows.
  • Available – from anywhere, anytime, on any device. Multi-location availability up to as many “9's” as needed.
  • Secure – multi-tenant by design, with adjustable per-user security and erasure coding protection.
  • Efficient – delivers much lower TCO for Big Data at petabyte scale than traditional file system solutions, saving both CAPEX and OPEX.
  • Simple – RESTful APIs provide a flat, addressable, petabyte-scale namespace for the growing volume and variety of Big Data file types (see the sketch after this list).
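
Object stores such as Scality RING typically expose an S3-compatible RESTful interface, so a few lines of client code are enough to store and retrieve objects in the flat namespace. The Python sketch below uses boto3 against a hypothetical endpoint; the URL, bucket, key, and credentials are placeholders, not a real deployment.

    # Minimal sketch: storing and retrieving an object through an
    # S3-compatible RESTful API. Endpoint, bucket, key, and credentials
    # are hypothetical placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.com",  # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",                  # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    # PUT: the key is simply a name in one flat, addressable namespace;
    # there is no directory tree to manage, however many objects exist.
    with open("clip-0001.mxf", "rb") as f:
        s3.put_object(Bucket="media-archive",
                      Key="2015/broadcast/clip-0001.mxf", Body=f)

    # GET: any authorized client, anywhere, retrieves the object by key.
    obj = s3.get_object(Bucket="media-archive",
                        Key="2015/broadcast/clip-0001.mxf")
    data = obj["Body"].read()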

Hadoop

Gain business insight from your Big Data with analytics based on parallel Hadoop data mining: develop a 360-degree view of customers, improve web commerce sales, increase customer retention and satisfaction, and much more. Or analyze volumes of machine-generated data to streamline and automate operations, improving efficiency and profitability.

Why do we care?

Hadoop-based Big Data analytics solutions:

  • Cutting-edge tool – draw conclusions from mixed data that were previously out of reach.
  • Driving the Big Data revolution – the fastest-growing data management system since the RDBMS.
  • Business insight – find the connections across different, unstructured data sources (such as customer purchases, online shopping, social media).
  • At the scale of Big Data – cost-effective analytics in parallel; add more storage servers to grow capacity and increase performance (see the sketch after this list).
  • Data-driven efficiency – harness machine-generated data and monitoring to test and automate responses.
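
To make the "analytics in parallel" point concrete, the sketch below is a classic Hadoop Streaming job written in Python that counts purchases per customer across log files distributed over the cluster. The log format, HDFS paths, and invocation are illustrative assumptions, not a specific HPE or customer workload.

    # mapper.py -- emits one (customer_id, 1) pair per purchase record.
    # Assumes a hypothetical log format: "customer_id,product_id,amount".
    import sys

    for line in sys.stdin:
        fields = line.strip().split(",")
        if len(fields) >= 3:
            print(f"{fields[0]}\t1")

    # reducer.py -- sums the counts for each customer. Hadoop sorts mapper
    # output by key, so each customer's records arrive contiguously.
    import sys

    current_key, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        if key != current_key:
            if current_key is not None:
                print(f"{current_key}\t{count}")
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print(f"{current_key}\t{count}")

The job is submitted with the standard Hadoop Streaming jar (its path varies by distribution):

    hadoop jar hadoop-streaming.jar \
        -input /data/purchases -output /data/purchase-counts \
        -mapper mapper.py -reducer reducer.py \
        -file mapper.py -file reducer.py

Because the data blocks and the mappers live on the same storage servers, adding servers grows capacity and parallel processing power together.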

HPC and Big Data Convergence

Big Data analytics are becoming more compute-intensive, and High Performance Computing is working with ever-larger volumes of data. The challenges, and the solutions, are beginning to share many of the same elements. Is it Big Data or is it HPC? Ultimately, it will not matter, as long as the solution solves the problem and deals effectively with scale and the need to do much more with less.

Why do we care?

  • HPC needs bigger data – from genomics and life sciences to real-time climate forecasting to complex product design, HPC must work with ever-larger data sets, faster and with more iterations.
  • Big Data needs HPC – Hadoop-based analytics increasingly demand the high-speed storage-to-compute fabrics and high-performance processing common in HPC: fast CPUs, computational accelerators, and high-speed I/O.
  • New architectures and faster progress – where HPC and Big Data meet, the most advanced work promises to break through previous bottlenecks in cost-effective scaling and resource utilization, providing a roadmap for the next generation of simulation and analytics complexity.

Case studies and solution briefs

BIGLOBE constructs lower-cost, high-density scalable storage environment

As a leading Japanese internet and cloud service provider, BIGLOBE is migrating to a more robust mail storage system for its 3 million broadband customers. In anticipation of customers’ increasing storage needs, BIGLOBE chose the HPE ProLiant SL4540 Gen8 Server, powered by the Intel® Xeon® processor E5-2400 product family, running the Scality RING software package. BIGLOBE achieved the performance and reliability to manage unstructured data over the long term while reducing costs by two-thirds compared with conventional storage environments.

RTL II shifts video archive into hyperscale with HPE and Scality

With its active video archive growing by 200 terabytes per year, German TV station RTL II needed a more efficient way to scale. By deploying HPE ProLiant SL4540 Gen8 Servers with Scality RING object storage software, the station gained petabyte-level scalability while accelerating video transfers roughly tenfold, from 60–70 minutes to just 6.5 minutes. This allows the station to deliver the most up-to-date material for broadcast and meet growing consumer demand for more online content.

Store Big Data at petabyte scale

Machine-, application-, and user-generated data is quickly expanding to petabytes and exabytes in size, making it difficult to store, process, and access using traditional databases and storage systems. Scality RING on HPE ProLiant servers is a software-defined storage platform designed for today’s petabyte-scale object storage needs. It uses a distributed, shared-nothing architecture with built-in tiering for flexible storage configuration, and delivers low latency, unsurpassed resilience, and high-performance data access.
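
A brief illustration of the shared-nothing idea: rather than consulting a central metadata server, every node can compute an object's location from its key. The Python toy below shows consistent hashing, the general technique behind this style of placement; it is a teaching sketch only, not Scality's actual algorithm. The node names and object key are hypothetical.

    # Toy consistent-hash ring: key-based object placement with no
    # central metadata server (illustrative only, not Scality's algorithm).
    import bisect
    import hashlib

    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, nodes, vnodes=64):
            # Each server gets many virtual points on the ring, so load
            # stays even and adding a node moves only ~1/N of the keys.
            self._points = sorted((_hash(f"{n}#{i}"), n)
                                  for n in nodes for i in range(vnodes))
            self._hashes = [h for h, _ in self._points]

        def node_for(self, key: str) -> str:
            # Walk clockwise from the key's hash to the next node point.
            i = bisect.bisect(self._hashes, _hash(key)) % len(self._points)
            return self._points[i][1]

    ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
    print(ring.node_for("2015/broadcast/clip-0001.mxf"))  # deterministic home node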