The Internet of Things is revolutionizing how we do business. GE monitors its jet engines remotely to identify potential breakdowns before they happen. Homeowners use Google’s Nest thermostat to control their air conditioning remotely. And Barcelona’s city government collects real-time traffic information to help manage parking and reduce congestion in the city.
The IoT allows companies to gather crucial data from remote sensors and chips placed in products, cars, supply chains, and even a customer’s smartphone. This information helps companies improve their products and services and thus their bottom lines. However, the IoT revolution is also generating an explosion of data for which few companies are prepared. Gartner forecasts that the IoT will grow to include 8.4 billion objects worldwide this year, up 31 percent from 2016. That number is expected to exceed 20 billion by 2020.
These objects generate far more data than traditional software applications. The storage infrastructure that most companies currently rely on simply cannot keep up with the coming wave of data, which includes unstructured data, machine data, media information, and object-based data. Many companies will be overwhelmed by it. As it is, the typical data platform is already challenged and overloaded, yielding sub-optimal performance and efficiency.
Scaling these outmoded platforms to keep up with rapid data growth is out of the question. Why? Most companies today use patchwork storage systems built from multiple individual projects over time. The result is a complex collection of applications, each with its own claims on available storage, tied together in a way that is often highly inefficient. It is costly and complicated for companies to manage these overloaded, fragmented storage resources, especially when they have multiple applications and users. And it only gets worse as new data sources flood available storage resources and companies need to serve users spread out around the globe.
Unexpected—and costly—downtime is a far too common by-product of such legacy architecture. As a result, many companies spend an inordinate amount of time and effort attempting to manage, protect, and back up their data. Often, they cope by simply adding more storage—which can lead to further storage management headaches—and adding more people to manage it all. Operational expenses continue to rise, while performance continues to deteriorate.
Companies must modernize their data infrastructure to prepare for the coming tsunami of IoT data. Instead of a patchwork approach, companies should employ a comprehensive data management strategy that takes their workload and data needs into account and includes automated processes to manage, provision, back up, and protect data. This approach yields faster applications, simplified infrastructures, and reduced costs.
Where to start? Throwing additional storage at the problem is not the answer—in fact, it usually exacerbates it. The transformation to a modern data infrastructure begins with envisioning storage optimization for each particular enterprise. Customers understand their own data storage needs best. That's why HPE Pointnext experts work with each customer to create a bespoke transformation roadmap. The goal is to create a modern, integrated storage system with zero disruptions to a company’s business applications along the way.
New kinds of storage—faster, more efficient all-flash storage in particular—are a key aspect of this modernization. All-flash storage can provide a real boost to the performance of business applications. But merely layering faster storage on top of an enterprise’s current architecture is the wrong approach. Instead, all-flash storage must be integrated seamlessly into the existing infrastructure. Companies must identify mission-critical applications that need the speed and performance of flash storage. They also need to prioritize workloads, processes, and applications for migration to the all-flash tier. This triage frees up existing storage resources for other workloads, including second-tier data that is accessed less frequently.
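The triage described above can be sketched as a simple classification rule. The following is a hypothetical illustration only, not HPE's methodology: workload names, thresholds, and tier labels are all invented for the example.

```python
# Hypothetical sketch of flash-tier triage: rank workloads by latency
# sensitivity and access frequency, then assign each to the all-flash
# tier or leave it on the capacity tier. Names/thresholds are invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool   # does the app need sub-millisecond response?
    accesses_per_day: int     # rough access frequency

def assign_tier(w: Workload, hot_threshold: int = 1000) -> str:
    """Mission-critical, latency-sensitive, frequently accessed workloads
    go to flash; second-tier data stays on the capacity tier."""
    if w.latency_sensitive and w.accesses_per_day >= hot_threshold:
        return "all-flash"
    return "capacity"

workloads = [
    Workload("trading-db", True, 50_000),
    Workload("archive-logs", False, 12),
]
for w in workloads:
    print(w.name, "->", assign_tier(w))
```

In practice the inputs would come from monitoring data rather than hand-entered fields, but the shape of the decision is the same: only workloads that genuinely need flash performance get it, freeing existing storage for everything else.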
Many companies need to maintain multiple data centers in far-flung locations. They need fast and reliable access to these centers, with syncing between them. This requires a geographically distributed data repository and the ability to access data on public and private clouds. Traditionally, most companies have operated a main data center along with secondary facilities. This architecture is becoming outmoded and needs to change. A better design entails a distributed system in which applications can leverage storage resources from any data center, regardless of location.
Automation is critical to a modern data architecture. According to market research firm IDC, 30 percent of the typical company's IT budget is spent on storage. Yet the bulk of that money—about 85 percent—is spent on managing copies, which works out to roughly a quarter of the entire IT budget going to copy management alone. It is often a manual process that increases head count and operational expenses while reducing quality.
Truly digital organizations need to implement software-defined data management and automation. This reduces costs and also facilitates the integration of far-flung data centers. In addition, data analytics can reduce the overall cost of data storage by identifying redundant, obsolete, and trivial data and then removing it from primary storage to be managed elsewhere.
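The analytics step described above—finding redundant, obsolete, and trivial (ROT) data so it can be moved off primary storage—can be sketched as a filesystem scan. This is a minimal illustration assuming a POSIX filesystem with meaningful access times; the paths, thresholds, and function name are invented, and a production tool would use metadata catalogs rather than reading every file.

```python
# Minimal ROT-data sketch (illustrative, not a product feature):
# flag redundant files (identical content, detected by hash) and
# obsolete files (not accessed within max_age_days) as candidates
# for removal from primary storage.
import hashlib
import os
import time

def find_rot(root: str, max_age_days: int = 365):
    seen = {}                     # content hash -> first path seen
    redundant, obsolete = [], []
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                redundant.append(path)   # duplicate of seen[digest]
            else:
                seen[digest] = path
            if os.stat(path).st_atime < cutoff:
                obsolete.append(path)    # not accessed in over a year
    return redundant, obsolete
```

The output is a candidate list for review, not an automatic delete: the point of the analytics is to tell operators which data can safely leave primary storage.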
Companies also need the right software tools to map workloads to infrastructure—tools that manage provisioning and configuration while factoring in application requirements and infrastructure topology. Modernization also means incorporating data management, migration, protection, monitoring, analytics, and operations. Crucially, a modern storage infrastructure must prioritize data recoverability and business continuity.
Infrastructure that is fragmented into silos increases the risk of unplanned, unexpected, and costly downtime. HPE Pointnext ensures end-to-end data protection by making recoverability and business continuity part of the infrastructure design. That means designing a tiered infrastructure for resiliency and long-term data protection. The next step is policies that define backup and recovery strategies to protect all applications and business processes from interruptions, and to protect all data from loss or corruption. This requires organizations to plan and test for disaster recovery.
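Backup and recovery policies of the kind described above are often expressed as a recovery point objective (RPO) per application: the maximum tolerable window of data loss. A hypothetical sketch of a nightly policy audit—application names and RPO values are invented for illustration:

```python
# Hypothetical backup-policy audit: each application declares an RPO,
# and the audit flags any whose last successful backup is older than
# that objective. Names and RPO values are illustrative only.
from datetime import datetime, timedelta

policies = {                # application -> maximum tolerable data loss
    "payments": timedelta(minutes=15),
    "reporting": timedelta(hours=24),
}

def audit(last_backup: dict, now: datetime) -> list:
    """Return the applications whose backups violate their RPO."""
    return [app for app, rpo in policies.items()
            if now - last_backup[app] > rpo]

now = datetime(2024, 1, 1, 12, 0)
last = {"payments": now - timedelta(minutes=10),
        "reporting": now - timedelta(hours=30)}
print(audit(last, now))     # reporting has exceeded its 24-hour RPO
```

Codifying the policy this way makes it testable, which is exactly what disaster-recovery planning requires: violations surface in an audit run rather than during an outage.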
The last step in the modernization process is education. Once HPE Pointnext has completed the phased transformation required to modernize an enterprise’s data storage architecture, it educates the customer about how to operate and oversee the new system.
The end result is a modern infrastructure with storage resources that can be mapped dynamically to different workloads, applications, and uses. This architecture is managed and orchestrated by software, with minimal manual intervention. Data is protected, while downtime and business process interruptions are minimized. It all combines to boost efficiency, reduce complexity, and slash costs.
FNZC had a problem. The New Zealand brokerage and investment banking firm relied on an aging storage architecture that was inefficient, slow, and difficult to manage, with constant network delays. The backup technology was even older, which threatened the company’s ability to recover and restore data in a timely manner.
“Our goal was to find a simplified solution with a better management approach. [We needed] a game-changing solution that could future-proof us for the next five years,” says John Sew Hoy, the company’s infrastructure architect. Additional goals included saving money and reducing the firm’s carbon footprint.
After researching different solution providers, FNZC turned to Hewlett Packard Enterprise to modernize its storage platform. HPE delivered a single high-performance architecture with all-flash storage, fast servers, modern backups, and virtual desktops for users. The simplified system allows the firm to access its two data centers in Auckland and Wellington via a single, user-friendly application. Previously, employees had to connect with one or the other.
The benefits have been substantial. Updating and maintaining desktop systems, a process that once took an hour, was reduced to 15 minutes, significantly reducing downtime for users. A data load exercise that previously took four hours was cut to 45 minutes, freeing up engineers to focus on other pieces of infrastructure rather than devoting hours at a time to storage.
Non-technical FNZC employees also saw big benefits. They say they’re now enjoying a better experience with faster response speeds, including data delivery that’s up to 14 times faster than with the old system. Applications are reading and writing data to storage 18 times faster than before. That's an especially big deal in finance, where speed and accuracy are everything.
The firm’s carbon footprint has shrunk considerably as well. Instead of the 17 servers and 59 disk drives that FNZC’s previous system required, the HPE solution uses just two server racks and 12 disk drives. The reduced carbon footprint is delivering solid cost savings in power consumption and cooling requirements.
Today, FNZC enjoys faster data delivery and backups, more secure data, easier maintenance, and energy and cost savings. “It just works,” says Sew Hoy.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.