
Modernizing applications for the cloud experience everywhere

Practical guidelines to help you balance transformative investments while keeping the best of what you have.

A data-driven workload placement strategy that prioritizes investments based on business impact and feasibility can help maximize the success of your cloud transformation. Most organizations have made the easy moves to public cloud, usually for applications with fewer dependencies, lower data gravity, or less demanding security, performance, or governance requirements than other apps. But once they've made those easy migrations, they hit a wall.

Beyond these early movers, the cost, risk, and technical feasibility of a move to public cloud can become too great. The better solution, as defined by objectives like cost reduction, performance and availability, improved security posture, and risk mitigation, is to bring the cloud experience to those workloads.
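A data-driven placement decision like the one described above can start as a simple weighted scoring model. Everything below (the criteria, weights, threshold, and example workloads) is an illustrative assumption, not a prescribed methodology:

```python
# Minimal sketch: score workloads on business impact and migration feasibility.
# All criteria names, weights, and example data are illustrative assumptions.

WEIGHTS = {
    "business_impact": 0.4,   # value of moving this workload
    "feasibility": 0.3,       # low dependencies, low data gravity
    "risk_reduction": 0.3,    # security/compliance benefit of the move
}

def placement_score(workload: dict) -> float:
    """Weighted score in [0, 10]; higher means a better public-cloud candidate."""
    return sum(workload[k] * w for k, w in WEIGHTS.items())

workloads = [
    {"name": "marketing-site", "business_impact": 6, "feasibility": 9, "risk_reduction": 5},
    {"name": "core-billing",   "business_impact": 9, "feasibility": 2, "risk_reduction": 4},
]

# Workloads below the threshold stay put and get the cloud experience on premises.
THRESHOLD = 6.0
for w in sorted(workloads, key=placement_score, reverse=True):
    target = "public cloud" if placement_score(w) >= THRESHOLD else "on-prem cloud experience"
    print(f"{w['name']}: {placement_score(w):.1f} -> {target}")
```

Here the low-dependency marketing site scores as an easy public cloud move, while the high-gravity billing system scores below the threshold and keeps the cloud experience where it runs today.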

Some organizations, however, find themselves stalled even in their initial modernization and migration steps. Unless you begin as a cloud-native company with no legacy application constraints, you can find yourself operating with two cost and operating models. The problem becomes glaring as monthly invoices arrive for cloud services alongside legacy IT, driving up total cost of ownership.

How can businesses avoid the trap of straddling both the old and new to achieve full digital transformation? Is there a predictable outcome if they follow a different path and approach? Can organizations do this with the same urgency that drove the initial rush for cloud experiences but achieve success with all their business applications?

Please read: Successful hybrid cloud projects require a detailed roadmap

The effort, duration, and opportunity cost of large-scale portfolio modernization and migration efforts can have a significant impact on an organization. It takes effort to discover the environment completely; to clear out and distill the noise; to maintain accurate, real-time data through each step of the lifecycle journey; to select the right priorities; to create momentum for the change; to leverage the right technologies; and to assemble and motivate the staff who will carry the effort through to the goal.

Understand how your applications and IT services are related

The first step toward this understanding is business discovery. This involves creating a top-level view of what the business needs to operate, who owns what, how much capacity and room for growth you have, and how business services are being delivered. This process should not be about which vendors, infrastructure, hardware, and software have been selected. Rather, it should capture which functions the business needs from IT services to operate, including the volume and ability to scale up or down as circumstances dictate.

Once you have completed business discovery, you can move to application discovery, which involves mapping applications to business services. This is where you start gaining more granular visibility into the service-level agreements (SLAs) associated with a workload or application and the workload's relationship to the value chain of the business. The idea is to holistically capture and analyze application functional areas, the owners, lifecycle information, development or feature timelines, and roadmaps. You'll also want to discover how each functional component perceives the criticality of its assets at an individual level. Surprisingly, many organizations do not recognize the need to connect all of these components until a critical business service is no longer available.
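The mapping that application discovery produces can be held in a structure as simple as the sketch below. The service names, owners, and SLA figures are invented examples, and the point of the `weakest_sla` helper is the connection the paragraph describes: a business service is only as available as the least available application it depends on.

```python
# Minimal sketch of an application-to-business-service map; all names are invented.
from dataclasses import dataclass, field

@dataclass
class Application:
    name: str
    owner: str
    sla_uptime: float                      # e.g. 0.999 = "three nines"
    depends_on: list = field(default_factory=list)

@dataclass
class BusinessService:
    name: str
    criticality: str                       # "high" | "medium" | "low"
    applications: list = field(default_factory=list)

payments = BusinessService("online-payments", "high", [
    Application("checkout-api", "payments-team", 0.999, depends_on=["fraud-scoring"]),
    Application("fraud-scoring", "risk-team", 0.995),
])

def weakest_sla(service: BusinessService) -> float:
    """A service chain is only as available as its weakest link."""
    return min(app.sla_uptime for app in service.applications)

print(f"{payments.name}: weakest SLA {weakest_sla(payments):.3f}")
```

Even a toy model like this surfaces the kind of gap discovery is meant to find: a high-criticality service quietly inheriting the SLA of its least reliable dependency.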

Using containers and microservices

Software development has rapidly become all about containers, microservices, and other cloud-native techniques. They are popular for many reasons, one of which is the way they manage data, or state. Stateless applications neither read nor store data about their state. Microservices running in containers realize benefits from being stateless, as this enables scalability, security through isolation, continuity, and faster deployment times. But while statelessness is not a problem for simple web apps, enterprise applications frequently need to retrieve, process, and store data.
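The contrast can be sketched in a few lines. The in-memory dict standing in for an external store (a database or cache in practice) is an assumption for illustration only:

```python
# Stateless handler: everything it needs arrives in the request, so any
# replica of the container can serve any request and be killed at will.
def stateless_handler(request: dict) -> dict:
    return {"total": request["price"] * request["quantity"]}

# State pushed to an external store: the container stays disposable while
# the data survives restarts. This dict stands in for a real database.
external_store = {}

def add_to_cart(user_id: str, item: str) -> list:
    cart = external_store.setdefault(user_id, [])
    cart.append(item)
    return cart

print(stateless_handler({"price": 5, "quantity": 3}))
add_to_cart("u1", "book")
print(add_to_cart("u1", "pen"))
```

The design choice is the same one the paragraph describes: keep the container stateless and move the state somewhere persistent, so scaling and redeployment never lose data.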

The modern enterprise app is built on containers that spin up, do their job, isolate runtime failures, and then spin down. This requires careful coordination with many layers of infrastructure and software services. Complexity mushrooms when an application retrieves data from persistent storage through one microservice, performs an operation on that data, and hands it off to another microservice.

Please read: Flood of transient containers challenges network visibility and security

At the same time, enterprise DevOps teams are growing in leaps and bounds, and so are their storage requirements. More and more stateful workloads are run in containers as monolithic applications are refactored and new microservices-based applications are built and deployed. Persistent storage support for containers is a critical issue worth paying attention to because all of the various hardware and software pieces are finally falling into place and adoption is growing quickly. Containers and storage have to play nice in your environment or they will ultimately hold back your entire IT operation.

Fortunately, persistent storage support and state management for containerized applications have improved dramatically in recent years and continue to do so. By implementing container management and microservices technologies that support persistent storage and statefulness, you can bring cloud agility to enterprise apps while managing complexity and mitigating risk.
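In Kubernetes, for example, a containerized application asks for persistent storage declaratively through a PersistentVolumeClaim, and the platform binds it to whatever storage backs that class. This is a minimal sketch; the claim name, size, and the `fast-ssd` storage class are assumptions that depend entirely on your environment:

```yaml
# Minimal sketch: a claim for persistent storage a stateful container can mount.
# "fast-ssd" is an illustrative storage class name; yours depends on your platform.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes:
    - ReadWriteOnce          # mounted read-write by a single node at a time
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
```

The value of the declarative form is exactly the decoupling the paragraph describes: the application states what it needs, and the storage layer underneath can change without touching the container.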

Pay attention to your data strategy

As you bring the cloud experience to your existing applications and data, you need to pay attention to where your data resides. Chances are that some of it is in a cloud or multiple clouds, and some of it is in structured, unstructured, and semi-structured sources on corporate servers. There may be several copies of some of the data. Administrators may create duplicate datasets or subsets because it's too risky to allow user access to the original dataset.

This complexity results from the lack of a comprehensive data strategy, and it can threaten companies by endangering the SLAs they have with customers and partners. When demanding workloads such as machine learning and large analytical queries are running, the afflicted enterprise cannot ensure that scheduled jobs will start and finish on time.

Please read: Data scientists take the mystery out of data fabrics

In contrast, a comprehensive data strategy makes it practical and affordable to run a multipurpose system that takes full advantage of the value of data, bringing useful applications (projects) into production in a timely manner. Analysts, developers, and data scientists are able to work with a comprehensive and consistent collection of data and add new data sources without breaking the bank or overwhelming IT. To achieve this, a data fabric must have certain important capabilities:

  • A global namespace: All data must be available through a single, consistent global namespace, whether it resides in on-premises IT or a public cloud or is distributed at the edge.

  • Multiple protocols and data formats: The data fabric must implement a broad variety of protocols, data formats, and open APIs, including HDFS, POSIX, NFS, S3, REST, JSON, HBase, and Kafka.

  • Automatic policy-based optimization: It must provide a way for the enterprise to specify where data is stored and whether it is in hot, warm, or cold storage.

  • Rapidly scalable distributed data store: Enterprise data needs can grow quickly and precipitously; the data fabric must accommodate that growth, not obstruct it.

  • Multi-tenancy and security: Authentication, authorization, and access control must be enacted in a consistent manner, no matter where the data is or what type of system it runs on.

  • Resiliency at scale: Even under heavy load, the data fabric must provide instant snapshots, and every application must see a consistent view of the data at the moment a snapshot is taken.
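The policy-based optimization capability above can be pictured as a simple age-based tiering rule. The tier names, thresholds, and datasets below are illustrative assumptions, not drawn from any particular product:

```python
# Minimal sketch: route datasets to hot/warm/cold tiers by days since last access.
# Tier names and thresholds are illustrative assumptions.

POLICY = [          # (max_age_days, tier), checked in order
    (7, "hot"),     # touched this week: keep on fast storage
    (90, "warm"),   # touched this quarter: standard storage
]

def tier_for(age_days: int) -> str:
    for max_age, tier in POLICY:
        if age_days <= max_age:
            return tier
    return "cold"   # everything older goes to archival storage

datasets = {"clickstream": 2, "q1-report": 45, "2019-audit": 700}
placement = {name: tier_for(age) for name, age in datasets.items()}
print(placement)  # {'clickstream': 'hot', 'q1-report': 'warm', '2019-audit': 'cold'}
```

A real data fabric applies rules like these automatically and transparently; the application keeps addressing the data through the same namespace regardless of which tier it lands on.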

Ask for help

We all need it. No one succeeds pursuing a cloud transformation on an ad hoc basis. There needs to be a process to determine and execute on the right mix for your organization at a given point in time.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.