The expansion of choice in the server space is the single biggest change in IT in the past decade. Yet corporate IT organizations have a tendency to keep doing whatever they did before. For organizations that existed in the 20th century, that means maintaining a data center with physical servers, perhaps with an overlay of virtualization.
Newer organizations, like Netflix or online procurement vendor Coupa, are more likely to build their infrastructure entirely on the public cloud. Meanwhile, traditional organizations increasingly want the flexibility of the public cloud. And as startups grow, they start to need classic infrastructure: internal databases, corporate sign-in, shared file systems, and so on.
In this hybrid world, IT stops being a provider of a single service and instead brokers a variety of options for internal customers. Here's a quick guide to getting started as a hybrid IT service broker.
Most IT organizations already support the business in some way, just with limited options: more of whatever they did before. When the business wants to expand into new territory, IT has two choices: It can remain passive, or it can get involved in the process by acting as the actual broker. Scott Kwilinski, director of professional services at Sharp Electronics, points out that "many business units simply don't want to deal with IT. This is usually the result of IT not meeting business needs." So instead of working with central IT, the business routes around it.
As a result, IT is sometimes forced to support some monstrosity it did not create. Worse, perhaps, IT can become irrelevant. Different business units make their own deals, leading to SaaS and IaaS tools that don't interoperate, single points of failure, knowledge silos, redundancy, and complexity. Sometimes an unvetted provider will go out of business entirely, and the technology group will need to scramble to find another provider to ward off the failure of the software, or even of the line of business.
Luckily, there's a better way. Instead of ignoring the problem or forcing the company to use existing solutions, the technology group can act as the intermediary, helping to select and approve each new service and negotiating to bring it online. Once the service is working for the company, it can become a standard, get a service-level agreement, and eventually transition to self-service in some cases.
The best time for IT to get specification and change requests from the business units is during a new project's feasibility, proof-of-concept, or business-case phases. The sponsor will have some budget, some idea of what they need, and likely some idea of the service they would like to use. IT works with the business to create concrete requirements, along with expectations and success criteria. From there, a technical lead narrows down the choices and creates a proof of concept, to show the business what using the service will be like.
Anticipating this need, IT can create a private cloud service, with a clear understanding of the limits of that infrastructure and the costs of scaling up. The great advantage of anticipation is that IT can create the cloud right now, for the existing need for virtual servers in the data center. For external vendors, IT compares vendor offerings with existing solutions and payment options.
For example, IT might already support Amazon Web Services (AWS) or Microsoft Azure. One of them might be the right vendor for the project, but how will the team handle billing for the different business units? Will IT pay one consolidated bill and then back-bill each business unit? Does IT need to create multiple accounts at a public cloud provider?
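One common answer is consolidated billing with internal chargeback: IT pays one bill, tags every resource with its business unit, and back-bills from the tagged line items. Here is a minimal sketch of that chargeback step in Python. The line-item format and tag names are hypothetical; real billing exports (such as AWS Cost and Usage Reports) are far richer, but the grouping logic is the same.

```python
from collections import defaultdict

# Hypothetical billing line items exported from a cloud provider.
# Untagged spend is charged back to central IT by default.
line_items = [
    {"service": "EC2", "cost": 1200.00, "tags": {"business_unit": "sales"}},
    {"service": "S3",  "cost": 310.50,  "tags": {"business_unit": "marketing"}},
    {"service": "RDS", "cost": 845.25,  "tags": {"business_unit": "sales"}},
    {"service": "EC2", "cost": 99.99,   "tags": {}},  # untagged
]

def chargeback(items):
    """Summarize cost per business unit so IT can back-bill each one."""
    totals = defaultdict(float)
    for item in items:
        unit = item["tags"].get("business_unit", "central-it")
        totals[unit] += item["cost"]
    return dict(totals)

print(chargeback(line_items))
```

The alternative, separate provider accounts per business unit, makes billing trivial but fragments volume discounts and governance, which is exactly the trade-off IT is positioned to broker.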
These are not idle questions, and they are unlikely to be nailed down as requirements at the start of the process. If IT can work with the business as a consultant and partner, with a vision for the project in mind from the beginning, it will be able to help steer the project to success.
In some cases, your legal department may need to be involved in the contract process. With modern platform- and software-as-a-service offerings, the review is more likely to cover technical issues and needs analysis. Once the review is done, the team can write the salient details of the service on a wiki or intranet page: how to get started, cost comparison, known issues, best for, worst for, and so on. The next time a question comes in from the business, IT will have reference points to help guide the conversation.
Like boat or business brokers, IT service brokers see customers with significant budgets who want very different things. Vendors sometimes treat unsophisticated customers badly. Waste and complexity result when business units all select their own vendors, and as mentioned before, IT typically has to pick up the pieces.
Once the first few systems are supported, it is time to institutionalize the brokerage. The ultimate goal is to provide self-service capability to the business units. That's when things get tricky.
Modern web applications are more than just a web server. They typically consist of several services stitched together. In Gmail, for example, login, tag, search, get details, and opening images and attachments are all separate services. For a wholesale company, the key elements might be catalog, pricing, search, checkout, and availability services.
With an automated strategy, the very servers these services run on may change over time. Typically, instead of an IP address, these services run on a subdomain or alias. That alias is changed in real time by the self-service software, which we'll call the service broker. The alias can change each time a new version appears.
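The alias mechanism can be sketched in a few lines. This is an illustrative model, not any specific product's API: the broker holds a mapping from a stable alias to whatever backend currently serves it, and repointing the alias is how a new version goes live. The host names are invented for the example.

```python
class ServiceBroker:
    """Toy model of alias-based routing: services are addressed by a
    stable alias, and the broker repoints the alias as backends change."""

    def __init__(self):
        self._aliases = {}

    def register(self, alias, backend):
        # Point an alias (e.g., 'pricing.internal') at a backend host.
        self._aliases[alias] = backend

    def resolve(self, alias):
        return self._aliases[alias]

broker = ServiceBroker()
broker.register("pricing.internal", "pricing-v1.dc.example.com")
print(broker.resolve("pricing.internal"))  # pricing-v1.dc.example.com

# A new version appears; the broker repoints the alias in real time.
# Callers keep using the same alias and never notice the cutover.
broker.register("pricing.internal", "pricing-v2.cloud.example.com")
print(broker.resolve("pricing.internal"))  # pricing-v2.cloud.example.com
```

In production this mapping typically lives in DNS (a CNAME or a short-TTL record) or in a service registry, but the broker's job is the same: keep the alias stable while the backend moves.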
Service brokers can move data in a number of ways. The easiest option is to create a sort of virtual address space. Applications read and write to the service broker, and the software figures out where the data is stored through back-end tables. This remains transparent to the application, even if the back end changes; you just need a script to migrate the data. Getting the data to move automatically when space runs out will require either a fair bit of custom code or reliance on an external vendor. In some cases, that capability can be built into the physical servers, if the physical, private server is compatible with a specific public cloud offering.
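The virtual-address-space idea can be made concrete with a small sketch. This is an assumption-laden toy, not a real storage product: the store names and prefix-based routing rule are invented, and real brokers route on far richer metadata. The point is that applications only ever talk to the broker, so moving data means copying it and updating the routing table.

```python
class StorageBroker:
    """Toy virtual address space: a back-end table maps key prefixes to
    backing stores, so applications never know where data actually lives."""

    def __init__(self):
        self._routes = {}  # key prefix -> backing store name
        self._stores = {"local-san": {}, "public-cloud": {}}

    def route(self, prefix, store):
        self._routes[prefix] = store

    def _store_for(self, key):
        for prefix, store in self._routes.items():
            if key.startswith(prefix):
                return self._stores[store]
        return self._stores["local-san"]  # default store

    def write(self, key, value):
        self._store_for(key)[key] = value

    def read(self, key):
        return self._store_for(key)[key]

broker = StorageBroker()
broker.route("archive/", "public-cloud")
broker.write("orders/1001", "open order")    # lands on the default SAN
broker.write("archive/2019", "old records")  # transparently lands in the cloud
print(broker.read("archive/2019"))
```

Migrating a data set then reduces to copying the keys and changing one routing entry, with the application none the wiser.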
Finally, there is user migration, which means migrating the server that the user runs programs on. This could be because the server is overloaded or, perhaps, because the user is directed to run a specific version of the software. Company employees and friends and family, for example, might "drink their own champagne" and run an advanced beta version of the software. Regular customers run the current production build. Government users and other risk-averse clients can run a release that is a month or two old and has the kinks worked out.
In this case, migrating users is as simple as keeping a registry of which users run which versions of the software, along with which servers are running that software. A software gate, something like a load balancer, directs the users to the correct server. Of course, this process needs to remain invisible to the end user.
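The registry-plus-gate pattern described above can be sketched as follows. The user classes, version names, and host names are all illustrative; a real gate would look up the user's class from an identity system rather than a hard-coded table, and would health-check servers rather than blindly round-robin.

```python
import itertools

# Registry: which class of user runs which version of the software.
VERSION_FOR_USER = {
    "employee": "beta",          # drinks their own champagne
    "customer": "production",    # current production build
    "government": "stable",      # a month or two old, kinks worked out
}

# Registry: which servers are running each version.
SERVERS_FOR_VERSION = {
    "beta": ["app-beta-1.example.com"],
    "production": ["app-prod-1.example.com", "app-prod-2.example.com"],
    "stable": ["app-stable-1.example.com"],
}

# Round-robin within a version, like a simple load balancer.
_cycles = {v: itertools.cycle(s) for v, s in SERVERS_FOR_VERSION.items()}

def route(user_class):
    """The software gate: send a user to a server running their version."""
    version = VERSION_FOR_USER.get(user_class, "production")
    return next(_cycles[version])

print(route("customer"))    # app-prod-1.example.com
print(route("customer"))    # app-prod-2.example.com
print(route("government"))  # app-stable-1.example.com
```

Because the caller only ever asks the gate for a server, repointing a user class at a different version is a one-line registry change, invisible to the end user.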
If your organization has one or more data centers, you'll be familiar with issues like tracking, version control, production releases, and production support. Moving to a hybrid strategy with multiple platforms and vendors can worsen this problem. But it doesn't have to be that way.
First, manage the change through processes that are centralized within IT. Second, consider a software layer, sometimes called a "facade," that sits between the users and the servers, providing a single, unified view and enabling self-service.
The term for this is "cloud service broker," and you might not need one. But if you do, you'll know.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.