Your edge, your future
There's no turning back, and no reason anyone would want to: The edge, bursting with useful data, is the future. It's becoming the dominant source of all enterprise data. Its compute capabilities, combined with ever-advancing artificial intelligence, are growing at a fast clip. Factor in the economic forces at play (the expectation that computers will do more from afar, whether at a manufacturing site, a retail store, or inside your car) and we have an ever-evolving world of possibilities.
The upshot is that there doubtless will be far more aggregate computing happening at the several layers of the edge than exists in the data center today. There will also likely be as much networking, albeit in a different form. In fact, five years from now, we'll look back at this pivotal time in enterprise computing and it will seem intuitively obvious that IT organizations had to move their focus from the data centers that dominated corporate computing for five decades.
There is a parallel of sorts. The Internet technologies commercialized during the dot-com boom in the late 1990s and early 2000s forever changed the scale of compute, storage, and networking in the data center as well as the technologies deployed to create that scale. Eventually, big data entered the picture and, combined with statistical and neural network software, allowed machine learning—envisioned three decades ago but impossible with the small datasets and puny parallel processors that existed until about a decade ago—to actually work.
With ML and AI, everything changes. Devices of all kinds gather up telemetry so they can help manage all aspects of themselves and deliver insights that compel either our action or that of another device. Put another way, all this alters the nature of the relationships that companies—and people—have with the world around them.
The edge is about scope. Instead of massive banks of compute and storage encapsulated in a data center, we have a swarm of computing elements, orders of magnitude more of them, right where the real world is happening. Pushing this IT infrastructure to the edge is necessary because too much data is generated "out there" for centralized processing to keep up; relying on data centers becomes costly and ineffective because things simply wouldn't happen fast enough.
The edge is about action
The edge will have precisely enough ML intelligence to turn live data streams, not large datasets, into some sort of work. The edge is about action, propelled by machine learning intelligence that was initially constructed in the data center and set free to roam the networks being stretched, quite literally, to cover the entire world. The edge devices may do their inferring and, perhaps someday soon, their own training right where the world is happening.
We are on the precipice of yet another technological era. Today, of course, companies are not starting from scratch as they did during the initial computerization of the back office; there is no time for that. The people running companies need actionable insight, as the common phrase goes.
Everything outside of the data center—billions and perhaps someday trillions of devices—is being equipped with monitors of all kinds and networked to AI-enhanced compute. Data is generated in massive streams at speeds far too great for human beings to process.
But human capabilities aren't the point; the machines can run the show. These edge systems are about taking action, right there, right now. And that requires a new architecture, a new way of thinking, and expertise.
Intelligently building your intelligent edge
It's relatively easy to drop what amounts to a baby data center in a remote location. But once you have more than a dozen applications running in a remote location, and many remote locations on top of that, IT organizations have to formalize the people, processes, and technologies involved.
Take the retail industry, which has had remote processing for running transactions and managing inventory for decades. Now, retailers are installing video cameras in their facilities to do pattern recognition and look for fraud within store aisles or at the self-checkout stand. The systems doing that work are specialized servers running AI workloads, accelerated by GPUs. There are about a half-dozen different pattern recognition applications, plus other sensing applications for asset management (including inventory and people), plus the point-of-sale applications already in place.
And all of the processes employed in the management of centralized IT—software release management, remote configuration and monitoring of hardware, change ticketing for tech support, remediation, and on and on—have to be applied to the edge.
That's why the edge is a more diffuse and potentially larger-scale problem than the data center, and the issues are more complex. There will also be many layers of edge: infrastructure out in the real world at the literal interface between people and things, but also points of presence and other kinds of aggregation edges that are not really part of the data center at all. All of this can catch IT organizations off guard if they just dive in.
The cultural shift will be just as jarring. Building data center infrastructure as we all know it—dense servers clustered to deliver scalable compute and storage over closed networks spanning a data center plus some legacy monolithic systems—will become an esoteric art as the focus of IT shifts from the data center to the edge.
The problem that IT organizations face will shift from "How do you stand up a room full of infrastructure to run 2,500 different applications?" to "How do you automate all kinds of interactions with people and machines and facilitate other kinds of infrastructure and deliver an experience or manage a process, end to end?"
Here is the other tricky bit. All edges are relative. Running some aspects of infrastructure in an intermediate place like a public cloud—perhaps in an aggregation layer with lots of analytics—makes it an aggregation edge relative to the further edge out there in the world. With the public clouds having hundreds of regions and then additional points of presence feeding into them, it is reasonable to expect that some of this capacity will be deployed as aggregation edges for expediency and data sovereignty reasons.
However, latency requirements for processing data in real time or near real time mean that compute should reside at the farthest edge. Nor is there enough bandwidth in the world, not even in future 6G networks, to physically move all data back up to a centralized data center for processing. That demands that storage be local and that some data be deleted after a period of time.
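A quick back-of-envelope sketch shows why backhauling everything doesn't scale. All of the figures below (camera counts, store counts, per-stream bitrates) are illustrative assumptions, not measurements from any real deployment:

```python
# Back-of-envelope: raw data from a fleet of edge cameras vs. central backhaul.
# Every figure here is an illustrative assumption.

CAMERAS_PER_STORE = 40   # assumed cameras per retail site
STORES = 2_000           # assumed number of sites in the chain
MBPS_PER_CAMERA = 4      # assumed compressed video stream, ~4 Mbit/s each

total_mbps = CAMERAS_PER_STORE * STORES * MBPS_PER_CAMERA
total_gbps = total_mbps / 1_000

# Daily volume if everything were shipped to a central data center:
# Mbit/s -> MB/s -> MB/day -> TB/day
daily_tb = total_mbps / 8 * 86_400 / 1_000_000

print(f"Aggregate stream: {total_gbps:,.0f} Gbit/s")    # → 320 Gbit/s
print(f"Daily backhaul volume: {daily_tb:,.0f} TB/day") # → 3,456 TB/day
```

Even with these modest assumptions, one retail chain would be pushing petabytes to the data center every few days, which is why processing and storage stay local and only summaries or exceptions travel upstream.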
Back to the video surveillance in the self-checkout line at a retailer. If someone scans the wrong barcode on an item, or forgets to scan one at all, the retailer wants to know immediately so an employee monitoring the queues can step in and fix the issue. That has to happen in one second, maybe two. Sending a video stream, even at 10 frames per second, up to a public cloud region can take tens of seconds or longer; even if processing there were fast given the scale of the infrastructure, the shopper would be driving home before the retailer figured it out.
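Rough arithmetic makes the round-trip problem concrete. The frame sizes, uplink speed, and round-trip time below are assumed values for illustration only:

```python
# Rough latency budget for shipping self-checkout video to a distant cloud region.
# Every figure here is an illustrative assumption.

FRAMES_PER_SEC = 10
KB_PER_FRAME = 200    # assumed compressed frame size
UPLINK_MBPS = 10      # assumed store broadband uplink (shared by all lanes)
CLIP_SECONDS = 3      # assumed clip length needed for useful inference
ROUND_TRIP_MS = 80    # assumed WAN round trip to the cloud region

clip_megabits = FRAMES_PER_SEC * KB_PER_FRAME * CLIP_SECONDS * 8 / 1_000
upload_seconds = clip_megabits / UPLINK_MBPS
total_seconds = upload_seconds + ROUND_TRIP_MS / 1_000

print(f"Upload alone: {upload_seconds:.1f} s")            # → 4.8 s
print(f"Total before any inference: {total_seconds:.1f} s")  # → 4.9 s
```

Even under these generous assumptions (a single lane with the uplink to itself, no queueing, no cloud-side processing time), the budget blows past a one-to-two-second window before a single frame is analyzed. Inference at the edge sidesteps the upload entirely.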