Exploring what’s next in tech – Insights, information, and ideas for today’s IT and business leaders

5 most critical IT trends that will dictate 2017

IT organizations must become much more sophisticated at leveraging the cloud, smartly deploying remote data centers, and getting more fine-grained in their risk assessment.

Between cloud, analytics, edge computing, and new storage technologies, you’ll see no lack of 2017 predictions out there. We’ve spiced ours up with some pointed advice about how to hit the ground running in five critical areas.


1. 'On-premises vs. off-premises' will no longer apply

Simplistic binary arguments about "on-premises vs. off-premises" are over. The answer is both; it will be a hybrid IT world. The agility advantages of the cloud are so overwhelming that almost every enterprise will host some applications or data off-site, even if it’s “only” software as a service such as collaboration or customer relationship management. 451 Research’s latest "Voice of the Enterprise: Cloud Transformation" survey of IT buyers indicates that 41 percent of all enterprise workloads are running in some type of public or private cloud, a number expected to rise to 60 percent by mid-2018.

The questions to ask in 2017 are which specific applications and data should be hosted on premises or on specific cloud platforms, and how best to use software-defined infrastructure and automation to rapidly roll out applications to meet new needs in areas such as mobile and social. Achieving those goals requires a clear set of operational processes and infrastructure requirements.

To answer the “what to host where” question, you can develop your own (or leverage third-party) methodologies and formulas for a workload-by-workload understanding of your data, performance, and security needs, and the capabilities of competing cloud providers. Without this groundwork, you risk sticker shock from unexpected cloud expenses, as well as performance issues.
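One way to make that workload-by-workload assessment concrete is a weighted scoring model. The sketch below is illustrative only: the criteria, weights, and venue scores are made-up assumptions, not a vetted methodology or real provider data.

```python
# Illustrative workload-placement scoring sketch (all numbers are made up).
# Each workload weights the criteria it cares about; each hosting venue is
# scored 1-5 per criterion. Higher weighted sum = better fit.

def placement_score(weights, venue_scores):
    """Weighted sum of a venue's criterion scores for one workload."""
    return sum(weights[c] * venue_scores[c] for c in weights)

# Hypothetical workload: a latency-sensitive CRM analytics app.
weights = {"performance": 0.4, "security": 0.3, "cost": 0.2, "data_gravity": 0.1}

venues = {
    "on_prem":      {"performance": 5, "security": 5, "cost": 2, "data_gravity": 5},
    "public_cloud": {"performance": 3, "security": 4, "cost": 4, "data_gravity": 2},
}

best = max(venues, key=lambda v: placement_score(weights, venues[v]))
print(best)  # for these made-up numbers: on_prem
```

The value of a model like this is less the arithmetic than the discipline: it forces an explicit, comparable statement of each workload's needs before any migration decision, which is exactly the groundwork that avoids sticker shock later.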

The most innovative organizations will become expert at new approaches such as the lightweight virtualization provided by containers. By encapsulating only the application, rather than an entire operating system, containers reduce the cost, time, and complexity of rolling out “experimental” applications.

Software-defined infrastructure, which uses automation and scripts to quickly spin up (and spin down) resources as needed, allows organizations to innovate quickly, fail quickly, and revamp applications without paying for expensive fixed infrastructure.
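The core of that spin-up/spin-down idea is that capacity follows demand instead of being fixed up front. The following sketch models it in miniature; the `ElasticPool` class and its provision/teardown steps are hypothetical stand-ins for real automation tooling such as cloud APIs or infrastructure-as-code scripts.

```python
# Minimal sketch of software-defined elasticity: reconcile actual capacity
# to current demand. The provision/teardown steps are placeholders for
# real automation (cloud APIs, infrastructure-as-code), not a real system.

class ElasticPool:
    def __init__(self, capacity_per_node=100):
        self.capacity_per_node = capacity_per_node
        self.nodes = 0

    def reconcile(self, demand):
        """Scale to the smallest node count that covers current demand."""
        needed = -(-demand // self.capacity_per_node)  # ceiling division
        while self.nodes < needed:
            self.nodes += 1          # provision() in a real system
        while self.nodes > needed:
            self.nodes -= 1          # teardown() in a real system
        return self.nodes

pool = ElasticPool()
print(pool.reconcile(250))  # demand burst: scales up to 3 nodes
print(pool.reconcile(50))   # quiet period: scales back down to 1
```

A "fail fast" experiment in this model costs only the nodes it holds while it runs; tearing the pool back down is the same reconcile step, which is what makes experimentation affordable compared with fixed infrastructure.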

The discussion should also extend to implementing new financing models for on-premises infrastructure, adapting traditional purchase cycles and procurement processes to make on-prem infrastructure as flexible, elastic, and OpEx-friendly as public cloud options. Such offerings could help customers achieve cloud-like cost savings for infrastructure they decide to keep in-house for performance, security, control, or other reasons.

2. Edge computing will reshape data centers

Much as mobile devices reshaped end-user computing, the rise of remote IT and Internet of Things applications will extend the edges of the data center to the periphery of the corporate network and beyond. The reasons are performance and the need to react to changing conditions in real time.

Think of location data from hundreds of vehicles triggering a shift in traffic signals to ease congestion, or performance data from sensors in a locomotive signaling a possible breakdown. In each case, collecting and analyzing the data at the edge where it is generated, rather than waiting for it to be processed at a central site, is the only way to trigger action at digital speed.  
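The locomotive example comes down to filtering and acting on data where it is generated, so only actionable events cross the network to the core. The sketch below illustrates that pattern; the sensor fields and the vibration threshold are invented for illustration, not taken from any real telemetry system.

```python
# Sketch of edge-side analytics: inspect sensor readings locally and
# forward only actionable events to the core site, instead of shipping
# every raw reading upstream. Field names and threshold are illustrative.

VIBRATION_LIMIT = 7.0  # hypothetical alarm threshold

def filter_at_edge(readings, limit=VIBRATION_LIMIT):
    """Return only the readings the core actually needs to act on."""
    return [r for r in readings if r["vibration"] > limit]

readings = [
    {"sensor": "axle-1", "vibration": 2.1},
    {"sensor": "axle-2", "vibration": 9.4},  # possible breakdown
    {"sensor": "axle-3", "vibration": 3.3},
]

to_core = filter_at_edge(readings)
print(to_core)  # only the axle-2 alert crosses the wire
```

Besides the latency win, this cuts the volume of data moved from edge to core, which is one of the architecture questions (data movement and placement) the next paragraph raises.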

Nearly two out of three IT professionals surveyed by Green House Data have deployed, or plan to deploy, an edge data center in the next 12 months. Organizations that built their infrastructures around more centralized data storage and analytics will need to think through their architectures and processes. These include the type of server, storage, and network infrastructure required at the edge vs. the core, the management of those edge devices, and the analysis and movement of data from edge to core.

3. Security will matter more than ever

In digital organizations, IT often is the business, or at least its face to the customer. With the size, scale, and variety of applications, services, and data in play, any threat to the IT infrastructure could put the entire enterprise at risk. In 2017, organizations will learn the only way to properly identify and mitigate vulnerabilities is to make risk management an early and integral part of every conversation about IT.

While a seemingly endless stream of breaches will keep security at center stage, smart IT leaders will think beyond hackers to anything that could endanger their applications or data. This includes data protection and the high-availability requirements of business-critical applications.

As more workloads move to the cloud, organizations can take heart that the level of security and overall risk management available from cloud providers is constantly improving. But they must still take responsibility for their own infrastructure, and assess whether any given environment (cloud, on-prem, or hybrid) meets the needs of any specific application or dataset.

In all areas of risk, follow the best practice of doing a very granular and precise evaluation of the specific needs of every application and workload. Then take the same detailed approach to understanding the level of risk mitigation available from every cloud provider, including which APIs they offer to give customers precise control over security and other areas, and the costs of added performance or risk management capabilities.

4. The next wave of flash will solve new problems

Flash was first used for applications that needed extreme performance and were critical enough to justify its higher cost. The second wave of flash was inexpensive enough to be affordable for mainstream apps. We’re now on the cusp of the third wave. With the rise of Non-Volatile Memory Express (NVMe) and the expected shipment of next-gen storage-class memory, customers will be able to use flash to solve new problems. 

Storage-class memory accessed over NVMe combines RAM-like speed and low latency with persistence: data survives even when power to the system is turned off or the system is rebooted. One near-term benefit: The use of “in-memory” computing to boost performance will expand beyond its traditional niche of databases into other applications, such as big data analytics at the edge of the network, and even artificial intelligence (AI) to drive compelling new customer services.

Expect the earliest adoption to come as internal storage on servers. This will bring much higher performance than earlier solid-state drives, whose throughput was limited by the SATA storage interface. Mainstream adoption of NVMe on all-flash arrays, already the fastest-growing segment of the market, will require extending NVMe support to front- and back-end storage fabrics.

In 2017, look for progress on other associated ecosystem components, such as file systems that are aware of persistent memory, operating system support for storage-class memory, and processors designed to use both DRAM and newer storage-class memory. Also look for support from vendors such as Microsoft for memory-level zero-copy capabilities that speed data storage by allowing the operating system to write data without the need for application-level IO operations. Microsoft is working on adding storage-class memory support to Windows, providing such zero-copy access. Note, however, that this will require new types of drivers and the loss of some capabilities, such as encryption and compression.
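The zero-copy idea can be approximated today with ordinary memory-mapped files: the application writes into mapped memory and the operating system persists the bytes without explicit application-level I/O calls. The Python sketch below is an analogy under that assumption, not the Windows storage-class-memory API; with real persistent memory, the stores would land in persistent media directly rather than in a file.

```python
# Analogy for memory-level zero-copy: write data through a memory mapping
# rather than issuing application-level I/O calls. An ordinary file stands
# in for persistent media here; this is NOT the Windows SCM driver model.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # reserve a "persistence region"

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"              # a plain memory store, no write() call
        mem.flush()                      # ~ flushing CPU caches to media

with open(path, "rb") as f:
    print(f.read(5))                     # the data persisted: b'hello'
```

The trade-off the article notes follows naturally from this model: because the OS is no longer in the data path for each write, services the I/O stack normally provides along that path, such as encryption and compression, are bypassed.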

5. Big data will come of age

Big data—the analysis of very large volumes, varieties, and velocities of data—was supposed to drive revolutionary business insights. It still can, and will, as we’re on the cusp of new analytic capabilities using information from devices on the Internet of Things, customers, our supply chains, and our internal IT networks. Big data analytics is also the precursor to true AI, delivering real-time, machine-learning answers to natural-language queries.

But big data is no longer in the honeymoon period. Too many customers are still kicking the tires with too few tangible results. 2017 is the year to stop doing one-off big data experiments in hopes of striking pay dirt. The most successful companies will operationalize and industrialize analytics, from initial data discovery through to embedding predictive analytics into business operations, applications, and machines. This includes the ability to spin big data platforms and infrastructure up and down quickly and cost-effectively for data acquisition, management, integration, and data quality.
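What "operationalized" means in practice is that acquisition, data quality, and scoring run as one repeatable pipeline rather than a one-off notebook experiment. The stages, field names, and threshold rule below are illustrative placeholders for real ingestion feeds and a real predictive model.

```python
# Sketch of an operationalized analytics pipeline: every stage is a
# repeatable function, so the whole flow can be rerun, scheduled, and
# tested. All data, fields, and rules here are illustrative placeholders.

def acquire():
    # Stand-in for ingestion from IoT devices, supply chains, IT networks.
    return [{"id": 1, "usage": 120}, {"id": 2, "usage": None}, {"id": 3, "usage": 300}]

def quality_check(records):
    # Drop records that would poison downstream models.
    return [r for r in records if r["usage"] is not None]

def score(records, threshold=200):
    # Stand-in for an embedded predictive model.
    return [{**r, "flagged": r["usage"] > threshold} for r in records]

def pipeline():
    return score(quality_check(acquire()))

for row in pipeline():
    print(row)
```

The point of the structure is that data quality is enforced inside the pipeline, not as a manual cleanup step, which is what lets the same analytics be embedded into operations, applications, and machines.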

Moving to operational excellence

Success (or even survival) in 2017 requires moving at digital speed. That means driving business insights and expanding IT beyond the walls of the data center while minimizing cost and risk. IT organizations must become much more sophisticated at leveraging the cloud (on prem and off), smartly deploying remote data centers, and getting more fine-grained in their risk assessment. They must also make effective use of new technologies such as NVMe and next-gen storage-class memory, and move big data beyond hopeful experiments to tried-and-true operational excellence.

This might sound like a daunting list, but the clock is ticking and your competitors are working hard on these must-have priorities. Are you?

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.