Every time a new, disruptive technology arrives on the scene, there needs to be a way to ensure that it brings real value to the business. With all of the excitement around artificial intelligence, companies are not only concerned about whether AI is capable of delivering such value, but also where they should begin.
AI developments are coming fast, and the opportunity to apply AI to solve business problems is real. The areas primed for investment and experimentation in the coming year include recommendation systems, automated customer service, fraud analysis, and automated threat intelligence and prevention systems. AI technologies will quickly expand to other areas.
Moreover, by 2021, artificial intelligence will create nearly $3 trillion in business value while recovering more than 6 billion hours of worker productivity. This does not necessarily entail replacing human workers—by 2022, one in five workers engaged in work that is not easily automated will depend on insights gleaned from AI to perform specific tasks.
That’s the promise, at least. But for IT, operations staff, and company management, scaling the AI mountain to grasp such value seems difficult or impractical. Companies fret that they don’t have the resources or know-how to define potential use cases, much less build out an AI pilot.
The journey need not be so daunting. Teams charged with “figuring out AI” for their employers have three things working in their favor:
1. Flexible AI platforms and applications
There are a growing number of out-of-the-box AI solutions available to companies interested in testing the AI waters or building out full-fledged production systems. You’ve probably heard of AI applications designed to tackle specific tasks, such as fraud detection and QA optimization. Moreover, large public cloud players are launching platforms and software tools in the name of democratizing AI. Such tools simplify the implementation of AI applications in commercial settings. They include Amazon SageMaker, Microsoft’s Azure-based AI platform offerings, and the recently announced Cloud AutoML from Google.
2. Access to strategic advice and AI know-how
Companies can not only get help building AI tools and integrating them with existing production systems, but also bring in experts to help with everything from developing an AI strategy aligned with the goals of the business to reskilling and training their teams. This often entails examining how competitors or companies in adjacent verticals are leveraging AI, and then looking into one’s own organization and asking, “Can I do my business better?” Strategic planning can get everyone on the same page in terms of understanding what’s possible with AI and where the most promising opportunities lie.
3. Existing data sources that can be repurposed for AI
Companies may already have the data needed to drive AI algorithms, identify patterns and anomalies, and train advanced machine learning applications. The data may have to be prepared and standardized for training the AI, but having access to the best data is better than designing the best algorithm. Data from analytics and ERP systems, historian data and business archives, and real-time streams from controllers, sensors, and IoT systems can power an AI pilot or serve as the foundation for a full-fledged rollout. In other words, the underutilized data already present in your company may become the basis of future AI applications that deliver insights, activate new automated processes, and boost overall efficiency.
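To make the preparation step concrete, here is a minimal sketch of what standardizing existing historian readings for training might look like. The field values and data are invented for illustration; real pipelines would pull from the analytics, ERP, or sensor systems described above.

```python
import statistics

def standardize(readings):
    """Z-score standardize raw sensor readings so features recorded
    by different instruments share a common scale before training."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        # A constant signal carries no variation to learn from
        return [0.0 for _ in readings]
    return [(r - mean) / stdev for r in readings]

# Hypothetical historian data: line temperatures in degrees Celsius
temps = [402.1, 398.7, 405.3, 401.2, 610.9]  # final value is an outlier
scaled = standardize(temps)
```

Even this trivial transformation illustrates the point in the text: the value lies less in the algorithm than in getting the data you already have into a usable, consistent form.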
The glut of data generated by analytics, IoT, and other IT and OT systems has long been regarded as a problem. As sensors, applications, mobile devices, and networked systems proliferate, and vendors develop ever-more advanced network and storage systems, the amount of data that companies have to manage has exploded. IDC predicts 44 zettabytes of digital data will be created by the year 2020, up from 4.4 zettabytes five years ago.
According to a recent report by the McKinsey Global Institute, “While the volume of available data has grown exponentially in recent years, most companies are capturing only a fraction of the potential value in terms of revenue and profit gains.” Companies often use a relatively small amount of the available data to drive production systems or gain insights into operations, leaving the rest to be archived, discarded, or ignored.
Whether it’s basic business analytics, historian series, or streaming data from equipment on the plant floor, the fact that data is available doesn’t mean it should become the basis of an AI pilot. The AI conversation should instead start with potential use cases—discrete steps that define a process or address a problem tied to a specific business outcome. Business, data, and technology teams all need to be in the same room, sharing a common language to describe where they intend to go with AI. At the end of the day, a common vision needs to guide stakeholders in this discussion.
Once the team identifies the use cases, it can begin planning how to apply the data that has been collected over the years to build out those capabilities. Depending on the situation, a pilot may start with experimentation or a proof of concept to validate the use case and the data. The team will also need to identify the talent required to carry out the pilot, and potentially bring in external domain expertise to partner closely with internal staff.
How is this approach being applied to use cases in the real world? Prescriptive analytics is a fast-growing area of business AI and is used at a wide variety of industrial companies, white-collar firms, and public agencies. In the past, such organizations have been reactive when it comes to replacing machinery, dealing with customer service complaints, or addressing other needs. Prescriptive analytics not only enables companies to identify issues requiring action but also automates those actions.
A prescriptive analytics application can help companies deal with problems before they escalate to full-blown crises. The AI algorithms determine optimal solutions by combing through real-time data from equipment and applications and additional business data, and then automate the responses.
For instance, a traditional steel foundry may have to shut down a production line in the event of equipment failures. But by feeding temperature readings, vibration indicators, output measures, historian, and other data sources into a prescriptive analytics application, the foundry can identify which pieces of equipment are more likely to fail. The system can automatically reduce the use of that equipment and schedule maintenance before a fault takes down the whole line.
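The foundry scenario can be sketched as a toy prescriptive rule: combine temperature and vibration signals into a failure-risk score and map the score to an action. The thresholds, field names, and units here are purely illustrative assumptions, not drawn from any real foundry system—a production application would learn such thresholds from historical failure data.

```python
def risk_score(temp_c, vibration_mm_s, temp_limit=450.0, vib_limit=7.1):
    """Average the fraction of each operating limit consumed,
    yielding a rough 0..1+ failure-risk score."""
    return (temp_c / temp_limit + vibration_mm_s / vib_limit) / 2

def prescribe(score):
    """Map a risk score to a prescribed action."""
    if score >= 1.0:
        return "shut down line"
    if score >= 0.8:
        return "reduce load and schedule maintenance"
    return "continue normal operation"

# A furnace running hot with elevated vibration triggers early maintenance
action = prescribe(risk_score(temp_c=430.0, vibration_mm_s=6.5))
```

The key design point is the last branch: instead of waiting for the score to hit 1.0 (a failure condition that shuts down the line), the system acts at 0.8, automatically reducing use of the at-risk equipment and scheduling maintenance first.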
For such projects, data sources are often already available to feed into the AI, frequently via turnkey components that can ingest data from a variety of sources. These include structured and unstructured data—everything from Excel spreadsheets and online geodata to enterprise-class Hadoop, cloud, and SQL databases.
New sources may need to be added to the system. For instance, legacy equipment at a factory might be retrofitted with up-to-date temperature sensors, or a public safety agency may have to incorporate remote camera feeds to take advantage of algorithms used to identify potential safety issues in public spaces.
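One recurring chore when mixing legacy spreadsheets with newly retrofitted sensors is normalizing records into a single schema. The sketch below assumes invented field names and units (including a hypothetical retrofit sensor that reports tenths of a degree) purely to illustrate the adapter pattern involved:

```python
def from_spreadsheet(row):
    """Adapt a legacy spreadsheet row (strings, human-readable headers)."""
    return {"asset": row["Equipment ID"], "temp_c": float(row["Temp (C)"])}

def from_sensor_feed(msg):
    """Adapt a retrofit sensor message; this hypothetical device
    reports temperature in tenths of a degree Celsius."""
    return {"asset": msg["id"], "temp_c": msg["t_decicelsius"] / 10.0}

# Records from both sources land in one common schema
records = [
    from_spreadsheet({"Equipment ID": "FURNACE-3", "Temp (C)": "412.5"}),
    from_sensor_feed({"id": "FURNACE-7", "t_decicelsius": 4981}),
]
```

Once every source speaks the same schema, downstream analytics and model training no longer need to know where a reading came from.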
In the next five years, artificial intelligence will impact businesses in profound ways. Much like the early days of the World Wide Web, we’re at the stage in which AI technologies seem new and often difficult to grasp. Indeed, it can be very difficult to go on the AI journey alone. But the good news is that tools, expert help, and even critical data sources can be readily accessed—and enable companies to deliver true business value from AI.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.