Deep learning

What is Deep Learning?

Deep learning is a type of machine learning that uses algorithms meant to function in a manner similar to the human brain.

Related to AI and machine learning

Deep learning is a subset of machine learning (ML), which is itself a subset of artificial intelligence (AI). The concept of AI has been around since the 1950s, with the goal of making computers able to think and reason in a way similar to humans. As part of making machines able to think, ML is focused on how to make them learn without being explicitly programmed. Deep learning goes beyond ML by creating more complex hierarchical models that are meant to mimic how humans learn new information.

 

Neural networks drive deep learning

In the context of AI and ML, a model is a mathematical algorithm that is trained to reach the same result or prediction that a human expert would when provided with the same information. In deep learning, the algorithms are inspired by the structure of the human brain and are known as neural networks. These neural networks are built from layers of interconnected nodes – artificial neurons – designed to learn to recognise patterns in much the same way the human brain and nervous system do.
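The artificial neuron at the heart of these networks can be sketched in a few lines. This is a minimal illustration, not production code: the input values, weights and bias below are made up for the example, and real networks learn these weights from data rather than having them set by hand.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: it sums its weighted inputs, adds a
    bias, and 'fires' through a non-linear (ReLU) activation."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

x = np.array([0.5, -0.2, 0.1])   # incoming signals
w = np.array([0.8, 0.3, -0.5])   # connection strengths (normally learned)
print(neuron(x, w, 0.1))         # a small positive activation
```

A whole network is simply many of these neurons arranged in interconnected layers, with each neuron's output feeding into the neurons of the next layer.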

 

Deep learning is driving the future

Many recent advances in AI were made possible by deep learning. From recommendations on streaming services to voice assistant technologies to autonomous driving, the ability to identify patterns and classify many different types of information is crucial for processing vast amounts of data with little to no human input.

How does deep learning work?

While the original goal for AI was broadly to make machines able to do things that would otherwise require human intelligence, the idea has been refined in the decades since. François Chollet, AI researcher at Google and creator of the machine learning software library Keras, says: “Intelligence is not a skill in itself, it’s not about what you can do, but how well and how efficiently you can learn new things.”1

Deep learning is focused on improving that process of having machines learn new things. With rule-based AI and ML, a data scientist determines the rules and data set features to include in models, which drives how those models operate. With deep learning, the data scientist feeds raw data into an algorithm. The system then analyses that data, without specific rules or features preprogrammed into it. Once the system makes its predictions, they are checked against a separate set of data for accuracy. The level of accuracy of these predictions – or lack thereof – then informs the next set of predictions the system makes.
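That feedback loop – make predictions, check them against held-out data, and let the error drive the next update – can be sketched with a toy example. The task, data and learning rate below are all invented for illustration; real deep learning systems apply the same idea at vastly larger scale.

```python
import numpy as np

# Hypothetical toy task: learn y = 2x + 1 from raw (x, y) pairs.
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + rng.normal(scale=0.05, size=100)

# Hold out a separate set of data to check predictions against.
x_train, y_train = x[:80], y[:80]
x_val, y_val = x[80:], y[80:]

w, b = 0.0, 0.0   # model parameters, learned rather than preprogrammed
lr = 0.1          # learning rate

for epoch in range(200):
    pred = w * x_train + b            # make predictions
    err = pred - y_train              # how wrong were they?
    # The error informs the next update (gradient of mean squared error):
    w -= lr * 2 * np.mean(err * x_train)
    b -= lr * 2 * np.mean(err)

# Accuracy is judged on data the model never trained on.
val_error = np.mean((w * x_val + b - y_val) ** 2)
```

After training, the learned parameters settle close to the true values (w ≈ 2, b ≈ 1), and the validation error shrinks to roughly the noise level of the data.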

“Deep” refers to the many layers in the neural network, with deeper networks able to learn more complex, abstract patterns. Each layer of the network processes its input data in a specific way and passes the result on, so the output from one layer becomes the input for the next.
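That layer-to-layer flow is just repeated function application. The sketch below, with made-up layer sizes and random weights, shows how data passes through a stack of layers, each output feeding the next layer's input:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Illustrative "deep" architecture: 8 inputs -> 16 -> 16 -> 1 output.
sizes = [8, 16, 16, 1]
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

activation = rng.normal(size=8)   # raw input data
for weights, bias in layers:
    # The output of this layer becomes the input to the next.
    activation = relu(activation @ weights + bias)

print(activation.shape)           # a single output value
```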

Training deep learning networks is time consuming and requires large amounts of data to be ingested and tested against as the system gradually refines its model. Neural nets have been around since the 1950s, but only in recent years have both computational power and data storage capabilities advanced to the point where deep learning algorithms can be used to create exciting new technologies. For example, deep learning neural networks have made it possible for computers to carry out tasks such as speech recognition, computer vision, bioinformatics and medical image analysis.

 

1. Lex Fridman Podcast #120, “François Chollet: Measures of Intelligence”, August 2020.

 

Deep learning v machine learning

While all deep learning is machine learning, not all machine learning is deep learning. Both technologies involve training models on data and then testing them against separate data to determine which model best fits. However, traditional machine learning methods require a certain level of human interaction to preprocess the data before the algorithms can be applied.

Machine learning is a subset of artificial intelligence. Its aim is to give computers the ability to learn without being specifically programmed on what output to deliver. The algorithms used by machine learning help the computer learn how to recognise things. This training can be tedious and requires a significant amount of human effort.

Deep learning algorithms go a step further by creating hierarchical models that are meant to mirror our own brain’s thought processes. They use multi-layered neural networks that do not need the input data to be preprocessed in order to produce a result. Data scientists feed the raw data into the algorithm; the system analyses that data based on what it already knows and what it can infer from the new data, and makes a prediction.

The advantage of deep learning is that it can process data in ways that simple rule-based AI cannot. The technology can be used to drive clear business outcomes as diverse as improved fraud detection, increased crop yields, improved accuracy of warehouse inventory control systems, and many others.

 

Current applications of deep learning

Companies in many sectors are applying deep learning models to address a variety of use cases. Below are just a few of the many applications of deep learning in the real world.

Healthcare: Today’s medical industry is generating vast amounts of data. Being able to quickly and accurately analyse this data can contribute to improved patient outcomes in a number of ways. Deep learning algorithms are being applied in areas such as medical research, imaging analytics, disease prevention, guided drug development and natural language processing – which can be especially helpful for filling out free text clinical notes in electronic health records (EHRs).

Manufacturing: Manufacturers need to deliver higher quality products and services faster and with lower costs. Many companies are adopting computer-aided engineering (CAE) to reduce the time, expense and materials needed to develop physical prototypes to test new products. Deep learning can be used to model very complex patterns in multidimensional data and improve the analytics accuracy of testing data.

Financial services: Fraud is a growing problem in many industries but particularly so for financial service providers. Deep learning can be used to identify out-of-pattern behaviour quickly and cost-effectively. Insights delivered from deep learning models can also help to more accurately evaluate the credit risk of a loan applicant, predict share price fluctuations, automate back-office operations and advise clients on financial products.

Public sector: As more departments, systems and processes become digitised, government agencies can use deep learning to increase automation and make civil servants more efficient. Image detection and classification can make it easier for law enforcement to find persons of interest in public spaces. Visa and immigration applications can be streamlined with algorithms to automate certain aspects of processing. Airports are using deep learning to improve security, enhance operations and automate queue management. Deep learning models can even be used to help predict traffic conditions and allow local authorities to take proactive steps to ease road congestion.

 

Accelerate deep learning adoption with HPE

Deep learning is now more accessible than ever before for organisations of any size. From getting started to optimising to scaling, HPE can guide your journey to accelerate data insights that lead to breakthrough innovations. We have the infrastructure building blocks, expertise and access to validated partners to meet your business goals.

Unravel the complexity of deep learning and create your ideal solution. HPE’s industry-leading high-performance compute, intelligent data platforms and high-speed networking fabric allow you to deploy deep learning at any scale. Rapidly move beyond proofs of concept with HPE Pointnext Advisory and Professional Services – providing the expertise and services to accelerate your AI project deployments from years to months to weeks. And gain instant access to the AI tools and data you need using HPE Ezmeral ML Ops, a container-based solution to support every stage of the machine learning life cycle.

The HPE Deep Learning Cookbook provides a set of tools to characterise deep learning workloads and to recommend the optimal hardware/software stack for any given workload. The recommendations presented through our Deep Learning Cookbook are based on a massive collection of performance results for various deep learning workloads on different technology stacks and analytical performance models. The combination of real measurements and analytical performance models enables us to estimate the performance of any workload and to recommend an optimal stack for that workload.

Together with NVIDIA, HPE offers a leading portfolio of optimised AI and deep learning solutions. We enable deep learning through online and instructor-led workshops, reference architectures and benchmarks on NVIDIA GPU accelerated applications. Our solutions are differentiated by proven expertise, the largest deep learning ecosystem, and AI software frameworks.