What is Deep Learning?

Deep learning is a type of machine learning in which computers build large artificial neural networks, layered structures loosely modeled on the networks of neurons in the human brain.

Deep learning definition

In deep learning, large artificial neural networks are trained by learning algorithms on ever-increasing amounts of data, improving their ability to "think" and "learn" the more data they process. "Deep" refers to the many layers of processing stacked between a network's input and its output; additional layers let the network learn increasingly abstract representations of its data, though deeper networks also require more data and computing power to train. While most deep learning today is supervised, relying on data labeled by humans, the aim is to create neural networks that can train themselves and "learn" with little or no human guidance.
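To make the idea of stacked layers and supervised training concrete, here is a minimal sketch of a small deep network. It assumes the open-source PyTorch library, and the layer sizes and synthetic training data are illustrative placeholders rather than anything specific to HPE's offerings.

# Minimal sketch of a "deep" neural network, assuming PyTorch is installed.
# Layer sizes, data, and the training loop are illustrative placeholders.
import torch
import torch.nn as nn

# "Deep" refers to stacking several layers between input and output.
model = nn.Sequential(
    nn.Linear(16, 64),   # input layer: 16 features in, 64 hidden units out
    nn.ReLU(),
    nn.Linear(64, 64),   # hidden layer
    nn.ReLU(),
    nn.Linear(64, 1),    # output layer: a single prediction
)

# Supervised training: the network learns from labeled examples.
inputs = torch.randn(128, 16)    # 128 synthetic samples, 16 features each
targets = torch.randn(128, 1)    # matching synthetic labels
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)   # how far predictions are from labels
    loss.backward()                          # compute gradients through every layer
    optimizer.step()                         # adjust the weights; more data improves the fit

In practice, production networks have many more layers and are trained on far larger labeled datasets, which is why deep learning benefits from the computational power and storage capacity discussed below.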

Why deep learning?

Neural nets have been around since the 1950s, but only in recent years have both computational power and data storage capabilities advanced to the point where deep learning can be used to create exciting new technologies.

While most enterprises have yet to incorporate deep learning into their business processes or products, this type of machine learning is behind "smart" technology ranging from voice- and image-recognition software to self-driving cars. Advances in deep learning and robotics may soon lead to smart medical imaging technology that can reliably make diagnoses, self-piloting drones, and self-maintaining machinery and infrastructure of all kinds.

HPE Pointnext

Leverage our newly enhanced HPE Pointnext advisory, professional, and operational support services for deep learning, including a new HPE GreenLake Flex Capacity offering.

Flexible consumption

Consume your deep learning infrastructure using a flexible, on-demand consumption model. Get scalable capacity as needed, paying only for what you use, including servers, storage, networks, software, and services.

Let’s talk

Speak with an HPE expert about how you can get started with deep learning solutions.

Resources

Best Practices Guide: Managing data at petabyte scale with object storage

Best Practices Guide | PDF | 5.98 MB

Learn how to manage object storage solutions with breakthrough capabilities that are simple, cost-effective, scalable, and rapidly deployable.

Best Practices Guide: Speed Innovation with ANSYS and high-performance computing

Best Practices Guide | PDF | 4.57 MB

For engineering teams that demand CAE simulation turnaround in hours, not days or weeks, high-performance computing and ANSYS software can accelerate breakthrough product design. Learn how to reduce design cycle time by 40 percent* when supporting multiple concurrent, complex simulations for prototyping, and explore how HPE Apollo 2000 systems offer higher density and scalability than competing systems, with independently serviceable networking modules that boost engineer productivity and competitive advantage. Register for this high-performance computing best practices guide for CAE to see how data center solutions and ANSYS software together can speed computer-aided simulation and time-to-market while lowering TCO.

White Paper: Digital transformation trends in IT services

White Paper

Video: All-in on AI

Video | 3:40

Blog Post: Deep Learning Demystified for Faster Intelligence Across All Organizations

Blog Post

HPE and NVIDIA are empowering customers with new AI solutions that are simple to deploy and manage, highly efficient, and flexible enough to handle the challenges of tomorrow.