Course data sheet
HPE Private Cloud AI Bootcamp
H54GRS
Overview
This course builds on the Basic AI Applications and Workloads Training (H54FSS) course, advancing into HPE Private Cloud AI. Covering enterprise-scale AI design, deployment, and optimization, participants explore sophisticated workload architectures, multi-modal AI, advanced fine-tuning strategies, retrieval-augmented generation (RAG) at scale, and orchestration of multi-agent systems. The course emphasizes automation, MLOps, performance tuning, and NVIDIA NeMo, preparing learners to manage and future-proof AI in enterprise environments.
Through a blend of instructor-led modules and hands-on labs, you gain practical skills to design, deploy, and govern advanced AI workloads. This training concludes with strategic guidance for innovation and building AI adoption roadmaps in large organizations.
This course includes an initial discussion to understand audience background, setup prerequisites, and tailor the examples and exercises in the course according to audience level.
Before attending this course, you should have the following:
- Intermediate Python programming knowledge:
- Familiarity with Python syntax, functions, and loops
- Classes, list comprehension, and JSON parsing
- Numpy, Pandas, and data visualization
- Intermediate machine learning experience
- Exposure to machine learning and/or deep learning frameworks
- Exposure to high-level libraries such as Keras, with either a TensorFlow or PyTorch backend
- Containers
- A basic understanding of container technologies (Docker and Kubernetes) is optional but helpful
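As a rough self-check for the Python prerequisite, attendees should be comfortable reading a snippet like the following (an illustrative example, not course material), which combines JSON parsing and a list comprehension:

```python
import json

# Parse a JSON string and summarize it with a list comprehension --
# roughly the Python fluency level this course assumes.
raw = '{"models": [{"name": "bert", "params": 110}, {"name": "gpt2", "params": 124}]}'
data = json.loads(raw)

# Collect the model names from the parsed structure.
names = [m["name"] for m in data["models"]]
print(names)  # ['bert', 'gpt2']
```

If this reads naturally, the Python prerequisite is likely met.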
After completing this course, you should be able to:
- Design and implement advanced AI workload architectures, including distributed and hybrid models
- Build and manage large-scale data pipelines with feature stores and multimodal data integration
- Develop and deploy multi-modal AI applications combining text, image, and structured data
- Apply advanced fine-tuning and transfer learning methods such as LoRA, QLoRA, and adapters
- Design and optimize enterprise-scale Retrieval Augmented Generation (RAG) pipelines
- Orchestrate and manage multi-agent AI systems with memory, planning, and collaboration
- Understand hybrid cloud orchestration and AIOps capabilities with HPE OpsRamp
- Describe hybrid cloud automation and MLOps with HPE Morpheus Enterprise
- Automate end-to-end AI lifecycle management using CI/CD, registries, and drift detection
- Build agentic AI systems with NVIDIA NeMo for performance, reliability, and cost efficiency
Module 1: Advanced AI Workload Architectures
Module 2: Advanced Data Engineering for AI
Module 3: Multi-Modal AI Applications
Module 4: Fine-Tuning and Transfer Learning
Module 5: Retrieval-Augmented Generation at Scale
Module 6: Orchestration and AIOps with HPE OpsRamp
Module 7: Advanced Automation and MLOps
Module 8: NVIDIA NIM Foundations and Enterprise Model Deployment
Module 9: Building Enterprise RAG Pipelines with NVIDIA NIM
Module 10: NVIDIA NeMo for Fine-Tuning and Model Customization
Module 11: NeMo Guardrails for Enterprise AI Governance
5 reasons to choose HPE as your training partner
- Learn HPE and in-demand IT industry technologies from expert instructors.
- Build career-advancing power skills.
- Enjoy personalized learning journeys aligned to your company’s needs.
- Sharpen your skills with access to real environments in virtual labs.
Explore our simplified purchase options, including HPE Education Services – Learning Credits.
Lab 1: Designing Advanced AI Architectures
Lab 2: Building a Feature Store
Lab 3: Multi-Modal Data Ingestion
Lab 4: Building a Multi-Modal Model
Lab 5: Fine-Tuning with LoRA
Lab 6: Prompt-Tuning vs Full Fine-Tuning
Lab 7: Implementing Enterprise RAG
Lab 8: Query Expansion in RAG
Lab 9: Multi-Agent Orchestration
Lab 10: Agent Memory and Planning
Lab 11: CI/CD for AI Pipelines
Lab 12: Fine-Tuning Models Using NVIDIA NeMo
Lab 13: Building a GPU-Accelerated RAG Pipeline with NeMo
Lab 14: Implementing Guardrails and Governance with NeMo Guardrails
Lab 15: Multi-Agent AI Workflows Using NeMo Tools
© Copyright 2026 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
All third-party marks are property of their respective owners.
a50014620enw, H54GRS A.00, January 2026