
Artificial Intelligence Compliance Professional (AICP) Certification Training

H46HFS


    Course ID

    H46HFS

    Duration

    2 days

    Format

    ILT/VILT

    Overview

    This comprehensive course prepares you to navigate the complex landscape of AI regulation and compliance under the EU Artificial Intelligence Act. You gain deep expertise in understanding AI risk classifications, implementing governance frameworks, ensuring ethical AI development, and maintaining regulatory compliance across public and private sectors. You learn through practical experience with hands-on assignments that demonstrate real-world application of AI compliance principles.


    The course covers six critical areas:

    • Understanding the context of the EU AI Act
    • Exploring the EU AI Act in depth
    • Implementing trustworthy AI principles
    • Applying ethical AI frameworks
    • Navigating AI compliance in practical scenarios across various industries
    • Leveraging international standards and frameworks to support compliance efforts

    The blended learning approach provides exclusive access to six short pre-learning videos (about one hour in total) that cover the theoretical foundation. This frees nearly all classroom time for risk classification workshops, compliance framework building, and real-world case studies.


    Audience

    This course is ideal for AI compliance officers, legal professionals, data protection officers, risk management professionals, IT governance specialists, AI developers and engineers, and business leaders responsible for AI implementation and oversight.

    Prerequisites

    There are no formal prerequisites for this course, but we recommend you have the following before attending:

    • An understanding of basic AI concepts and terminology
    • Knowledge of data protection principles (GDPR familiarity helpful)
    • Experience in compliance, risk management, or legal roles (recommended)
    • Familiarity with organizational governance structures

    Objectives

    After completing this course, you should be able to:

    • Understand the purpose, scope, and key provisions of the EU AI Act
    • Classify AI systems according to risk levels (unacceptable, high-risk, limited risk, minimal risk)
    • Identify and fulfill obligations for AI providers, deployers, and other stakeholders
    • Implement data governance practices compliant with Article 10 of the AI Act
    • Conduct fundamental rights impact assessments (FRIA) for high-risk AI systems
    • Apply transparency and accountability requirements to AI systems
    • Recognize prohibited AI practices and understand penalty structures
    • Navigate AI compliance across different sectors (healthcare, finance, employment, public administration)
    • Integrate European and international standards (ISO 42001, ISO 23894, NIST AI RMF) into compliance frameworks
    • Establish governance structures and oversight mechanisms for AI systems
    • Implement ethical AI principles including fairness, bias mitigation, and human oversight
    • Manage incident reporting and response procedures for AI systems

    Certifications and related exams

    This course prepares you for the EXIN Artificial Intelligence Compliance Professional certification exam.



    Course outline

    Module 1: Context of the EU AI Act


    • Purpose and scope of the EU AI Act
    • Primary objectives: smooth functioning of the internal market and promotion of human-centered AI
    • Scope of application: what is covered and what is excluded
    • Key definitions and stakeholder roles (provider, deployer, importer, distributor, etc.)
    • Governance structure at EU, national, and operational levels

    Module 2: EU AI Act in Depth


    • Key provisions and regulatory framework
    • Risk-based classification system
      • Unacceptable risk (prohibited AI)
      • High-risk AI systems (requirements and obligations)
      • Limited risk AI (transparency obligations)
      • Minimal or no risk AI
    • Requirements for high-risk AI systems
      • Product compliance and CE marking
      • Technical documentation requirements
      • Quality and risk management systems
      • Data governance obligations
      • Conformity assessments
    • General-purpose AI (GPAI) models and systemic risks
    • Code of practices for GPAI
    • Intellectual property implications
    • Open-source vs. closed-source considerations
    • Provider obligations: conformity assessments, documentation, regulatory notification
    • Compliance and enforcement mechanisms
    • Penalties for non-compliance
    • Accountability and incident reporting

    Module 3: Trustworthy AI


    • Privacy and data protection under the AI Act
    • Importance of transparency in AI systems
    • Traceability requirements and implementation
    • Data minimization principles
    • GDPR principles applied to AI scenarios
    • Role of transparency in building public trust

    Module 4: Ethical AI


    • Key ethical principles in AI development
      • Transparency
      • Accountability
      • Fairness and equality
      • Privacy and data protection
      • Safety and security
      • Human-centric design
    • Application of ethical guidelines to real-world scenarios
    • Human rights considerations
    • Fundamental rights impact assessments (FRIA)
    • Importance of human oversight in AI systems
    • Human-in-the-loop vs. human-on-the-loop models

    Module 5: EU AI Act in Practice


    • AI in the public sector
      • Public decision-making
      • Crime prosecution and law enforcement
      • Elections and democratic processes
      • Risks and mitigation strategies
    • AI in the private sector
      • Finance and insurance compliance
      • Healthcare applications
      • Employment and education systems
      • Autonomous driving
      • Advertising and tourism
    • Sector-specific requirements and best practices
    • Impact assessments by industry

    Module 6: Frameworks to Support Compliance


    • European standards
      • CEN/CLC/TR 18115: Data governance practices
    • International standards
      • ISO 42001: AI management systems
      • ISO 23894: AI risk management
      • ISO/IEC TR 24368: Ethical considerations
      • NIST AI Risk Management Framework
    • Comparison and alignment of frameworks
    • Integration strategies for compliance programs
    • Practical application of standards

    5 reasons to choose HPE as your training partner

    1. Learn HPE and in-demand IT industry technologies from expert instructors.
    2. Build career-advancing power skills.
    3. Enjoy personalized learning journeys aligned to your company’s needs.
    4. Choose how you learn: in-person, virtually, or online, anytime, anywhere.
    5. Sharpen your skills with access to real environments in virtual labs.

    Explore our simplified purchase options, including HPE Education Learning Credits.

    Lab outline

    Lab 1: AI System Classification and Risk Assessment

    • Analyze sample AI systems and classify them by risk level
    • Identify applicable requirements based on classification
    • Document rationale for risk categorization
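
    The classification workflow practiced in this lab can be sketched as a toy decision helper. This is an illustrative sketch only: the category sets below are hypothetical placeholders chosen for the exercise, not the AI Act's authoritative enumerations (consult Article 5 and Annex III of the Act for the actual lists of prohibited practices and high-risk systems).

    ```python
    # Toy risk-tier classifier for the lab exercise.
    # The category sets are HYPOTHETICAL examples, not the AI Act's
    # legal definitions; real classification requires legal analysis.

    PROHIBITED = {"social scoring", "subliminal manipulation"}
    HIGH_RISK = {"credit scoring", "recruitment screening", "medical diagnosis"}
    LIMITED_RISK = {"customer-service chatbot", "ai-generated media"}

    def classify(use_case: str) -> str:
        """Map a use case to one of the EU AI Act's four risk tiers."""
        use_case = use_case.lower()
        if use_case in PROHIBITED:
            return "unacceptable risk"
        if use_case in HIGH_RISK:
            return "high risk"
        if use_case in LIMITED_RISK:
            return "limited risk"
        return "minimal risk"

    print(classify("recruitment screening"))  # high risk
    print(classify("spam filter"))            # minimal risk
    ```

    In the lab, participants replace these placeholder sets with criteria drawn from the Act itself and document the rationale for each categorization.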

    Lab 2: Fundamental Rights Impact Assessment (FRIA)


    • Conduct a FRIA for a high-risk AI system scenario
    • Identify affected fundamental rights
    • Assess likelihood and severity of impacts
    • Develop mitigation strategies

    Lab 3: Data Governance Implementation


    • Apply Article 10 requirements to a training dataset
    • Identify data quality issues and biases
    • Implement data governance controls
    • Document data lineage and processing activities

    Lab 4: Compliance Documentation


    • Prepare technical documentation for a high-risk AI system
    • Create conformity assessment records
    • Develop transparency documentation for users
    • Draft incident response procedures

    Lab 5: Multi-Stakeholder Compliance Scenario


    • Work through a complex scenario involving multiple stakeholders (provider, deployer, importer)
    • Assign responsibilities according to the EU AI Act
    • Develop compliance workflows
    • Create reporting and oversight mechanisms
