Artificial Intelligence Compliance Professional for Europe (AICP-E)
Everything you need to know about EU AI Act compliance in 4 weeks.
Practical training on EU AI Act fundamentals and key aspects of artificial intelligence. Learn to develop ethical AI systems that protect personal data and ensure compliance.
- June 22 → July 22
- €1050 + VAT
- Theory, practical tasks in groups, tests
- AI Governance Handbook with typical compliance scenarios
- Electronic certificate via Accredible
- Session recordings included
- 10 classes
The course is based on the requirements of the EU AI Act
We developed this course to give professionals high-quality, practical knowledge of how the law applies and to prepare them for its implementation in advance.
We are now on the threshold of the requirements for high-risk systems. Their entry-into-force deadline is August 2, 2026. Providers of high-risk AI systems will be required to implement a quality and risk management system and prepare detailed technical documentation for their products. Before placing a system on the market, they must undergo a conformity assessment procedure, register the system in a dedicated EU database, and affix the CE marking. During operation, they must ensure the transparency of AI functioning, maintain automatic event logs, and guarantee the possibility of human oversight.
You can understand these requirements and learn how to comply with them in our course.
EU AI Act Implementation Timeline (2024-2030)
The EU AI Act officially became law in July 2024, with its requirements being introduced in phases to allow for a smooth transition. In February 2025, the first regulations took effect, focusing on banning AI systems that pose unacceptable risks. By August 2026, the majority of the law’s rules will apply to most AI technologies and providers. Finally, certain specialized systems and large-scale projects have until 2027 or even 2030 to meet all legal standards.
2024
EU AI Act published and enters into force; compliance requirements not yet mandatory.
- July 12, 2024: The AI Act is published in the Official Journal of the EU.
- August 1, 2024: The Act enters into force. While the law is officially "active," the requirements are not yet mandatory (Article 113).
- November 2, 2024: EU Member States must identify and list authorities responsible for protecting fundamental rights (Article 77(2)).
2025
Prohibited AI systems banned; GPAI rules and AI governance framework take effect.
- February 2, 2025: Bans on Prohibited AI systems (e.g., social scoring, specific biometric systems) start to apply. Requirements for AI literacy for staff also become mandatory. (Article 113(a), Recital 179)
- May 2, 2025: Deadline for the Commission to have "Codes of Practice" ready to help developers comply with the rules. (Article 56(9))
- August 2, 2025: Several major chapters begin to apply:
- General-Purpose AI (GPAI) Models: Rules for GPAI providers become mandatory. (Article 113(b))
- Governance & Penalties: New structures for oversight and rules for fines and penalties take effect. (Chapter VII, Articles 99-100)
- Member State Obligations: Countries must designate their national authorities and report on their financial/human resources. (Articles 70(2) and 70(6))
- Existing GPAI: Providers of GPAI models already on the market before this date have until August 2027 to comply. (Article 111(3))
2026
High-risk AI systems deadline; AI regulatory sandboxes operational across EU.
- February 2, 2026: The Commission must provide detailed guidelines on how to implement rules for High-Risk AI systems and post-market monitoring. (Articles 6(5), 72(3))
- August 2, 2026: The bulk of the AI Act becomes applicable. (Article 113)
- High-Risk Systems: Rules apply to operators of high-risk systems placed on the market before this date only if they undergo significant design changes. (Article 111(2))
- Regulatory Sandboxes: Every EU country must have at least one operational AI Regulatory Sandbox to help companies test AI safely. (Article 57(1))
2027
Final GPAI compliance deadline; high-risk AI classification rules apply.
- August 2, 2027: The specific high-risk classification rules under Article 6(1) (systems that are products or components of products subject to EU safety laws) become applicable. (Article 113)
- August 2, 2027: Deadline for all existing GPAI models (those available before August 2025) to be fully compliant with the Act. (Article 111(3))
2028
AI Act evaluation: AI Office, voluntary codes, and high-risk categories reviewed.
- August 2, 2028: The Commission performs several major reviews:
- Evaluating the performance of the AI Office. (Article 112(5))
- Assessing the impact of voluntary codes of conduct. (Article 112(7))
- Reviewing the need for changes to High-Risk categories (Annex III) and transparency rules (Article 50). (Article 112(2))
- Reporting on the energy efficiency of GPAI models. (Article 112(6))
- December 1, 2028: The Commission must report on its delegated powers to ensure the law remains up to date. (Article 97(2))
2029
AI Act implementation report; Commission’s delegated powers expire unless extended.
- August 1, 2029: The Commission’s specific powers to adopt new rules (delegated acts) will expire unless the EU Parliament or Council decides to extend them. (Article 97(2))
- August 2, 2029: The Commission submits a major evaluation and review report of the entire AI Act (this will happen every four years). (Article 112(3), Recital 174)
2030
Public authority AI compliance; large-scale IT systems final deadline (Dec 31).
- August 2, 2030: Public authorities using high-risk AI systems must be fully compliant with all rules and obligations. (Article 111(2))
- December 31, 2030: This is the final deadline for large-scale IT systems (like those used for border control or justice, listed in Annex X) to meet the requirements of the Act. (Article 111(1))
Who is this AI training for?
Compliance specialists who want to stay informed
Trainers with practical experience share their knowledge on the latest regulatory changes and provide real-world implementation cases.
Experts who need to demonstrate their expertise
After successful completion, all participants receive an official electronic certificate via Accredible. This allows you to showcase your commitment to continuous skill development on LinkedIn, your CV, or during professional conversations with clients.
Specialists who want a competitive edge in the job market
According to a McKinsey report, 92 percent of surveyed executives expect to increase AI spending over the next three years. Employers will likely expect employees to adapt to these changes, including safety and compliance requirements. Gaining expertise in AI governance now can boost your job market value for years to come.
Get a free demo lesson
We’ve created a demo lesson for those who want to check whether the course covers their interests. The lesson includes:
- Theory on General Purpose AI, AI risks, regulatory approaches in different jurisdictions, and liability for violations in AI systems.
- An excerpt from an online lesson about AI risks.
- Real-world AI failure cases.
- A detailed risk taxonomy table from the Massachusetts Institute of Technology (MIT).
Why people choose our training
Hands-on learning
Participants engage in practical exercises and tests that reflect real-world scenarios. They also work with the trainers on a practical case that reflects current challenges in AI regulation.
The course content was well-structured, offering practical insights into how businesses can navigate compliance challenges while integrating AI technologies. Additionally, the interactive discussions and case studies helped deepen my understanding of real-world applications and the evolving legal landscape in AI.
Legal Counsel
Certified trainers
The course is led by industry experts with internationally acknowledged certifications (AIGP, FIP, CIPP/E, CIPP/US, CIPM) and solid professional experience in AI and data protection.
I’m very happy with Petruta Pirvan as a trainer. She has extensive knowledge of the subject and explains things clearly and concisely, even for beginners.
Data Privacy Specialist
Comprehensive coverage
From key concepts to enforcement strategies, this training gives you everything you need to become proficient in AI compliance and develop a deep understanding of the EU AI Act.
The course covers beginner to intermediate-level knowledge on AI compliance in an engaging and interactive way. I particularly valued the interactions and the opportunity to hear multiple perspectives from co-participants, as well as the case study exercises.
Data Protection Officer
Structured training materials
We transform complex concepts into clear visuals. Our custom Miro boards and 20+ useful materials, including guidelines, diagrams, checklists, and regulatory documents, remain accessible indefinitely after the course.
The topic that stood out the most to me was AI risk assessments. The course offered structured and prescriptive guidance on risk assessment, impact assessments, and conformity assessments workflows, along with their applicability.
Sr Cloud Architect
Connecting with like-minded people and networking
Collaboration with people who share your interests motivates and helps achieve better results.
After the course, you’ll keep access to a chat with participants and trainers, where you can ask questions at any time. Some people come back to clarify the interpretation of specific regulations, find an employee, or ask for career advice.
I think I was generally very happy with the structure of the training and the approach to the EU AI Act, including the cross-references to national-level contexts. Some of the students also brought valuable insights from their own national jurisdictions.
Data Privacy Specialist
How the training course works
Onboarding
A week before the course starts, you’ll join a chat with other participants and trainers. Participants often return to this chat to ask questions that arise in their practice and hear expert opinions. On day one, we spend 30 minutes on organizational questions and platform introduction before diving into the first module.
Pre-Reading Materials
Before each session, you’ll receive curated materials to review. These aren’t lengthy documents — they’re streamlined summaries that prepare you for the upcoming class. This approach allows sessions to build on foundational knowledge rather than starting from scratch, making class time more productive and interactive.
Theory Sessions
Each 2-hour class begins with a 10-minute Q&A on pre-reading materials, followed by 50–60 minutes of theory. Trainers present core concepts, legal frameworks, and compliance strategies. The theory builds directly on what you’ve read, ensuring deeper understanding and leaving more time for practical application during the session.
Hands-On Practice
Every module includes 25–35 minutes of active practice during the session. You’ll work on case analyses, group discussions, mini-tasks, or workshops. You’ll apply what you’ve learned to scenarios that reflect actual AI compliance challenges you might face in your work.
Homework
Practical assignments are given only after Wednesday sessions, giving you the Wednesday-to-Monday gap to complete them. This spacing ensures you have enough time to work through tasks without overwhelming your schedule, especially if you’re balancing the course with full-time work or other commitments.
Final Assessment
The course concludes with a comprehensive test covering all modules. The assessment is designed to verify your understanding of AI Act requirements and compliance strategies. Upon successful completion, you’ll receive an official electronic certificate via Accredible that you can share on LinkedIn and your CV.
Get a structured reference guide for practical readiness
All AICP-E training participants receive a free AI Handbook. It’s not a theoretical textbook—it’s a practical tool for decision-making in real-world AI Act applications. The Handbook answers “What do I do when…?” rather than “What does Article X say?”
Course Details
- Live online
- 4 weeks / 10 classes
- Monday and Wednesday, 18:00–20:00 CET (if you can’t attend a class, we’ll provide the recording)
- Group work with practical tasks
- Self-assessment tests
- In-depth work on AI use case with trainers
- AI Governance Handbook with typical compliance scenarios
All of this is included in a single price — €1050. We don’t sell packages where you have to pay extra for the certificate or expert support.
Program
Module 1 — Foundations of AI
- Common terms, concepts and definitions: AI, AI system, AI model, General Purpose AI model, Artificial General Intelligence, Weak AI (Symbolic AI), Deep Learning, Machine Learning, Neural Network, Large Language Model;
- The problem of defining an AI system in the EU AI Act;
- The downsides of the EU AI Act’s AI system definition.
Module 2 — Regulating AI
- The problems of regulating AI;
- What is Regulated;
- The Problem of Deregulation;
- Regulations: Main Global Approaches;
- Risks and Risks Taxonomy;
- Practical Use Cases.
Module 3 — Introduction to the European Union AI Act
- A Europe Fit for the Digital Age;
- Purposes & Selected Definitions;
- Territorial & Material Scope;
- Structure;
- Timelines;
- Players (Operators): Roles and Subjects — Provider, Importer, Distributor, Deployer.
Module 4 — Scope of Application
- Covered Technologies;
- Self-Assessment;
- Regulatory and Ethical Aspect;
- AI System Impact Assessment;
- Fundamental Rights and Algorithms Impact Assessment;
- ISO/IEC 23894:2023;
- ISO 42001:2023.
Module 5 — Legal Regimes of the AIA
- Prohibited AI systems;
- Transparency Risk AI systems and obligations of providers and deployers;
- Minimal or No Risk AI Systems.
Module 6 — High Risk AI Systems
- Classification of AI systems as high-risk;
- Assessment List for Trustworthy AI (ALTAI);
- Conformity Assessment;
- Fundamental Rights Assessments.
Module 7 — General Purpose AI Models
- Concept of General Purpose AI Model;
- Authorised representatives of providers of general-purpose AI models;
- Defining Systemic Risk;
- General Purpose AI Models without Systemic Risk;
- General Purpose AI Model with Systemic Risk;
- Mutual assistance, market surveillance and control of general-purpose AI systems.
Module 8 — Measures for Supporting Innovation
- AI Regulatory sandboxes;
- Measures for providers and deployers, in particular SMEs, including start-ups;
- Derogations for specific operators.
Module 9 — Conformity Assessment Bodies, Governance, Liability and Enforcement
- Conformity Assessment Bodies for High-Risk AI systems, Conformity Assessment and Certificates;
- Governance on the EU Level: AI Office, European AI Board, Advisory Forum, Scientific Panel of Independent Experts, Testing Support Structures;
- Domestic Governance: Market Surveillance Authorities, Notifying Authorities, Notification Bodies; other authorities (Authorities Protecting Fundamental Rights, Data Protection Authorities, Courts);
- Market surveillance and control of AI systems in the Union market;
- Administrative Liability under the AI Act: categories of fines and enforcement mechanism;
- Remedies and right enforcement procedure.
Module 10 — Data Governance and Personal Data in the AI&ML Lifecycle
- When Data Protection legislation applies to AI&ML Lifecycle;
- Data Protection Principles and AI Challenges;
- Purpose Limitation and Personal Data Reuse: Compatibility Assessment, data reuse by processor;
- Legal Grounds for Personal Data Processing: Legitimate Interest Problem and Pitfalls of Consent;
- AI and Transparency Principle;
- AI and Data Processing Impact Assessment;
- AI and Automated Decision Making;
- AI and Privacy by Design and by Default;
- AI and Personal Data of Special Categories.
Schedule
- June 22 → July 22
Does the course fall during your vacation or a busy month at work?
We’ve designed the program with a flexible schedule in mind. All online lectures are recorded. Tests are available at any time. Missed a Q&A session with the trainer? We’ll record it, and the trainer will answer your questions in the chat or during the next session. Along with video recordings of the sessions, you’ll receive a text recap of the meeting for easy review.
By the end of this course, you will be able to:
Assess the applicability of the AI Act to your specific use cases and operational framework.
Evaluate the risk level of your AI system and develop a comprehensive risk analysis.
Identify your role as an AI operator and understand the corresponding legal obligations.
Refine your DPIA and Fundamental Rights Impact Assessment to ensure compliance.
Become a certified AI compliance professional
Our course provides an official certificate validated by our company and signed by expert trainers. This certificate is awarded only after successful course completion.
The certificate is easily shared on LinkedIn and CVs, as it’s created in Accredible — an online platform for credential management.
Need expert help fulfilling AI systems compliance requirements?
Get a free AI compliance gap assessment for one AI system under EU AI Act requirements from our experts.
During a 45-minute interview, we’ll assess the system and create a complete compliance report. The report will determine whether the Act applies to your service, classify the risk level, and outline a plan to close any gaps.
Learn from real AI Governance professionals
The lead trainers of our courses are the same specialists who developed the program. Each trainer specializes in their own topic. You receive a coherent approach to applying requirements—not a collection of disconnected opinions.
Turn AI compliance into a team skill
We can bring practical privacy and AI compliance training to your whole organization. From 45‑minute e-learning to live expert sessions and tabletop simulations, we tailor the program to your business context so your team can apply the rules in real work, not just know them.
What our clients say
Become a high-demand specialist in AI Act compliance
Getting training should be simple, which is why we give you two ways to partner with us:
Get started in seconds
The fastest way to join the AICP-E program is through our secure online checkout. Simply enter your billing details and pay via Stripe. We do not store your card details, and individual clients retain the statutory 14-day right of withdrawal from the date of payment.
Request a Consultation
If you have specific questions about the curriculum or need a formal proposal for your organization, you can apply by request. Our team will reach out to discuss your professional goals and provide a customized offer. This path is ideal for corporate teams or those requiring manual administrative procedures. Once the details are finalized, we will guide you through the registration process.
Become a high-demand specialist in AI Act compliance
Fill in the form and get a free consultation.
- Implementation of 7+ legal frameworks.
- Individual and corporate trainings on GDPR, EU AI Act and international standards.
- Development of personal data protection and responsible AI systems within organizations.
- Custom services upon request.
From Payment to Onboarding
Once your payment is successful, your journey to becoming a certified AI compliance professional begins. After payment you will receive an immediate email containing your registration confirmation and electronic invoice. One week before the course begins, you will be invited to join a dedicated chat with trainers and fellow participants to start networking.
You can learn more in our Terms of Service.
Learn more about AI compliance
Frequently Asked Questions
What is the Artificial Intelligence Act (EU AI Act)?
The EU AI Act is a regulation introduced by the European Union to ensure the safe and ethical use of Artificial Intelligence (AI) systems. It categorizes AI systems based on their risk levels and provides a comprehensive framework for the regulation of high-risk AI systems to protect fundamental rights, privacy, and ensure transparency. This legislation applies to all AI systems operating within the EU, regardless of their origin. Learn more about the EU AI Act and its implications here.
What is an AI risk management framework, and how is the risk-based approach covered in the training?
Yes, our training covers the AI risk management framework in detail. The framework outlines how to assess, monitor, and mitigate the risks posed by AI systems, focusing on issues such as data governance, transparency, and human oversight. It ensures that AI systems are developed and deployed in a manner that mitigates any risks to personal data protection and fundamental rights. Learn more about AI risk management here.
How does the AICP-E training differ from AI Literacy programs?
AI literacy is a basic program aimed at helping entire teams understand how to use AI safely. It provides foundational knowledge on AI technologies, risks, and their ethical use. On the other hand, our Artificial Intelligence Compliance Professional for Europe (AICP-E) course is designed for professionals who want to directly engage with AI governance, compliance, and work alongside technical teams. This program equips participants with the skills necessary to develop and manage AI systems that comply with the EU AI Act, with a specific focus on risk management, documentation, and ethical considerations.
If you need to provide your team with necessary knowledge on conscious AI usage, learn about our corporate training programs here.
Do participants receive an EU AI Act compliance certificate after completing the course?
Yes! Upon successful completion of the Artificial Intelligence Compliance Professional for Europe (AICP-E) course, participants will receive a certificate that validates their understanding and readiness for AI compliance within the EU regulatory framework. This certificate is recognized in the industry and can be shared on platforms like LinkedIn, enhancing your professional credentials. The certificate is issued via Accredible, an online platform that facilitates easy sharing of credentials.
Does the program cover responsible AI?
Absolutely. The AICP-E program focuses on responsible AI, emphasizing ethical AI development, data protection, transparency, and human oversight. You’ll learn how to create AI systems that align with AI governance frameworks and meet the legal requirements outlined in the EU AI Act. Responsible AI is at the heart of this training, ensuring that the systems you work on are not only compliant but also ethically sound.
Can I access session recordings if I miss a class?
Yes, session recordings will be available to all participants. If you miss a live session, you can catch up with the recordings at your convenience. This ensures you won’t miss any essential content.
Will this course help me apply AI compliance regulations within my organization?
Yes, this course is designed to provide practical knowledge that can be directly applied within your organization. You’ll learn how to assess AI systems for compliance, implement risk management strategies, and ensure that all AI models used within your organization adhere to the EU AI Act’s regulations.
