Privacy & Artificial Intelligence: EU AI Act Overview
- AI, Artificial Intelligence, Personal data, Privacy
- 18/12/2024
This article analyses the EU Artificial Intelligence Act (EU AI Act), focusing on its implications for AI/ML engineers.
Understanding the Scope of the EU AI Act
The EU Artificial Intelligence Act’s scope extends beyond the geographical boundaries of the EU. It applies to AI systems placed on the market or put into service in the EU, and to systems whose output is used in the EU, regardless of where the provider or deployer is established. This broad scope aims to prevent loopholes and ensure that AI systems affecting people in the EU are subject to the regulation, even when operated from outside the EU.
Unpacking the Risk-Based Approach
The AI Act utilises a risk-based approach to regulate AI systems, categorising them according to the level of risk they pose. This approach ensures that the regulatory burden is proportionate to the potential harm AI systems might cause.
1. Prohibited Practices: Addressing Unacceptable Risks
The AI Act outright bans certain AI practices deemed to pose an unacceptable risk to fundamental rights and values.
“If we are advising businesses now, it is clear that if the development cycle of a system has already begun and it turns out to fall within the list of prohibited [practices] in the European Union, then we must immediately exclude a whole range of jurisdictions from the sales market.”
Alexander Tyulkanov, LL.M., CIPP/E, FIAAIS.
Article 5 of the AI Act outlines these prohibited practices, which include:
📎 Manipulative AI Systems: Systems that exploit human vulnerabilities using subliminal techniques or intentionally deceptive practices to significantly distort behaviour, leading to harm.
📎 Social Scoring: Systems that assign individuals a score based on their social behaviour or predicted personality traits, potentially leading to discrimination and social exclusion.
📎 Exploitation of Vulnerable Groups: AI systems that specifically target individuals based on their age, physical or mental disability, or other vulnerabilities to materially distort their behaviour in a way that causes them or others harm.
📎 Real-time Remote Biometric Identification Systems in Publicly Accessible Spaces for Law Enforcement Purposes: With limited exceptions, the use of these systems for mass surveillance is prohibited due to concerns over privacy and potential for misuse.
2. High-Risk AI Systems: Robust Requirements for Sensitive Applications
AI systems classified as “high-risk” are those used in sectors with significant potential to impact fundamental rights or safety. These sectors include:
📎 Critical infrastructure: Energy, transport, water, etc.
📎 Educational and vocational training: Access to and enjoyment of education and training.
📎 Employment, worker management, and access to self-employment: Recruitment, promotion, task allocation, performance evaluation, etc.
📎 Essential private and public services: Access to and enjoyment of essential services like banking, credit scoring, social security, healthcare, etc.
📎 Law enforcement: Risk assessment of individuals, crime prediction, evidence analysis, etc.
📎 Migration, asylum, and border control: Verification of authenticity of travel documents, risk assessments for migration or asylum purposes, etc.
📎 Administration of justice and democratic processes: Assisting judges in legal fact-finding and interpretation of the law, influencing electoral campaigns, etc.
Developers and deployers of high-risk AI systems face stringent requirements under the AI Act. These include:
📎 Risk Management System: Establishing a comprehensive system to identify, assess, and mitigate risks throughout the AI system’s lifecycle.
📎 Quality Management System: Implementing a robust quality management system to ensure the AI system meets high standards of accuracy, reliability, and robustness.
📎 Data Governance: Ensuring data quality and implementing appropriate data governance practices, including data minimisation and security measures.
📎 Technical Documentation: Preparing detailed technical documentation outlining the AI system’s design, functionality, and intended use.
📎 Logging and Monitoring: Implementing logging mechanisms to record the AI system’s operations and monitoring its performance to detect anomalies and potential biases (a minimal logging sketch follows this list).
📎 Transparency and Explainability: Providing concise, intelligible, and accessible information about the AI system’s functionality and decision-making processes.
📎 Human Oversight: Ensuring meaningful human involvement in critical decision-making processes involving the AI system.
📎 Conformity Assessment: Undergoing conformity assessment procedures, which may include internal checks, third-party audits, or certification, depending on the specific AI system.
📎 Registration: Registering the high-risk AI system in the EU database to enhance transparency and oversight.
📎 Post-Market Monitoring: Continuously monitoring the AI system’s performance and impact after deployment to identify and address any emerging risks or issues.
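To make the logging obligation concrete, here is a minimal sketch of how a deployer might record each automated decision of a high-risk system for later audit. The model name, log fields, and threshold logic are all hypothetical; the Act requires automatic event logging appropriate to the system’s intended purpose, not this particular schema.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Structured audit log; in production this would feed a retained,
# tamper-evident store rather than stdout.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

MODEL_VERSION = "credit-scoring-v1.4.2"  # hypothetical model identifier

def log_decision(features: dict, score: float, threshold: float) -> None:
    """Record one automated decision with enough context to reconstruct it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # Hash the input instead of storing raw personal data (data minimisation).
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
        "decision": "approve" if score >= threshold else "refer_to_human",
        "threshold": threshold,
    }
    audit_log.info(json.dumps(record))

# Example: one scoring event; borderline cases route to a human reviewer.
log_decision({"income": 42_000, "tenure_months": 18}, score=0.73, threshold=0.6)
```

Routing low-confidence cases to a human reviewer, as the `decision` field hints at, is one way logging and human oversight can reinforce each other.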
3. Limited and Minimal Risk AI Systems
AI systems falling under the “limited risk” category are subject to transparency obligations, primarily focused on ensuring users are aware they are interacting with an AI system. For example, chatbots must clearly identify themselves as machines to users.
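As a trivial sketch of that transparency obligation, a chat interface might prepend an explicit disclosure to every new session. The function below is purely illustrative, not taken from any specific framework.

```python
AI_DISCLOSURE = "You are chatting with an automated assistant, not a human. "

def start_session(first_reply: str) -> str:
    """Prepend the disclosure so the user knows they are talking to a machine."""
    return AI_DISCLOSURE + first_reply

print(start_session("How can I help you today?"))
```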
The majority of AI systems are expected to fall under the “minimal risk” category and are not subject to specific regulations under the AI Act.
Clarifying Roles and Responsibilities
The AI Act distinguishes between different roles and responsibilities within the AI lifecycle, assigning specific obligations to each:
📎 Provider: The entity that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark.
📎 Deployer: The entity that uses an AI system under its authority in the course of a professional activity.
The specific obligations of each role vary depending on the risk category of the AI system.
“The first thing we must do, in order to understand [the Act], is to distinguish between models and systems; each of them then has its own classification and legal regime.”
Alexander Tyulkanov, LL.M., CIPP/E, FIAAIS.
Navigating the Intersection of AI and Data Protection
The AI Act operates alongside the General Data Protection Regulation (GDPR): both regulations must be considered when developing and deploying AI systems that process personal data.
Key takeaways regarding data usage and privacy include:
📎 Data Minimisation: Developers should strive to collect and process only the data strictly necessary for the AI system’s intended purpose.
📎 Lawful Basis for Processing: A valid legal basis under the GDPR is required for processing personal data for AI development or deployment.
📎 Transparency and Data Subject Rights: Individuals must be informed about the use of their data for AI purposes, and their data subject rights under the GDPR, such as access, rectification, and erasure, must be respected.
📎 Anonymisation and Pseudonymisation: Techniques to de-identify personal data can help mitigate privacy risks while still allowing for AI development, as sketched below.
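As a small illustration of pseudonymisation, the sketch below replaces a direct identifier with a keyed hash before the record enters a training pipeline. The field names and key handling are assumptions; real deployments need proper key management, and keyed pseudonyms remain personal data under the GDPR because whoever holds the key can re-link them.

```python
import hashlib
import hmac

# Secret key; in practice this lives in a key-management system, because
# anyone holding it can link pseudonyms back to the original identifiers.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Deterministically map an identifier to a pseudonym via a keyed hash (HMAC)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane.doe@example.com", "purchases": 7}
training_row = {
    "user_id": pseudonymise(record["email"]),  # stable pseudonym across records
    "purchases": record["purchases"],          # non-identifying feature kept as-is
}
print(training_row)
```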
Mitigating Bias and Discrimination in AI Systems
The AI Act aims to prevent AI systems from perpetuating or amplifying societal biases. This is a crucial aspect of responsible AI development, and several challenges and potential solutions stand out:
📎 Diverse and Representative Datasets: Training datasets must be carefully curated to ensure they represent the diversity of the population the AI system is intended to serve.
📎 Bias Detection and Mitigation Techniques: Developers should employ techniques to detect and mitigate biases throughout the AI system’s lifecycle, including during data collection, model training, and deployment (a minimal metric example follows this list).
📎 Transparency and Explainability: Understanding how the AI system arrives at its decisions is crucial for identifying and addressing potential biases.
📎 Human Oversight: Human involvement in critical decision-making processes can help ensure fairness and prevent discriminatory outcomes.
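To make the bias-detection point concrete, below is a minimal sketch of one common check, the demographic parity difference: the gap in positive-prediction rates between groups. The data is invented, and a real audit would combine several metrics with domain judgement rather than rely on a single number.

```python
from collections import defaultdict

def demographic_parity_difference(outcomes: list[tuple[str, int]]) -> float:
    """Max gap in positive-prediction rate across groups (0.0 = parity)."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, predicted_positive in outcomes:
        totals[group] += 1
        positives[group] += predicted_positive
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical predictions: (group label, 1 if the model predicted the positive class)
preds = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_difference(preds)
print(f"demographic parity difference: {gap:.2f}")  # 0.33 here
```

A gap near zero suggests the model treats the groups similarly on this one axis; it says nothing about other fairness criteria such as equalised odds.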
The Importance of Standardisation and Certification
The development and adoption of technical standards play a crucial role in facilitating compliance with the AI Act. Standardisation provides clear guidelines and benchmarks for developers, while certification schemes can offer independent verification of AI systems’ compliance with the Act’s requirements.
“The most important stage from a practical point of view now is standardisation…because the possibility of confirming compliance … with the requirements of the regulation is very often tied to compliance with technical requirements that are spelled out in standards.”
Alexander Tyulkanov, LL.M., CIPP/E, FIAAIS.
Conclusion: Embracing Ethical AI Development While Minimising Privacy Risk
AI/ML engineers need to proactively engage with the EU AI Act and integrate ethical considerations into their work. Understanding the Act’s provisions, adopting a risk-based approach, ensuring responsible data usage, mitigating biases, and prioritising human oversight are crucial steps towards building trustworthy and beneficial AI systems.
As the field of AI continues to rapidly evolve, the EU AI Act represents a significant step towards regulating this powerful technology and harnessing its potential while safeguarding fundamental rights and values. By embracing ethical AI development principles, AI/ML engineers can contribute to a future where AI benefits society as a whole.