Navigating the AI Landscape: Understanding AI Risk Management Frameworks

The advent of Artificial Intelligence (AI) systems presents transformative opportunities but also introduces complex challenges, particularly concerning health, safety, and fundamental rights. The European Union has taken a pioneering step with the AI Act, a key regulation that establishes new rules for the development and use of AI systems. Its aim is to change the approach to risk control, safety assurance, and user rights protection in the digital era. This legislation is expected to have a significant global impact on the AI industry, similar to the GDPR’s influence on data protection.

AI Risks under the EU AI Act

The AI Act adopts a risk-based and lifecycle approach, regulating AI systems according to their potential risk and providing for both pre- and post-market monitoring. Its primary purpose, as set out in Article 1, is to ensure a high level of protection of health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems in the Union. The Act forms part of the New Legislative Framework (NLF), as it focuses on public protection objectives and sets out basic safety characteristics for products.

The AI Act defines ‘risk’ as “the combination of the probability of an occurrence of harm and the severity of that harm,” aligning with other NLF legislation and ISO Guide 51. The legislation classifies AI systems into categories based on their risk level, with stricter regulations applied to higher-risk systems:

📎 Prohibited AI Systems: These are considered so dangerous to the rights and freedoms of individuals that their placement on the market is forbidden. Examples include AI systems that use subliminal or manipulative techniques to influence behavior in ways that cause harm, classification of individuals based on biometric or sensitive data, social scoring systems, remote biometric identification for law enforcement (with limited exceptions), and untargeted collection and recognition of photos and videos from the internet or surveillance cameras. Introducing such systems carries the most severe legal consequences.

📎 High-Risk AI Systems: These systems are permitted on the market but are subject to the most stringent regulation. The category includes, for example, AI systems used for medical purposes or in HR processes such as recruitment. Requirements for high-risk AI systems include mandatory registration in a publicly accessible registry, a mandatory fundamental rights impact assessment, the creation of dedicated management and control systems, and a range of technical requirements to ensure system reliability and cybersecurity. These systems must meet mandatory requirements before they can be placed on the market or used, and they remain subject to a strong system of enforcement and post-market monitoring.

📎 Transparency Risk (Moderate/Minimal Risk) AI Systems: For these systems, there are no special detailed norms beyond basic transparency. Providers must ensure that users are informed that they are interacting with an AI system.

The AI Act specifically covers various types of risks, including health and safety risks, as well as risks to fundamental rights. This includes addressing biases likely to affect health, safety, fundamental rights, or lead to discrimination (Article 10.2(f)). It also focuses on adverse impacts on persons under the age of 18 and other vulnerable groups (Article 9.9), automation bias (e.g., users over-relying on AI output; Article 14.4(b)), risks from feedback loops in continuously learning systems (Article 15.4), and cybersecurity risks specific to AI systems (Article 15.5).

Key Challenges in AI Risk Management

Balancing AI utilization with personal data protection presents significant challenges, particularly regarding transparency and risk assessment.

📎 Transparency: Achieving transparency in AI involves disclosing information about AI services, their logic, algorithm structure, and the datasets used for training. However, providing meaningful information to data subjects about the risks of AI use can be difficult. Companies must provide information about algorithm logic in an understandable and easily accessible format, focusing on key aspects like data categories, expected outcomes, and potential consequences, rather than technical details. The level of detail required for explanations may also vary depending on the context and the group of data subjects.

📎 Risk Assessment: Data Protection Impact Assessments (DPIAs) are often mandated for AI data processing. During a DPIA, it is crucial to assess the necessity and proportionality of data processing; the mere availability of AI technology does not justify its use. Companies should evaluate whether AI is essential for their objectives and whether less intrusive alternatives exist. Regulators recommend that DPIAs consider not only data security risks but also the broader consequences for human rights, such as the risk of potential discrimination.

We have already discussed this regulatory tension in the article “AI Bias vs. Data Privacy: Can the EU’s Laws Find Balance?”.

Specific risks to consider in the context of AI and DPIAs include (a simple scoring sketch follows this list):

    • Risks to data subjects from potential misuse of data contained in training datasets, particularly in the event of data leaks.
    • The risk of automated discrimination arising from embedded biases in AI systems, leading to unequal outcomes for certain groups.
    • The risk of creating false content about real people, especially with generative AI systems, which can damage reputation.
    • The risk of automated decision-making leading to unfair or inaccurate results due to bias or lack of transparency.
    • The risk of users losing control over their data, particularly in the context of large-scale data collection through web scraping.
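To make such an assessment concrete, the sketch below scores a few of the risks above using the Act’s (and ISO Guide 51’s) definition of risk as the combination of the probability and the severity of harm. The scales, thresholds, and entries are assumptions chosen purely for illustration, not values prescribed by any regulator.

```python
# Minimal, illustrative DPIA-style risk register.
# Scales, thresholds, and risk entries are assumptions for this example,
# not values prescribed by the AI Act or any supervisory authority.

RISKS = [
    # (risk description, probability 1-5, severity 1-5)
    ("Misuse of personal data in training sets after a leak", 2, 5),
    ("Automated discrimination from embedded bias", 3, 4),
    ("False generative content about real people", 3, 4),
    ("Loss of user control over scraped data", 4, 3),
]

def risk_score(probability: int, severity: int) -> int:
    """Risk as the combination of probability and severity of harm."""
    return probability * severity

def risk_level(score: int) -> str:
    """Map a numeric score to an action band (illustrative thresholds)."""
    if score >= 15:
        return "high: mitigate before processing starts"
    if score >= 8:
        return "medium: mitigate and monitor"
    return "low: document and review periodically"

for description, probability, severity in RISKS:
    score = risk_score(probability, severity)
    print(f"{description}: score={score}, {risk_level(score)}")
```

In a real DPIA, each entry would also record the necessity and proportionality analysis and the mitigation chosen, not just a numeric score.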

If you’re ready to go beyond theory and gain practical skills in risk classification, impact assessments, and AI system governance, we recommend enrolling in the Artificial Intelligence Compliance Professional for Europe course.

This course is designed for privacy professionals, compliance officers, and digital product leaders who want to build or enhance their expertise in AI regulation. Learn how to evaluate risks, implement effective safeguards, and ensure your organization is fully aligned with the EU’s evolving AI framework.

Mitigating AI Risks: Strategies and Systems

To ensure compliance and manage risks effectively, the AI Act mandates a Risk Management System (RMS) and a Quality Management System (QMS) for all high-risk AI systems.

📎 Risk Management System (RMS): It is a comprehensive and continuous process covering all stages of the AI system’s lifecycle. It involves:

    • Identification and analysis of known and reasonably foreseeable risks to health, safety, or fundamental rights.
    • Estimation and evaluation of risks, including based on post-market monitoring data.
    • Adoption of appropriate and targeted risk management measures. These measures ensure: elimination or reduction of risks through adequate design and development (“safety by design”) where technically feasible; implementation of adequate mitigation and control measures for risks that cannot be eliminated; and provision of the necessary information (Article 13) and training to deployers.
    • Testing, which is a key component for identifying appropriate risk management measures and demonstrating compliance. High-risk AI systems must be tested throughout the development process and, in any event, before being placed on the market or put into service, against predefined metrics and probabilistic thresholds appropriate to the system’s intended purpose (a minimal sketch of such a threshold check appears below).

When implementing the RMS, providers must specifically consider the potential adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups.
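As an illustration of testing against predefined metrics and probabilistic thresholds, the sketch below checks a model’s evaluation results against release criteria before it is placed on the market. The metric names and threshold values are assumptions for the example, not figures taken from the Act.

```python
# Illustrative pre-market test gate: compare evaluation results against
# predefined metrics and probabilistic thresholds. Metric names and
# threshold values are assumptions for this example.

THRESHOLDS = {
    "accuracy":            ("min", 0.95),  # overall task performance
    "false_negative_rate": ("max", 0.02),  # safety-relevant misses
    "worst_group_recall":  ("min", 0.90),  # performance for the worst-off subgroup
}

def passes_release_gate(results: dict[str, float]) -> bool:
    """Return True only if every predefined metric meets its threshold."""
    for metric, (direction, limit) in THRESHOLDS.items():
        value = results[metric]
        ok = value >= limit if direction == "min" else value <= limit
        print(f"{metric}: {value:.3f} ({'OK' if ok else 'FAIL'}, {direction} {limit})")
        if not ok:
            return False
    return True

# Example evaluation results from a test run (made-up numbers).
evaluation = {"accuracy": 0.961, "false_negative_rate": 0.015, "worst_group_recall": 0.88}
print("Release approved:", passes_release_gate(evaluation))
```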

📎 Quality Management System (QMS): The QMS must ensure compliance with the AI Act and be documented through written policies, procedures, and instructions. It covers the entire lifecycle of AI systems, including:

    • Pre-market elements: regulatory compliance strategy, design control and verification, examination, testing, and validation of the AI systems, and technical specifications.
    • Post-market elements: quality control, reporting of serious incidents, and a post-market monitoring system.
    • Continuous elements: data management systems and procedures, the RMS itself, communication with authorities, document and record keeping (including logging), resource management (including security of supply), and an accountability framework.

Beyond these mandatory systems, additional technical and organizational measures can significantly reduce risks:

📎 Technical Measures:

    • Using synthetic data to minimize the disclosure of personal information.
    • Applying approaches like differential privacy (see the sketch after this list).
    • Employing federated learning during the development phase to enhance data protection.
    • Configuring systems so that the information needed for AI explanations, personal data flows, and processing durations can be extracted to compile records of processing activities.
    • Logging actions with personal data.
    • Restricting access to datasets based on the specific needs of a role.
    • Limiting access to data by processors/sub-processors and other data recipients.
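As promised above, here is a minimal sketch of the idea behind differential privacy: releasing an aggregate statistic with calibrated Laplace noise instead of the exact value. The epsilon value, the query, and the synthetic records are assumptions for illustration; parameters for a real deployment require a privacy analysis of the entire pipeline.

```python
import math
import random

# Minimal sketch of the Laplace mechanism behind differential privacy:
# release an aggregate count with calibrated noise instead of the exact value.
# Epsilon and the example records are assumptions for illustration only.

def laplace_sample(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records: list[bool], epsilon: float) -> float:
    """Noisy count of matching records; the sensitivity of a count query is 1."""
    true_count = sum(records)
    return true_count + laplace_sample(1.0 / epsilon)

# Example: how many users in a training dataset are under 18 (synthetic flags).
under_18_flags = [random.random() < 0.1 for _ in range(10_000)]
print("Exact count:", sum(under_18_flags))
print("Differentially private count (epsilon=1.0):", round(private_count(under_18_flags, 1.0), 1))
```

Related measures from the list, such as federated learning and synthetic data, operate at the training stage rather than at query time.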

📎 Organizational Measures:

    • Collecting data with informing data subjects in mind, ensuring the ability to notify individuals whose data will be processed by AI services.
    • Training employees on personal data protection to ensure they understand personal data, its protection requirements, and practical implementation.
    • Developing internal policies, such as information security policies and personal data protection policies.
    • Conducting regular audits to identify and rectify biases or errors that could lead to discrimination (a minimal bias-audit sketch follows this list).
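A bias audit can start with something as simple as comparing outcome rates across groups. The sketch below computes selection rates and their ratio for a hypothetical decision log; the records, the group labels, and the 0.8 ratio used as a warning flag are assumptions for illustration, not a legal test of discrimination.

```python
# Illustrative bias audit: compare positive-outcome rates across groups.
# The records, group labels, and the 0.8 ratio used as a warning flag are
# assumptions for this example, not a legal standard.

from collections import defaultdict

# (group, received_positive_outcome), e.g. shortlisted by a hiring tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += int(outcome)

rates = {group: positives[group] / totals[group] for group in totals}
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}")

ratio = min(rates.values()) / max(rates.values())
print(f"Rate ratio: {ratio:.2f}" + (" (review for potential bias)" if ratio < 0.8 else ""))
```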

While standardisation work in AI is active, current international standards are often partial and not fully aligned with the AI Act: they differ in how they define risk and focus more on organizational aspects than on product-oriented, prescriptive requirements. Additional standards are needed to cover essential gaps and to provide sufficiently prescriptive, clear requirements tailored to the risks addressed by the AI Act.


NIST AI Risk Management Framework

Beyond the approach implemented in the EU AI Act, other significant tools exist globally for managing AI-related risks. One such tool is the Artificial Intelligence Risk Management Framework (AI RMF), developed by the U.S. National Institute of Standards and Technology (NIST). AI RMF 1.0 is a voluntary resource designed to help organizations involved in the design, development, deployment, or use of AI systems to effectively manage the numerous risks associated with AI and to promote the trustworthy and responsible development and use of AI systems.

The NIST AI RMF is flexible, non-sector-specific, and use-case agnostic, making it applicable to organizations of all sizes and across all sectors. It aims to create a common language and understanding for managing AI risks, offering taxonomy, terminology, definitions, metrics, and characterizations of AI risk. The AI RMF is unique in that it is specifically designed to address risks that are characteristic of AI systems and not fully covered by traditional software risk management approaches. These unique risks include, for example, issues with data representativeness for training, data becoming stale or outdated, the high complexity and scalability of AI systems (many systems contain billions or even trillions of decision points), difficulties in predicting failure modes for emergent properties of large-scale pre-trained models, and increased privacy risks due to enhanced data aggregation capabilities.

AI systems are also inherently socio-technical, meaning risks can arise from the interplay of technical aspects and societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context of its deployment. Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities.

The AI RMF is divided into two main parts:

📎 Part 1 describes how organizations can frame AI-related risks and outlines the characteristics of trustworthy AI systems. These characteristics include:

    • Valid and Reliable: AI systems should be accurate, robust, and perform consistently for their intended purpose.
    • Safe: AI systems should not endanger human life, health, property, or the environment under defined conditions, and safety considerations should be applied throughout the lifecycle.
    • Secure and Resilient: AI systems should be able to withstand unexpected adverse events or changes, maintain functions, and degrade safely. Security encompasses protocols to avoid, protect against, respond to, or recover from attacks.
    • Accountable and Transparent: Trustworthy AI requires accountability, which presupposes transparency. Transparency means information about an AI system and its outputs is available to those interacting with it, promoting understanding and confidence.
    • Explainable and Interpretable: These characteristics help users understand the functionality and trustworthiness of AI systems, including their outputs, by explaining how and interpreting why a decision was made.
    • Privacy-Enhanced: Design, development, and deployment of AI systems should be guided by privacy values, addressing freedom from intrusion and control over personal information.
    • Fair – with Harmful Bias Managed: Fairness addresses issues like harmful bias and discrimination. Risk management efforts should consider that bias is broader than demographic balance and can be systemic, computational/statistical, or human-cognitive.

📎  Part 2 contains the “Core” of the framework, comprising four functions to organize AI risk management activities at their highest level:

    • GOVERN: This cross-cutting function cultivates and implements a culture of risk management within organizations, outlining processes and structures for managing risks throughout the AI system’s lifecycle. It informs and is infused throughout the other three functions.
    • MAP: This function establishes the context for framing risks related to an AI system, including understanding its intended purpose, potential positive and negative impacts, and relevant laws and norms. It helps anticipate impacts and forms the basis for the MEASURE and MANAGE functions.
    • MEASURE: This function employs quantitative, qualitative, or mixed methods to analyze, assess, benchmark, and monitor AI risks and related impacts. AI systems should be tested both before deployment and regularly during operation (a minimal monitoring sketch follows this list).
    • MANAGE: This function involves allocating risk resources to the mapped and measured risks on a regular basis, as defined by the GOVERN function. It includes plans to respond to, recover from, and communicate about incidents or events.
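As a small illustration of how MEASURE can feed MANAGE, the sketch below tracks one production metric across evaluation windows and flags when it drifts below an agreed baseline. The metric, baseline, and tolerance are assumptions for the example; the AI RMF does not prescribe specific metrics or values.

```python
# Illustrative MEASURE -> MANAGE hand-off: monitor a metric over successive
# evaluation windows and flag degradation. Metric, baseline, and tolerance
# are assumptions for this example; NIST AI RMF prescribes no specific values.

BASELINE_ACCURACY = 0.95   # agreed at deployment time
TOLERANCE = 0.02           # acceptable drop before escalation

weekly_accuracy = [0.955, 0.951, 0.948, 0.930, 0.925]  # made-up monitoring data

for week, accuracy in enumerate(weekly_accuracy, start=1):
    degraded = accuracy < BASELINE_ACCURACY - TOLERANCE
    status = "ESCALATE to risk owner (MANAGE)" if degraded else "within tolerance"
    print(f"week {week}: accuracy={accuracy:.3f} -> {status}")
```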

NIST AI RMF is designed to complement existing risk management practices by integrating them with the unique aspects of AI risks. Organizations can apply these functions based on their needs, resources, and capabilities, with the process being iterative.

Enforcement and Future Outlook

The European AI Office will serve as the main supervisory body, ensuring uniform application of the AI Act across EU member states. It will monitor compliance, develop guidelines, conduct investigations, impose sanctions, coordinate global AI efforts, and offer business consultations.

The AI Act outlines a clear framework of administrative fines for non-compliance (a short calculation sketch follows the list):

    • For placing a prohibited AI system into operation or on the market, fines can reach up to €35 million or 7% of the company’s total worldwide annual turnover, whichever is higher.
    • Violations of other obligations stipulated in the AI Act (not related to prohibited systems) can incur fines of up to €15 million or 3% of the total worldwide annual turnover, whichever is higher.
    • Providing incorrect, incomplete, or misleading information to a supervisory authority regarding an AI system can result in fines of up to €7.5 million or 1% of the total worldwide annual turnover, whichever is higher.
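Because each cap is “whichever is higher” of a fixed amount and a share of worldwide annual turnover, the maximum exposure depends on company size. The sketch below shows the arithmetic; the turnover figure is a made-up example.

```python
# Maximum administrative fine under the AI Act: the higher of a fixed cap
# and a percentage of total worldwide annual turnover. The turnover figure
# below is a made-up example.

FINE_CAPS = {
    "prohibited_practices":   (35_000_000, 0.07),
    "other_obligations":      (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    fixed_cap, turnover_share = FINE_CAPS[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

turnover = 2_000_000_000  # example: EUR 2 billion worldwide annual turnover
for violation in FINE_CAPS:
    print(f"{violation}: up to EUR {max_fine(violation, turnover):,.0f}")
```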

Useful Resources for Practical AI Compliance

If you’re looking to turn regulatory knowledge into actionable strategies, don’t miss our curated materials. Start with the Mini-Guide: 5 Key Aspects of AI Compliance for a concise overview of the core areas every AI compliance strategy should cover. Then, move on to the EU AI Act Compliance Checklist to assess your organization’s readiness step by step. These resources are designed to help privacy professionals, compliance officers, and AI developers align with the EU’s risk-based regulatory framework efficiently and effectively.

It is important to note that the AI Act is actively applied in conjunction with other normative acts, including GDPR, especially concerning the use of personal data for AI training.

In conclusion, the EU AI Act marks a pivotal moment in AI regulation, emphasizing a robust, risk-based approach to foster trustworthy and human-centric AI. Proactive adaptation to these new rules, through comprehensive risk management, stringent quality controls, and diligent data protection practices, will be crucial for businesses to navigate the evolving AI landscape successfully and avoid significant penalties.
