
EU AI Act: Understanding the Risk Management System in Article 9

DATE POSTED: July 17, 2025

The European Union (EU) Artificial Intelligence (AI) Act, the first comprehensive regulation of AI, builds a framework of rules for high-risk AI systems to protect health, safety, and fundamental rights. One element of this framework is Article 9: Risk Management System, a mandatory, proactive approach for providers of high-risk AI. This isn't just bureaucracy; it's a dynamic blueprint for keeping an AI system's risks manageable.

If you're a developer, provider, or stakeholder in AI, grasping Article 9 is crucial. It mandates a continuous, iterative process to identify, assess, and mitigate risks throughout an AI system's lifecycle. Drawing from the Act's provisions (including references to related Articles 72 and 60), let's break it down into its key elements.

Image: Grok

What is the Risk Management System?

The Risk Management System (RMS) under Article 9 is a structured process that providers of high-risk AI systems must establish, implement, document, and maintain. It applies exclusively to high-risk AI (think biometric identification, credit scoring, or critical infrastructure management), excluding prohibited AI practices and low- or minimal-risk systems.

The core idea? Risks aren't one-off concerns. The RMS is a continuous and iterative process that spans the entire lifecycle: from development and deployment to post-market monitoring. It's not static paperwork; it's an active, adaptable process. As set out in the Act, it's a cyclical loop, ensuring risks are managed proactively rather than reactively.

Key Elements of the Risk Management System

Article 9 of the EU AI Act outlines a robust set of components, each building on the last. Let's dissect them step by step, following the Act's paragraphs.

  • Establishment of a Formal System (Article 9(1)): Providers must create a documented RMS with clear policies, procedures, and responsibilities. This isn’t optional — it’s a foundational requirement for compliance. Think of it as your AI’s “safety manual”: it details how risks will be handled from day one. The system must be implemented actively, with regular maintenance to adapt to changes like technological updates or new regulations.
  • A Continuous, Iterative Process (Article 9(2)): The RMS isn't a checkbox exercise; it's ongoing. It runs parallel to the AI's lifecycle and includes four core steps (a short code sketch of how these might be tracked follows below):
  1. Identification and Analysis of Risks: Spot known and foreseeable risks to health, safety, or fundamental rights when the AI is used as intended.
  2. Estimation and Evaluation of Risks: Gauge the likelihood and severity of these risks, including under reasonably foreseeable misuse.
  3. Post-Market Monitoring: As per Article 72, collect real-world data after deployment to uncover emerging risks.
  4. Adoption of Measures: Implement targeted fixes, from redesigns to user warnings.

This iterative nature means reviewing and updating regularly — perhaps quarterly or after incidents — to keep risks in check.
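
To make the loop concrete, here is a minimal Python sketch of how a provider might track these four steps in a simple risk register. All class names, fields, and scores are illustrative assumptions on my part; the Act prescribes the process, not any particular data structure or scoring model.

```python
# Hypothetical sketch of an Article 9(2)-style risk register, assuming a simple
# likelihood x severity scoring model. Names and thresholds are illustrative,
# not prescribed by the Act.
from dataclasses import dataclass, field
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    description: str   # step 1: identified risk to health, safety, or fundamental rights
    likelihood: Level   # step 2: estimated probability, incl. reasonably foreseeable misuse
    severity: Level     # step 2: estimated impact
    mitigations: list[str] = field(default_factory=list)  # step 4: adopted measures

    @property
    def score(self) -> int:
        return int(self.likelihood) * int(self.severity)


@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def incorporate_monitoring(self, observations: list[str]) -> None:
        # step 3: fold post-market findings (Article 72) back into the register
        for obs in observations:
            self.add(Risk(obs, Level.MEDIUM, Level.MEDIUM))

    def top_risks(self, threshold: int = 4) -> list[Risk]:
        # risks whose score meets the (illustrative) review threshold
        return sorted(
            (r for r in self.risks if r.score >= threshold),
            key=lambda r: r.score,
            reverse=True,
        )


if __name__ == "__main__":
    register = RiskRegister()
    register.add(Risk("Biased credit-scoring outcomes", Level.MEDIUM, Level.HIGH,
                      mitigations=["Rebalance training data", "Bias audit per release"]))
    register.incorporate_monitoring(["Elevated error rate for one user group in production"])
    for risk in register.top_risks():
        print(f"{risk.score}: {risk.description}")
```

In practice the register would of course live in the provider's quality management documentation; the point of the sketch is simply that identification, estimation, monitoring, and measures feed the same evolving record.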

  • Scope of Risks and Actionable Focus (Article 9(3)): Not all risks are equal. The RMS targets only those that can be reasonably mitigated or eliminated through design, development, or by providing technical info to users. If a risk is beyond control (e.g., global economic factors), it’s out of scope. This keeps efforts practical and focused on what providers can influence.
  • Designing Effective Measures (Article 9(4)): Risk measures don’t exist in isolation — they must align with other AI Act requirements, like accuracy, robustness, and cybersecurity. For instance, enhancing data quality might reduce bias risks while boosting overall performance.
  • Ensuring Acceptable Residual Risks (Article 9(5)): After mitigation, some “residual” risks may linger — but they must be judged acceptable. Providers achieve this by:
  1. Eliminating or Reducing Risks: Through safe-by-design principles in development.
  2. Mitigation and Controls: For unavoidable risks, add safeguards like fail-safes or monitoring tools.
  3. Information and Training: As per Article 13, provide deployers with clear instructions, considering their technical expertise and the AI’s context.

Special attention goes to deployers’ knowledge levels — novice users might need more guidance than experts.
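
As a rough illustration of the residual-risk idea, the sketch below scores a risk, applies an assumed reduction factor for each layer of mitigation, and checks the result against an assumed acceptability threshold. The numbers are hypothetical; the Act requires an acceptability judgment but does not define numeric thresholds or factors.

```python
# Hypothetical sketch of an Article 9(5)-style residual-risk check. The mitigation
# factors and the acceptability threshold are assumptions for illustration only.

ACCEPTABLE_RESIDUAL_SCORE = 3  # assumed acceptability threshold for this sketch


def residual_score(initial_score: int, mitigation_factors: list[float]) -> float:
    """Apply each mitigation's assumed reduction factor to the initial risk score."""
    score = float(initial_score)
    for factor in mitigation_factors:
        score *= factor  # e.g. 0.5 = mitigation assumed to halve the remaining risk
    return score


def is_acceptable(initial_score: int, mitigation_factors: list[float]) -> bool:
    return residual_score(initial_score, mitigation_factors) <= ACCEPTABLE_RESIDUAL_SCORE


if __name__ == "__main__":
    # 1. reduce by design, 2. add controls, 3. inform and train deployers (Article 13)
    factors = [0.5, 0.7, 0.9]
    print(is_acceptable(initial_score=9, mitigation_factors=factors))  # 9 * 0.315 ≈ 2.8 -> True
```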

  • Testing for Compliance and Performance (Article 9(6–8)): Testing is where compliance gets proven. High-risk AI must undergo rigorous evaluations to:
  1. Identify optimal risk measures.
  2. Ensure consistent performance against Act standards (e.g., accuracy thresholds).

This includes real-world testing (per Article 60), simulating actual scenarios to validate behavior. Timing is key: tests happen throughout development and before the system is placed on the market, using predefined metrics and thresholds tailored to the AI's purpose. Fail to pass? Back to the drawing board.
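
Below is a minimal sketch of what "predefined metrics and thresholds" could look like in code. The metric names, values, and the run_test_suite helper are assumptions for illustration; real metrics and thresholds have to be tailored to the system's intended purpose and documented up front.

```python
# Illustrative sketch of pre-market testing against predefined metrics and
# thresholds in the spirit of Article 9(6-8). All values are made up.
from typing import Callable


def run_test_suite(
    metrics: dict[str, Callable[[], float]],
    thresholds: dict[str, float],
) -> dict[str, bool]:
    """Evaluate each metric and compare it with its predefined threshold."""
    results = {}
    for name, evaluate in metrics.items():
        value = evaluate()
        results[name] = value >= thresholds[name]
        print(f"{name}: {value:.3f} (threshold {thresholds[name]}) -> "
              f"{'pass' if results[name] else 'fail'}")
    return results


if __name__ == "__main__":
    # Placeholder metrics; in practice these would wrap real test runs,
    # including real-world testing under Article 60.
    metrics = {
        "accuracy": lambda: 0.94,
        "robustness_under_noise": lambda: 0.88,
    }
    thresholds = {"accuracy": 0.90, "robustness_under_noise": 0.85}

    if all(run_test_suite(metrics, thresholds).values()):
        print("All predefined thresholds met.")
    else:
        print("Thresholds not met; back to the drawing board.")
```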

  • Protecting Vulnerable Groups (Article 9(9)): AI isn't neutral; it can disproportionately affect certain people. Providers must give specific consideration to whether the system is likely to adversely impact persons under 18, as well as other vulnerable groups such as the elderly or people with disabilities. Tailored measures, such as age-appropriate interfaces or bias checks, are required to safeguard them.
  • Integration with Existing Processes (Article 9(10)): For organizations already under EU risk regs (e.g., banks via financial laws), Article 9 allows blending into current systems. No need to reinvent the wheel — efficiency is encouraged to avoid redundancy.

As hinted in the Act, visualize the RMS as a cyclical process, illustrated in the figure below. It starts with risk identification, flows into evaluation and mitigation, incorporates post-market data, and loops back for refinement. Imagine a wheel: development spins into deployment, monitoring gathers momentum, and updates keep it rolling smoothly. Sources such as Fraunhofer publications emphasize this lifecycle view, highlighting its role in sustainable AI.

Created by the author.

Why This Matters in 2025 and Beyond

As we hit mid-2025, the AI Act is in full swing, with enforcement ramping up. Implementing a solid RMS isn’t just about avoiding fines — it’s about building AI systems that gain trust. For providers, it’s a competitive edge; for society, it’s protection against unintended harms.

What are your thoughts on balancing innovation with risk management? Drop a comment below — I’d love to hear!

EU AI Act: Understanding the Risk Management System in Article 9 was originally published in Coinmonks on Medium.