
Navigating the EU AI Act: Could Minor Customizations Like RAG Turn General-Purpose AI into High-Risk or Prohibited Systems?

DATE POSTED: September 29, 2025

In the rapidly evolving world of artificial intelligence, the European Union’s AI Act stands as a landmark regulation, aiming to balance innovation with the protection of fundamental rights, safety, and ethical standards. Officially known as Regulation (EU) 2024/1689, this regulation categorizes AI systems based on their risk levels, imposing varying degrees of oversight and obligations. But what happens when general-purpose AI models (GPAI) — think large language models like ChatGPT, Grok, or DeepSeek — are lightly customized using techniques such as Retrieval-Augmented Generation (RAG)? Could these tweaks, without deep fine-tuning or substantial modifications, push an AI application into prohibited or high-risk territory, especially if tailored for sensitive uses like harmful manipulation, deception, or employment management?

[Image: Grok]

This article dives into the nuances of the EU AI Act, exploring how minor customizations might alter an AI system’s classification. We’ll break down the Act’s risk categories, examine GPAI’s special treatment, and analyze real-world implications through case studies. Whether you’re an AI developer, business leader, or policy enthusiast, understanding these shifts is crucial as the Act’s provisions roll out over the coming years.

The EU AI Act’s Risk-Based Framework: A Quick Primer

The EU AI Act adopts a tiered approach to regulation, classifying AI systems according to their potential harm to individuals, society, and the environment. This isn’t a one-size-fits-all rulebook; instead, it scales requirements based on risk, ensuring that everyday AI tools face minimal bureaucracy while high-stakes applications undergo rigorous scrutiny.

Here’s a breakdown of the two categories most relevant to this discussion:

  • Unacceptable Risk (Prohibited AI): These are outright banned because they pose severe threats to fundamental rights. Examples include AI systems that deploy subliminal techniques to manipulate behavior subconsciously, exploit vulnerabilities (like age or socioeconomic status) to cause harm, or engage in deceptive practices that distort decision-making. The Act’s Article 5 lists these prohibitions explicitly, covering everything from social scoring to real-time biometric identification in public spaces (with limited exceptions for law enforcement).
  • High-Risk AI: Systems in this bucket could significantly impact safety, health, or rights if they malfunction. They must meet strict standards, including risk assessments, data quality checks, transparency, and human oversight. Annex III of the Act outlines specific use cases, such as AI in education, critical infrastructure, or — relevant to our discussion — employment and workers’ management.

This framework entered into force on August 1, 2024, with phased implementation: the prohibited-AI rules apply from February 2, 2025 (six months later), while high-risk obligations phase in over 24–36 months, depending on whether the system falls under Annex III or the product legislation listed in Annex I.

General-Purpose AI: A Special Category with Unique Rules

GPAI models, which include versatile LLMs designed for a broad range of tasks, aren’t slotted into the standard risk tiers by default. Instead, they’re governed under Chapter V of the Act (Articles 51–56). Providers of these models — companies like OpenAI, xAI, or DeepSeek — must fulfill obligations such as providing technical documentation, ensuring compliance with copyright law for training data, and publishing summaries of that data for transparency.

However, the Act draws a line for “GPAI with systemic risk.” These are powerhouse models trained with massive computing power (over 10²⁵ floating-point operations, or FLOPs), posing broader societal threats like misinformation or economic disruption. They face extra hurdles: model evaluations, adversarial testing, cybersecurity measures, and incident reporting to authorities.
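To make the threshold concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the common rule of thumb of roughly 6 x parameters x training tokens for dense-transformer training compute (an approximation from the scaling-law literature, not anything prescribed by the Act), and the model size and token count are hypothetical.

```python
# Rough screening of a hypothetical training run against the Act's
# 10^25 FLOP presumption for GPAI with systemic risk.
# The ~6 * parameters * tokens rule of thumb approximates total training
# compute for dense transformers; real accounting can differ materially.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6.0 * num_parameters * num_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(num_parameters=70e9, num_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Presumed systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```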

Importantly, a GPAI model on its own doesn’t require the full conformity assessment that high-risk systems do — that changes only when the model is modified or integrated into an application that falls within a regulated category.

What Counts as ‘Substantial Modification’?

The Act defines “substantial modification” in Article 3(23) as a change made after an AI system is placed on the market or put into service that is not foreseen in the provider’s initial conformity assessment and that either affects the system’s compliance with the Act’s requirements or modifies its intended purpose. Fine-tuning a model by retraining its weights on new data might qualify as substantial if it fundamentally alters behavior.

But what about lighter touches? Techniques like RAG, which enhance an LLM by retrieving external data to inform responses without touching the core model weights, are generally seen as non-substantial. RAG essentially augments prompts with retrieved information, keeping the underlying GPAI intact. The Act’s recitals (e.g., Recital 111) suggest that mere integration or deployment adjustments don’t automatically trigger reclassification — unless the end-use pushes the system into a regulated category.
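To make the “weights untouched” point concrete, here is a minimal, dependency-free sketch of a RAG step: documents are retrieved and prepended to the prompt, and the underlying model is then called exactly as before. The toy keyword retriever and the `generate` callable are hypothetical stand-ins for a real vector store and a real GPAI API.

```python
from typing import Callable, List

def retrieve(query: str, corpus: List[str], top_k: int = 3) -> List[str]:
    """Toy keyword-overlap retriever standing in for a vector-store lookup."""
    words = query.lower().split()
    return sorted(corpus, key=lambda doc: -sum(w in doc.lower() for w in words))[:top_k]

def rag_answer(query: str, corpus: List[str], generate: Callable[[str], str]) -> str:
    """Augment the prompt with retrieved context; the GPAI model's weights
    are never modified -- only the input it receives changes."""
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)  # call the unmodified general-purpose model
```

Because only the prompt changes, this kind of integration is unlikely to count as a substantial modification of the model itself; as the case studies below show, though, it can still change how the overall system is classified.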

Here’s the catch: When you customize GPAI for a specific application, the deployer (the entity putting it into service) may become the provider of a new AI system under the Act’s value-chain rules (Article 25). If that application aligns with prohibited or high-risk uses, the whole setup gets reclassified accordingly, regardless of how “minor” the customization seems.

Case Study 1: Customizing for Harmful Manipulation and Deception

Imagine deploying a Grok-like LLM with RAG to create a chatbot for a marketing campaign. You feed it company documents via RAG to generate personalized ads. Sounds innocuous? Now tweak it: The system retrieves user data to exploit psychological vulnerabilities, subtly nudging vulnerable groups (e.g., elderly users) toward harmful purchases through deceptive phrasing.

Under Article 5(1)(a) and (b), this could cross into prohibited territory. Subliminal or manipulative techniques that distort behavior and cause significant harm — like financial loss — are banned. Even without fine-tuning, the RAG-driven customization tailors the GPAI for deception, placing the application in the unacceptable-risk category.

Analysis: The Act looks at the system’s intended purpose and actual use. If the customization enables exploitation (e.g., targeting socioeconomic weaknesses), the practice is prohibited outright, with fines of up to €35 million or 7% of global annual turnover; deployers of high-risk systems can additionally be required to conduct a fundamental rights impact assessment (Article 27). Minor changes like RAG don’t exempt you; they might even amplify the risk by making outputs more contextually manipulative.
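One practical safeguard, sketched below under the assumption that user-profile attributes flow into the RAG context, is to strip vulnerability-related signals before they ever reach the prompt. The attribute names are purely hypothetical.

```python
from typing import Dict

# Hypothetical attribute names; a real profile schema will differ.
VULNERABILITY_ATTRIBUTES = {
    "age_bracket",
    "cognitive_impairment_flag",
    "financial_distress_score",
    "recent_bereavement_flag",
}

def sanitize_profile(profile: Dict[str, str]) -> Dict[str, str]:
    """Remove attributes that could be used to exploit vulnerabilities
    before the profile is added to the retrieval context."""
    return {k: v for k, v in profile.items() if k not in VULNERABILITY_ATTRIBUTES}

print(sanitize_profile({
    "age_bracket": "75+",
    "region": "EU",
    "purchase_history": "gardening tools",
}))
# -> {'region': 'EU', 'purchase_history': 'gardening tools'}
```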

Case Study 2: Tailoring for Employment and Workers’ Management

Now consider an HR tool built on DeepSeek with RAG. Without altering the model, you integrate a database of employee records. The system retrieves performance data to recommend promotions, allocate tasks, or monitor productivity — perhaps flagging “low performers” based on behavioral patterns.

This squarely hits Annex III, Point 4: AI used in employment and workers’ management, including recruitment, termination, task allocation, and monitoring. Such systems are high-risk because they can perpetuate bias, invade privacy, or unfairly influence livelihoods.

Analysis: RAG here acts as a bridge, pulling in domain-specific data to specialize the GPAI without substantial modification. Yet the application’s purpose reclassifies it as high-risk. The provider must ensure compliance with the requirements in Chapter III, Section 2: high-quality datasets to minimize bias, logging for traceability, and human oversight so that decisions can be contested. If the system discriminates (e.g., against protected groups), it risks non-compliance. Notably, the Act’s value-chain rules (Article 25) make clear that integrating GPAI into a high-risk context can shift provider responsibilities to the deployer, who must then treat it as a new system.
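One way a deployer might approach the logging and human-oversight duties is sketched below: every recommendation records which employee records informed it and is queued for a human reviewer instead of being applied automatically. The data model and review step are illustrative assumptions, not requirements spelled out in the Act.

```python
import json
import logging
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import List

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("hr_rag_audit")

@dataclass
class PromotionRecommendation:
    employee_id: str
    recommendation: str
    retrieved_record_ids: List[str]  # traceability: which records informed the output
    model_version: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_reviewed: bool = False  # oversight: nothing takes effect until a person signs off

def log_and_queue_for_review(rec: PromotionRecommendation) -> None:
    """Write an auditable trace and route the output to a human reviewer."""
    audit_log.info(json.dumps(asdict(rec)))
    # In a real system this would push the item into a review workflow;
    # the point is that the system never auto-applies the recommendation.

log_and_queue_for_review(PromotionRecommendation(
    employee_id="E-1042",
    recommendation="consider for promotion",
    retrieved_record_ids=["perf-2024-Q4", "perf-2025-Q1"],
    model_version="deepseek-r1 (hypothetical deployment)",
))
```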

Implications and Recommendations for AI Providers

The EU AI Act’s flexibility is a double-edged sword. Minor customizations like RAG democratize AI, enabling quick adaptations without heavy retraining. However, they can inadvertently escalate classifications if the use case ventures into sensitive areas. For harmful manipulation or deception, the result could be an outright ban; for employment tools, expect mandatory conformity assessments, registration, and ongoing monitoring.

Key takeaways:

  • Assess End-Use Early: Before deploying, map your application against Article 5 and Annex III (a simple screening sketch follows this list). Tools like RAG don’t change the model but can redefine the system.
  • Documentation is Key: Maintain records of customizations to prove they’re not “substantial.”
  • Seek Expert Advice: Consult legal experts or use the Act’s sandboxes (Article 59) for testing.
  • Global Ripple Effects: Even non-EU companies may feel the impact if serving EU users.
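As a starting point for that end-use mapping, a deployer could keep a machine-readable checklist of the Act’s sensitive use areas and screen every planned deployment against it before launch, as in the sketch below. The keyword lists are abbreviated paraphrases for illustration; the authoritative wording lives in Article 5 and Annex III, and real classification needs legal review rather than keyword matching.

```python
from dataclasses import dataclass

# Abbreviated, illustrative paraphrases of the sensitive use areas;
# the authoritative text is Article 5 (prohibited practices) and
# Annex III (high-risk use cases).
PROHIBITED_SIGNALS = {
    "subliminal manipulation", "exploiting vulnerabilities",
    "social scoring", "untargeted facial image scraping",
}
HIGH_RISK_SIGNALS = {
    "employment", "recruitment", "worker monitoring", "education",
    "credit scoring", "critical infrastructure",
}

@dataclass
class Deployment:
    name: str
    intended_purpose: str  # short free-text description of the end use

def screen(deployment: Deployment) -> str:
    """Crude first-pass screening by declared purpose."""
    purpose = deployment.intended_purpose.lower()
    if any(s in purpose for s in PROHIBITED_SIGNALS):
        return "potentially prohibited (Article 5): stop and seek counsel"
    if any(s in purpose for s in HIGH_RISK_SIGNALS):
        return "potentially high-risk (Annex III): conformity obligations likely"
    return "lower risk: document the assessment and watch for scope creep"

print(screen(Deployment(
    name="HR assistant",
    intended_purpose="RAG over employee records for worker monitoring and task allocation",
)))
# -> potentially high-risk (Annex III): conformity obligations likely
```

A screen like this is no substitute for legal analysis or a fundamental rights impact assessment, but it makes the “assess end-use early” step repeatable and auditable.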

As AI evolves, so will interpretations of the Act. With enforcement ramping up, staying compliant isn’t just a regulatory exercise — it’s essential for trust and innovation. What do you think: will these rules stifle creativity or foster safer AI? Share your thoughts in the comments.

This analysis is based on the current text of Regulation (EU) 2024/1689 and is not legal advice.

Originally published in Coinmonks on Medium.