
Meta sets limits on AI releases, choosing to avoid ‘risky’ systems

DATE POSTED:February 4, 2025

Meta just dropped a new policy document, and it suggests that there might be situations where they choose not to release a powerful AI system they’ve built in-house. It appears that they’re setting some ground rules for when keeping an AI under wraps might be the right move.

The company sorts these powerful AI models into two categories: “high risk” and “critical risk.”

High-risk AI includes powerful AI models that could be used to plan or carry out cyberattacks or even aid in the development of chemical and biological weapons. The systems don’t necessarily guarantee an attack’s success, but they make things way easier for bad actors.

Critical-risk AI takes things a step further. These are AI systems that not only fall into the high-risk category but could also enable catastrophic attacks, ones that, if launched, can’t be stopped or countered effectively. Think fully automated cyberattacks targeting even the most secure companies or AI-driven tech that makes biological weapons more accessible. Essentially, these are worst-case scenario models that could cause major global damage if misused.

In the document, the company states: “If a frontier AI is assessed to have reached the critical risk threshold and cannot be mitigated, we will stop development.”

Meta’s framework sets out four criteria that an outcome must meet to count as a risk under the framework:

Plausible: a causal pathway to a catastrophic outcome can be identified, with definable and simulatable threat scenarios, keeping the assessment evidence-based and actionable.

Catastrophic: the outcome would result in large-scale, devastating, and potentially irreversible harm.

Net new: the outcome cannot currently be realized with existing tools, at existing costs, or by current threat actors.

Instantaneous or irremediable: the catastrophic impact is immediate, or inevitable because no feasible measures exist to mitigate or reverse it.

Meta outlines four criteria used to define risks under its framework. Credit: Meta

If Meta decides an AI system falls into the high-risk category, it says it will limit access internally and won’t release it until it can implement safeguards to bring the risk down to a more manageable level. But if a system is classified as critical-risk, Meta plans to pause development entirely and put security measures in place to prevent leaks until they figure out how to make it safer.
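The two-tier decision process described above can be sketched as simple branching logic. This is a hypothetical illustration of the policy as reported, not Meta’s actual tooling; the tier names and response strings are assumptions made for the example.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers mirroring the categories in Meta's framework."""
    STANDARD = "standard"
    HIGH = "high"
    CRITICAL = "critical"


def release_decision(tier: RiskTier) -> str:
    """Map a risk tier to the response described in the policy document."""
    if tier is RiskTier.CRITICAL:
        # Critical risk: pause development entirely and guard against leaks.
        return "stop development; restrict access; prevent leaks"
    if tier is RiskTier.HIGH:
        # High risk: keep the system internal until safeguards reduce
        # the risk to a more manageable level.
        return "limit internal access; withhold release until mitigated"
    # Anything below the high-risk threshold follows normal release review.
    return "eligible for release under standard review"
```

The point of the sketch is that the framework is a gating function: classification happens first, and release is the default only when neither threshold is reached.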

The risks of Meta’s open-source AI strategy

Experts see this as Meta’s way of responding to criticism over its open-source AI strategy, according to TechCrunch. While Meta has been pushing for more openness with its Llama models, that approach has raised concerns, especially after reports surfaced that a US geopolitical adversary allegedly used Llama to develop a military chatbot.

At the same time, this move might also be Meta’s response to the rise of China’s DeepSeek, an AI developer that releases its models fully open-source but without strict security guidelines.

Featured image: Canva

The post Meta sets limits on AI releases, choosing to avoid ‘risky’ systems appeared first on ReadWrite.