
Anthropic Pushes for Regulations as Britain Launches AI Testing Platform 

DATE POSTED: November 6, 2024

The U.K. government unveiled a standardized digital platform for AI safety verification this week, projecting that the country’s AI assurance market will reach 6.5 billion pounds by 2035, while AI developer Anthropic warned that governments have just 18 months to implement effective regulation before risks escalate. The calls for swift oversight come as the Brooks Tech Policy Institute proposed a new regulatory framework, dubbed the “SETO Loop,” designed to help policymakers tackle mounting concerns about AI governance.

Anthropic Calls for Swift AI Regulation

Artificial intelligence company Anthropic is urging governments to implement targeted regulation within the next 18 months, warning that the window for proactive risk prevention is closing as AI capabilities rapidly advance.

“Dragging our feet might lead to the worst of both worlds: poorly-designed, knee-jerk regulation that hampers progress while also failing to be effective at preventing risks,” the company stated in a policy paper.

The company reports dramatic improvements in AI capabilities, with performance on software engineering tasks jumping from 1.96% (Claude 2, October 2023) to 49% (Claude 3.5 Sonnet, October 2024). Its internal Frontier Red Team has found that current models can assist with cyber offense-related tasks. At the same time, testing by the U.K. AI Safety Institute shows some models matching PhD-level expertise in biology and chemistry.

The AI developer advocates for a three-pronged regulatory approach: transparency about safety policies, incentives for better security practices, and simple, focused rules. The company points to its own Responsible Scaling Policy, which has been in place since September 2023, as a potential model.

While federal legislation would be ideal, Anthropic suggests state-level regulation may be necessary if Congress moves too slowly, citing California’s earlier attempt with the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

U.K. Launches AI Safety Platform

The U.K. government rolled out a one-stop digital platform that gives businesses standardized tools to test their AI systems for bias, privacy risks, and safety concerns, marking Britain’s first centralized approach to artificial intelligence verification.

“AI has incredible potential to improve our public services, boost productivity and rebuild our economy but, in order to take full advantage, we need to build trust in these systems which are increasingly part of our day to day lives,” Science and Technology Secretary Peter Kyle said in a news release.

The platform arrives as officials project the national market for AI assurance will grow sixfold to 6.5 billion pounds by 2035. Currently, 524 firms operate in Britain’s AI assurance sector, employing more than 12,000 people and generating over 1 billion pounds in revenue.

Alongside the platform launch, the government opened a public consultation focused on making AI verification accessible to small and medium-sized enterprises through a self-assessment tool. The initiative coincides with a new partnership between the U.K. AI Safety Institute and Singapore, signed in London by Kyle and Singapore Minister for Digital Development and Information Josephine Teo.

The Institute recently launched a 200,000-pound grant program for AI safety research and will participate in the first meeting of the International Network of AI Safety Institutes in San Francisco this month.

Think Tank Proposes New AI Regulatory Framework

Researchers have developed a new framework for regulating artificial intelligence, aiming to help policymakers tackle growing concerns about AI oversight.

The approach, developed by Cornell University’s Brooks School Tech Policy Institute and dubbed the “SETO Loop,” breaks down AI regulation into four key steps: identifying what needs protection, assessing existing regulations, selecting appropriate tools and determining which organizations should enforce them.

“Formulating AI regulation first requires identifying the social, economic, and political problems posed by the technology. The aim should be to preclude malicious use of AI rather than cause market failures by preventing provision of the service,” the report states.

The framework, developed by researchers Sarah Kreps and Adi Rao, arrives as AI transforms multiple sectors. It examines several case studies, including autonomous weapons systems, facial recognition technology and health and surgery robotics.

The report advocates for a “technoprudential” approach that balances AI’s benefits with its risks. It emphasizes the need for dynamic regulation that can adapt to rapid technological changes while coordinating across international, federal and local levels.

