Australia and US Propose Oversight as Industry Urges California AI Bill Signing

DATE POSTED: September 11, 2024

Artificial intelligence experts are advocating for safety legislation in California, while Australia is introducing a national AI regulation plan, reflecting a global trend toward increased AI governance.

Simultaneously, the U.S. Commerce Department is proposing new reporting requirements for advanced AI models, underscoring the growing emphasis on balancing innovation with security in the AI field.

AI Experts Urge Newsom to Sign California AI Safety Bill

Over 100 employees from AI companies are urging California Gov. Gavin Newsom to sign Senate Bill 1047, citing concerns about potential risks posed by AI models.

Signatories include employees from OpenAI, Google DeepMind, Anthropic, Meta and xAI. Supporters include Turing Award winner Geoffrey Hinton and University of Texas professor Scott Aaronson.

SB 1047 introduces safety testing requirements for AI companies developing models that cost more than $100 million to train or that use substantial computing power. It also mandates that AI developers in California establish fail-safe mechanisms to shut down their models in emergencies or when unforeseen consequences arise.
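
The bill does not prescribe how a shutdown capability must be built. Purely as a hypothetical illustration of what such a software fail-safe can look like, the Python sketch below gates every inference request on a one-way shutdown flag; all class and method names here are invented for this example and do not come from the bill's text.

```python
# Hypothetical sketch of a fail-safe "full shutdown" pattern.
# Nothing here comes from SB 1047; the bill leaves implementation to developers.
import threading


class ModelService:
    def __init__(self) -> None:
        # One-way flag: once set, the service refuses all further requests.
        self._shutdown = threading.Event()

    def emergency_shutdown(self) -> None:
        """Trip the fail-safe; every subsequent request is refused."""
        self._shutdown.set()

    def generate(self, prompt: str) -> str:
        if self._shutdown.is_set():
            raise RuntimeError("Model is in emergency shutdown; request refused.")
        # Placeholder for the actual model call.
        return f"(model output for: {prompt!r})"


service = ModelService()
print(service.generate("hello"))  # served normally
service.emergency_shutdown()
# service.generate("hello") would now raise RuntimeError
```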

“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure,” a Monday (Sept. 9) statement from the employees said. “It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”

The diverse group of signatories includes current and former employees from various technical and policy roles. Some opted to remain anonymous.

If signed, the bill would position California as a leader in AI regulation amid growing concerns about the technology’s rapid advancement.

Australia Unveils AI Safety Plan

The Australian government announced two initiatives Thursday (Sept. 5) aimed at enhancing AI safety and regulation in the country.

Industry and Science Minister Ed Husic released a proposal paper outlining potential mandatory guardrails for AI in high-risk settings and a new voluntary AI safety standard for businesses.

The proposal paper, open for public consultation until Oct. 4, presents three regulatory options: adapting existing frameworks, introducing new framework legislation, or creating an AI-specific law similar to the European Union’s AI Act.

Key elements include a proposed definition of high-risk AI and 10 mandatory guardrails. The government is considering various approaches to implement these safeguards across the economy.

Simultaneously, the government introduced a voluntary AI safety standard, providing immediate guidance for businesses using high-risk AI applications.

“Australians want stronger protections on AI, we’ve heard that, we’ve listened,” Husic said in a statement Thursday. “From today, we’re starting to put those protections in place.”

The Tech Council of Australia estimated that generative AI alone could contribute between $45 billion and $115 billion annually to Australia’s economy by 2030.

Husic emphasized the importance of building trust to encourage AI adoption.

“We need more people to use AI, and to do that, we need to build trust,” he said in the statement.

The government plans to update the voluntary standard over time to align with evolving best practices in other jurisdictions, including the EU, Japan, Singapore and the United States.

Commerce Department Proposes AI Reporting Rules

The U.S. Department of Commerce on Monday (Sept. 9) unveiled a proposed rule requiring companies developing advanced AI models to report critical information to the government. The move aims to enhance oversight of potentially dual-use AI technologies that could impact national security.

Under the proposal, firms conducting AI model training runs exceeding 10^26 computational operations or possessing large-scale computing clusters would be subject to quarterly reporting requirements. Companies would have to disclose ongoing and planned activities related to dual-use foundation models, including cybersecurity measures and ownership of model weights.
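
To give a sense of scale for the 10^26 figure, the hedged Python sketch below estimates training compute with the widely used rule of thumb of roughly 6 floating-point operations per parameter per training token. The approximation, the function names and the example model sizes are illustrative assumptions, not part of the proposed rule:

```python
# Illustrative only: back-of-the-envelope check against the 10^26-operation
# reporting threshold cited in the proposed Commerce Department rule.
REPORTING_THRESHOLD_OPS = 1e26


def estimated_training_ops(num_parameters: float, num_tokens: float) -> float:
    """Common heuristic: training costs ~6 FLOPs per parameter per token."""
    return 6 * num_parameters * num_tokens


def would_trigger_reporting(num_parameters: float, num_tokens: float) -> bool:
    return estimated_training_ops(num_parameters, num_tokens) >= REPORTING_THRESHOLD_OPS


# Example: a hypothetical 1-trillion-parameter model trained on 20 trillion
# tokens lands around 1.2e26 operations, just above the threshold.
ops = estimated_training_ops(1e12, 20e12)
print(f"Estimated training operations: {ops:.2e}")  # 1.20e+26
print("Quarterly reporting required:", would_trigger_reporting(1e12, 20e12))
```

Under this heuristic, most models trained to date would fall well below the threshold; the reporting requirement would target only the largest frontier training runs.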

The rule, stemming from President Joe Biden’s October 2023 executive order on AI, seeks to ensure the U.S. industrial base is prepared to support national defense while mitigating potential risks associated with advanced AI systems.

