With two months left before the U.S. presidential elections, state and federal officials are looking for more ways to address the risks of disinformation from AI and other sources.
Last week, the California Assembly approved legislation to improve transparency and accountability with new rules for AI-generated content, including access to detection tools and new disclosure requirements. If signed, the California AI Transparency Act wouldn’t go into effect until 2026, but it’s the latest in a range of state efforts to begin addressing the risks of AI-generated content creation and distribution.
“It is crucial that consumers have the right to know if a product has been generated by AI,” California state senator Josh Becker, the bill’s sponsor, said in a statement. “In my discussions with experts, it became increasingly clear that the ability to distribute high-quality content made by generative AI creates concerns about its potential misuse. AI-generated images, audio and video could be used for spreading political misinformation and creating deepfakes.”