Microsoft Calls for Legislative Response to the Growing Deepfake Issue

In light of the growing use of AI-generated deepfakes this election season, Microsoft is calling on lawmakers to curb their use through legislation.

In a blog post published Tuesday, Microsoft President Brad Smith argued that AI-generated deepfakes are growing in sophistication and accessibility, presenting risks of fraud, abuse, and manipulation, especially against vulnerable groups like children and seniors.

His comments come days after Elon Musk, owner of the social media platform X, shared with his followers a deepfake video of Vice President Kamala Harris making statements she never made. While Smith argues that legislation is needed to curb election interference of this kind, his call extends to all harmful uses of the technology.

"In addition to combating AI deepfakes in our elections, it is important for lawmakers and policymakers to take steps to expand our collective abilities to (1) promote content authenticity, (2) detect and respond to abusive deepfakes, and (3) give the public the tools to learn about synthetic AI harms," wrote Smith.

Many states have already enacted laws this year requiring the disclosure of AI-generated content in political ads, and states like California, Washington, Texas and Michigan have laws on the books regulating the overall use of deepfake content. California Governor Gavin Newsom has also signaled that he will strengthen his state's deepfake regulation in light of this week's Musk incident. However, Microsoft said that legislatures must unite to do more.

In the post, Smith outlines three ways in which policymakers can craft the needed legislation:

  • Enact a federal "deepfake fraud statute" to provide law enforcement with a framework to prosecute AI-generated fraud and scams.
  • Require AI system providers to use advanced provenance tools to label synthetic content, enhancing public trust in digital information.
  • Update federal and state laws on child sexual exploitation and non-consensual intimate imagery to include AI-generated content, imposing penalties to curb the misuse of AI for sexual exploitation.

"Microsoft offers these recommendations to contribute to the much-needed dialogue on AI synthetic media harms," wrote Smith. "Enacting any of these proposals will fundamentally require a whole-of-society approach. While it’s imperative that the technology industry have a seat at the table, it must do so with humility and a bias towards action."

Smith also reiterated that the burden of curbing the threat posed by deepfakes doesn't lie only with lawmakers, but with the private sector as well. He pointed to Microsoft's own track record, saying the company has taken steps to address the issue, including implementing a robust safety architecture, attaching metadata to AI-generated images, developing standards for content provenance and launching new detection tools like Azure Operator Call Protection.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
