Regulating AI-generated Voices: The FCC’s Stand Against Misleading Robocalls

The Federal Communications Commission (FCC) recently adopted a unanimous ruling that outlaws AI-generated voices in robocalls, sending a strong message against exploiting AI technology to scam individuals or mislead voters.

The ruling targets robocalls made with AI voice-cloning tools under the Telephone Consumer Protection Act (TCPA), a 1991 law that restricts junk calls using artificial and prerecorded voice messages. The FCC's move comes amid an ongoing investigation into AI-generated robocalls that mimicked President Joe Biden's voice to discourage people from voting in last month's New Hampshire primary.

Implications of the New Regulation

The new ruling immediately empowers the FCC to fine companies that use AI voices in their calls or to block the service providers that carry them. It also opens the door for call recipients to file lawsuits and gives state attorneys general a new mechanism to crack down on violators. FCC Chairwoman Jessica Rosenworcel highlighted the urgency of the action, stating, “All of us could be on the receiving end of these faked calls, so that’s why we felt the time to act was now.”

Enforcement and Fines

The ruling classifies AI-generated voices in robocalls as “artificial” under the TCPA, subjecting them to the same enforcement standards as prerecorded messages. Violators can face steep fines of more than $23,000 per call. The FCC has previously wielded the consumer-protection law against robocallers interfering in elections, including a $5 million fine imposed on two conservative hoaxers who falsely warned people in predominantly Black areas that voting by mail could heighten their risk of arrest, debt collection, and forced vaccination. The law also grants call recipients the right to sue and potentially recover up to $1,500 in damages for each unwanted call.

The Ongoing Battle Against Misinformation

Despite the new rules, Josh Lawson, director of AI and democracy at the Aspen Institute, advises voters to brace for personalized spam targeting them by phone, text, and social media. “We must understand that bad actors will continue to rattle the cages and push the limits,” he says. Kathleen Carley, a Carnegie Mellon professor specializing in computational disinformation, emphasized the need for technology that can identify AI-generated audio: “That is possible now because the technology for generating these calls has existed for a while. It’s well understood, and it makes standard mistakes. But that technology will get better.”

The Broader Picture: AI in Elections

Sophisticated generative AI tools, including voice-cloning software and voice deepfakes, as well as image generators, are already being used in elections in the U.S. and worldwide. There have been instances of campaign advertisements using AI-generated audio or imagery, and some candidates even experimented with using AI chatbots to communicate with voters. While bipartisan efforts in Congress have sought to regulate AI in political campaigns, no federal legislation has passed, leaving the general election vulnerable to AI misuse.
