911 dispatchers in Pennsylvania and across the country received dozens of AI-generated swatting calls on Wednesday, causing shutdowns and panic before police determined all of the calls were hoaxes.
As artificial intelligence technology grows more advanced, the line between reality and fiction blurs a little more each day. AI systems can now generate nearly every type of media: OpenAI’s ChatGPT can write poems, Midjourney has produced images of the Pope in a puffer jacket and of Donald Trump being arrested that fooled people online, and YouTube channels like Schmoyoho have used Eleven Labs to make Barack Obama sing pop hits.
“The main purpose for these tools has been a sort of entertainment,” Dr. Rayid Ghani of Carnegie Mellon University’s Center for Data Science and Public Policy says. “People play with it, and you could argue that as an entertainment tool, it’s pretty reasonable. But it’s being turned from that to much more malicious uses.”
Also on Wednesday, OpenAI co-founder Elon Musk, Apple co-founder Steve Wozniak, former presidential candidate Andrew Yang, and more than a thousand others signed an open letter calling for a six-month pause on the development of advanced AI systems, urging governments to step in if necessary until safety protocols and independent oversight are put in place.
“A lot of these systems are being trained on whatever is being generated on the Internet, we don’t really have enough controls on what’s being generated,” Ghani says. “But we can control how it’s released, what controls are there, how it’s tested. So I think those are all the things we need to do around these such technologies.”
Initiatives like Carnegie Mellon University’s Responsible AI program aim to ensure artificial intelligence technologies are used for good, but for AI researchers, it’s a balancing act between the potential benefits and the dangers.
“We still want to do the good things people can do,” Ghani says. “But the danger is the same technology when designed for horrible purposes can make things worse. It’s sort of this question of: are all the potential benefits worth all the demonstrated risk?”