We need to know how AI firms fight deepfakes
12.02.2024 - 13:09
When people fret about artificial intelligence, it's not just because of what they see in the future but what they remember from the past, notably the toxic effects of social media. For years, misinformation and hate speech evaded the policing systems of Facebook and Twitter and spread around the globe. Now deepfakes are infiltrating those same platforms, and while the platforms are still responsible for how the bad stuff gets distributed, the AI companies making the tools have a clean-up role too. Unfortunately, just like the social media firms before them, they're carrying out that work behind closed doors.
I reached out to a dozen generative AI firms whose tools could generate photorealistic images, videos, text and voices to ask how they made sure their users complied with their rules.(1) Ten replied, all confirming that they used software to monitor what their users churned out, and most said they had humans checking those systems too. Hardly any agreed to reveal how many humans were tasked with that oversight.
And why should they? Unlike industries such as pharmaceuticals, autos and food, AI companies are under no regulatory obligation to divulge the details of their safety practices. They, like the social media firms before them, can be as mysterious about that work as they want, and that will likely remain the case for years to come. Europe's upcoming AI Act has touted “transparency requirements,” but it's unclear whether it will force AI firms to have their safety practices audited in the same way that carmakers and food manufacturers do.
For those other industries, it took decades to adopt strict safety standards. But the world can't afford for AI tools to have free rein for that long when they're evolving so rapidly. Midjourney recently updated its software to generate images so photorealistic they could show the skin pores and fine lines of politicians. At the start of a huge election year, when close to half the world will go to the polls, this gaping regulatory vacuum means AI-generated content could have a devastating impact on democracy, women's rights, the creative arts and more.
Here are some ways to address the problem. One is to push AI companies to be more transparent about their safety practices, which starts with asking questions. When I reached out to OpenAI, Microsoft, Midjourney and others, I made the questions simple: how do you enforce your rules using software and humans, and how many humans do that work?
Most were willing to share several paragraphs of detail about their processes for preventing misuse (albeit in vague public-relations speak). OpenAI, for instance, had two teams of people helping to retrain its AI models to make them safer or react to harmful outputs. The company behind controversial