McAfee unveils Project Mockingbird to stop AI voice clone scams
08.01.2024 - 06:21
/ venturebeat.com
McAfee has introduced Project Mockingbird as a way to detect AI-generated deepfakes that use audio to scam consumers with fake news and other schemes.
In a bid to combat the escalating threat posed by AI-generated scams, McAfee created its AI-powered Deepfake Audio Detection technology, dubbed Project Mockingbird.
Unveiled at CES 2024, the big tech trade show in Las Vegas, this innovative technology aims to shield consumers from cybercriminals wielding manipulated, AI-generated audio to perpetrate scams and manipulate public perception.
In these scams, such as the one in the video attached, scammers start a video with a legitimate speaker, such as a well-known newscaster, then splice in fake material that has the speaker uttering words the real person never actually said. It’s a deepfake, with both audio and video, said Steve Grobman, CTO of McAfee, in an interview with VentureBeat.
“McAfee has been all about protecting consumers from the threats that impact their digital lives. We’ve done that forever, traditionally, around detecting malware and preventing people from going to dangerous websites,” Grobman said. “Clearly, with generative AI, we’re starting to see a very rapid pivot to cybercriminals, bad actors, using generative AI to build a wide range of scams.”
He added, “As we move forward into the election cycle, we fully expect there to be use of generative AI in a number of forms for disinformation, as well as legitimate political campaign content generation. So, because of that, over the last couple of years, McAfee has really increased our investment in how we make sure that we have the right technology that will be able to go into our various products and backend technologies that can detect these capabilities that will then be able to be used by our customers to make more informed decisions on whether a video is authentic, whether it’s something they want to trust, whether it’s something that they need to be more cautious around.”
If used in conjunction with other hacked material, deepfakes could easily fool people. For instance, Insomniac Games, the maker of Spider-Man 2, was hacked and had its private data leaked onto the web. Deepfake content mixed in among such seemingly legitimate material would be hard to distinguish from the real hacked data taken from the victim company.
“What we’re going to be announcing at CES is really our first public sets of demonstrations of some of our newer technologies that we built,” Grobman said. “We’re working across all domains.”