A CEO worried about a failed deepfake attack (image: Getty)
When it comes to hacking attempts, the use of AI-generated deepfakes, whether video, audio, or both, is on the rise. The AI technology needed to power such targeted phishing campaigns, and the resources and costs required to deploy them, have evolved to the point that their use is no longer confined to state-sponsored threat groups; ordinary cybercriminals can now justify the investment. But no matter how much effort goes into creating a deepfake phishing attack, and no matter how sophisticated the AI used to generate the deepfake bait, sometimes it takes only the most unexpected development for the facade to crumble. Such was the case with the CEO of $10 billion startup Wiz.
When AI attacks a cybersecurity startup, you might expect things to go wrong, but they didn’t
In a recent Forbes article that went viral, a cybersecurity consultant described how he was targeted by a sophisticated and convincing AI-generated phishing threat. In that case, Microsoft solutions consultant Sam Mitrovic was almost fooled before he spotted flaws in an otherwise polished and believable attack involving deepfake Google support calls.
In the latest example, revealed during a discussion at the TechCrunch Disrupt event in San Francisco, Assaf Rappaport, co-founder and CEO of Wiz, said attackers had targeted his company's employees with deepfake versions of himself, and explained how the attackers' own mistakes cost them any chance of success.
One of those mistakes came down to a fear of public speaking.
How anxiety ruined an AI deepfake cyberattack
Rappaport said the attempted AI deepfake phishing attack occurred two weeks earlier, when dozens of Wiz employees received voice messages impersonating him. As with most deepfake campaigns of this type, the ultimate goal was to obtain the credentials of at least one employee, allowing the attackers to infiltrate the targeted network.
In true Scooby-Doo style, they might have gotten away with it were it not for those pesky employees. The mistakes these sophisticated attackers made were ones they could not have known about: Rappaport suffers from anxiety about public speaking. Which brings us to the first of the three reasons the attack failed.
The first mistake was that the attackers created the AI deepfake using a recording of a conference talk Rappaport had given. The second was not knowing that Rappaport's anxiety causes his voice to change when he speaks in public, so the deepfake didn't sound like the Rappaport his employees hear every day. The third was targeting a cybersecurity company whose employees are always on the lookout for exactly this kind of thing, such as a voicemail from the CEO trying to get them to give up their credentials in some way, especially once everyone realized it wasn't the CEO at all.
The moral of the story is that you should always sweat the small stuff when your boss suddenly asks you to click a link, run a file, or do anything out of the ordinary. Unfortunately, although Wiz was able to trace the source of the audio, it could not determine who carried out the attack, so the attackers were never caught. "The risk of getting caught is very low," Rappaport said, which is why AI phishing attacks like this are so attractive to cybercriminals.