The Increasing Phishing Threat of Deepfakes
Many things are not what they seem. As artificial intelligence (AI) technology has improved, individuals have used it to distort reality, creating fake images and videos (deepfakes) of everyone from President Obama to Mark Zuckerberg. Many of these applications are harmless, but others, like deepfake phishing, can be far more dangerous.
Many threat actors now use AI to produce synthetic audio, image, and video content designed to impersonate CEOs and other executives and trick employees into handing over information.
Most organizations are not prepared to deal with these kinds of threats. In 2021, Gartner analyst Darin Stewart wrote a blog post warning that while organizations are scrambling to defend against ransomware attacks, they are failing to prepare for an imminent onslaught of synthetic media.
With AI advancing quickly, and tools such as OpenAI's ChatGPT putting machine learning in everyone's hands, organizations can no longer ignore the social engineering threat posed by deepfakes. Those that do leave themselves vulnerable to data breaches.
The future of deepfake phishing in 2022 and beyond
Although deepfake technology is still in its infancy, it is rapidly growing in popularity, and cybercriminals already use it to attack unsuspecting users and organizations.
According to the World Economic Forum (WEF), the number of deepfake videos online is increasing at an annual rate of 900%. Meanwhile, VMware found that two-thirds of defenders report having seen malicious deepfakes used as part of an attack, a 13% increase over the previous year.
These attacks can be devastating. In 2021, cybercriminals used AI voice cloning to impersonate the CEO of a large corporation and tricked the organization's bank manager into transferring $35 million to another account to complete an "acquisition."
A similar incident occurred in 2019, when a fraudster called the UK CEO of an energy firm and used AI to impersonate the chief executive of its German parent company, requesting a transfer of $243,000 to a Hungarian supplier. Analysts believe the rise of deepfake phishing will continue, and that the fake content produced by threat actors will only grow more sophisticated.
KPMG analyst Akhilesh Tuteja said that "deepfake technology will mature" and that attacks using deepfakes are likely to become more common, as "they are becoming more difficult to tell apart from reality." Two years ago, it was easy to recognize fake videos because they were clunky and the subjects didn't blink; that is becoming much harder now, Tuteja explained.
Tuteja suggested that security leaders prepare for fraudsters using fake images and video to bypass authentication systems, such as biometric logins.
Deepfakes can imitate people and may bypass biometric authentication.
Deepfake phishing attacks use AI and machine learning to process a range of content, including audio and video clips, and use that data to create digital replicas of individuals. Sectigo CSO and CISO advisor David Mahdi noted that bad actors can easily develop autoencoders (a type of advanced neural network) to watch videos, study images, and listen to recordings of individuals in order to mimic their physical attributes.
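To make the autoencoder idea concrete, here is a minimal sketch in pure Python. It is only an illustration of the encode-compress-decode structure: a tiny linear autoencoder squeezes 2-D points that lie along a line down to a single number (the "code") and learns to reconstruct them. Real deepfake tooling uses vastly larger autoencoders trained on faces and voices, but the principle is the same.

```python
# Illustrative sketch only: a tiny linear autoencoder trained with plain
# gradient descent. The encoder maps a 2-D input to one number; the decoder
# maps that number back to 2-D. Minimizing reconstruction error forces the
# network to learn the structure of the data -- the same idea that lets
# large autoencoders learn and reproduce a person's face or voice.

def train_autoencoder(data, lr=0.1, epochs=300):
    w_enc = [0.5, 0.5]  # encoder weights: x -> code (a single number)
    w_dec = [0.5, 0.5]  # decoder weights: code -> reconstruction of x
    for _ in range(epochs):
        for x in data:
            code = w_enc[0] * x[0] + w_enc[1] * x[1]
            recon = [w_dec[0] * code, w_dec[1] * code]
            err = [recon[0] - x[0], recon[1] - x[1]]
            # gradients of the squared reconstruction error
            d_code = 2 * (err[0] * w_dec[0] + err[1] * w_dec[1])
            for i in range(2):
                w_dec[i] -= lr * 2 * err[i] * code
                w_enc[i] -= lr * d_code * x[i]
    return w_enc, w_dec

def reconstruct(x, w_enc, w_dec):
    code = w_enc[0] * x[0] + w_enc[1] * x[1]
    return [w_dec[0] * code, w_dec[1] * code]

# Training data: points along the direction (0.6, 0.8).
data = [(0.6 * t / 10, 0.8 * t / 10) for t in range(-10, 11)]
w_enc, w_dec = train_autoencoder(data)
recon = reconstruct((0.6, 0.8), w_enc, w_dec)
print(recon)  # approximately [0.6, 0.8]
```

The point of the toy is that nothing about the input needs to be labeled: given enough examples, the network discovers how to compress and faithfully regenerate them on its own.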
A striking example of this approach surfaced earlier this year, when hackers used content from past interviews and media appearances to create a deepfake hologram of Patrick Hillmann, chief communications officer at Binance. By imitating an individual's physical attributes, threat actors can not only fool users through social engineering but also potentially defeat biometric authentication.
Gartner analyst Avivah Litan recommends that organizations "never rely on biometric certification for user authentication applications unless it uses effective deepfake detection that assures user liveness and legitimacy."
Litan also points out that detecting these attacks is likely to become more difficult over time as the underlying AI improves at creating compelling audio and visual representations.
Litan explained that deepfake detection may be a losing proposition because of how the content is made: in a generative adversarial network, a generator network creates the synthetic content while a discriminator network tries to flag it as fake. Because the generator is trained specifically to trick the discriminator, cybercriminals effectively receive feedback they can use to produce content that is ever harder to detect.
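The adversarial dynamic Litan describes can be sketched in a few lines of Python. This is a toy 1-D GAN, not anything resembling production deepfake code: the "real" data is drawn from a normal distribution centered at 4, the generator is a single linear function, and the discriminator is a logistic classifier. Feedback from the discriminator steadily pushes the generator's output toward the real distribution.

```python
import math
import random

random.seed(0)

# Toy 1-D GAN. Real data ~ N(4, 1); generator g(z) = a*z + b with z ~ N(0, 1).
# The discriminator D(x) = sigmoid(w*x + c) tries to score real samples high
# and fakes low; the generator uses D's gradient to make its fakes more
# convincing -- the feedback loop described in the text.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_gan(steps=3000, lr=0.05):
    a, b = 1.0, 0.0          # generator parameters
    w, c = 0.0, 0.0          # discriminator parameters
    for _ in range(steps):
        x_real = random.gauss(4.0, 1.0)
        z = random.gauss(0.0, 1.0)
        x_fake = a * z + b

        # Discriminator step: ascend log D(real) + log(1 - D(fake)).
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
        c += lr * ((1 - d_real) - d_fake)

        # Generator step: ascend log D(fake) (non-saturating loss).
        d_fake = sigmoid(w * x_fake + c)
        grad = (1 - d_fake) * w          # gradient of log D w.r.t. x_fake
        a += lr * grad * z
        b += lr * grad
    return a, b

a, b = train_gan()
print(f"generator output center after training: {b:.2f}")  # near the real mean of 4
```

Notice that the discriminator's verdict is exactly what trains the generator. This is why Litan calls detection a potentially losing game: any detector that can be queried becomes free training signal for the attacker.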
The importance of security awareness training
For organizations, the simplest way to address deepfake phishing is security awareness training. While training won't stop every attack, it can reduce the likelihood of security incidents and breaches.
Deepfake phishing is best addressed by integrating it into security awareness training, said John Oltsik of ESG Global: just as users are trained to avoid clicking on suspicious links, they should receive similar training about deepfake phishing.
An important part of this training is a clear procedure for reporting suspected phishing attempts to the security team.
According to the FBI, training content should teach users how to identify deepfake spearphishing and social engineering attacks by looking for visual indicators such as distortions, warping, and inconsistencies in images and videos.
Helping users identify red flags, such as multiple images with consistent eye spacing and placement, or problems syncing audio to lip movement, can keep them from falling victim to a skilled attacker.
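Some of these red flags can also be checked programmatically. The sketch below is entirely hypothetical: it assumes some upstream tool has already produced a per-frame mouth-openness score and a per-frame audio loudness value, and then flags a clip when the two barely move together, which is one way a lip-sync problem could show up. The function names and the 0.3 threshold are illustrative inventions, not a real product's API.

```python
# Hypothetical sketch of one automatable "red flag" check: in genuine footage,
# mouth movement and speech loudness tend to rise and fall together, so a weak
# correlation between the two may indicate a lip-sync problem. The per-frame
# signals themselves are assumed to come from some external face tracker and
# audio analyzer; only the flagging logic is shown.

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lip_sync_suspicious(mouth_openness, audio_loudness, threshold=0.3):
    """Flag a clip when mouth movement barely tracks the audio."""
    return pearson(mouth_openness, audio_loudness) < threshold

# Toy usage: an in-sync mouth signal vs. one unrelated to the audio.
loudness  = [0.1, 0.9, 0.8, 0.2, 0.7, 0.1, 0.6, 0.9]
in_sync   = [0.2, 0.8, 0.9, 0.1, 0.6, 0.2, 0.7, 0.8]
unrelated = [0.5, 0.4, 0.5, 0.6, 0.5, 0.4, 0.6, 0.5]
print(lip_sync_suspicious(in_sync, loudness))    # False
print(lip_sync_suspicious(unrelated, loudness))  # True
```

A real detector would need far more than a correlation threshold, but the example shows how the same cues users are trained to notice can also feed automated screening.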
Using AI to fight adversarial AI
Organizations can also use AI to combat deepfake phishing. Generative adversarial networks (GANs), a form of deep learning, can produce synthetic datasets, including simulated social engineering attacks, for defensive research and training.
"A strong CISO can rely on AI tools, such as deepfake detection, to detect these attacks. Organizations can also use GANs to generate cyberattacks that criminals are not yet using, and then devise strategies to prevent them from happening," said Liz Grennan, expert associate partner at McKinsey.
Organizations that take this route must be prepared to invest the time, because cybercriminals can use the same techniques to create new attack types.
Criminals can also use GANs to devise new attacks, Grennan explained, so it's up to businesses to stay one step ahead.
Above all, businesses must be prepared. Companies that fail to take deepfake phishing seriously will remain vulnerable to an emerging threat vector, and as AI becomes more accessible, that vector will only grow more attractive to malicious actors.