
Deepfakes: The Next Frontier in Digital Deception?

Category: Insights
By John Scott, Lead Cyber Security Researcher

Machine learning (ML) and AI tools raise concerns over mis- and disinformation. These technologies can ‘hallucinate’, producing text and images that seem convincing but may be completely detached from reality. This can lead people to unknowingly share misinformation about events that never occurred, fundamentally altering the landscape of online trust. Worse – these systems can be weaponised by cyber criminals and other bad actors to spread disinformation, using deepfakes to deceive.

Deepfakes – the ability to mimic someone’s voice or appearance and make them appear to say whatever you want – are a growing threat in cybersecurity. Today’s widespread availability of advanced technology and accessible AI tools allows virtually anyone to produce highly realistic fake content.


Many businesses care deeply about ransomware defences, but the FBI estimates that Business Email Compromise (BEC), or CEO fraud, attacks cost businesses worldwide three times as much as ransomware attacks do. Deepfake technologies only make these attacks more plausible, and therefore more effective.

A notable example from earlier this year involves a finance worker tricked into paying out $25 million to fraudsters who used deepfake technology to impersonate the company’s CFO during a video conference call in Hong Kong.

As deepfake technology becomes more advanced, it becomes harder to spot the fakes. Microsoft’s recent announcement of VASA-1 shows how this technology could help boost educational equity, but it also makes it easier for scammers to create deepfakes for dishonest purposes. That’s why companies must prioritise educating employees on the warning signs of deepfakes, in much the same way they are educated about social engineering, to prevent them from falling victim to a scam. Unfortunately, many companies are not yet doing this.

The law around deepfakes

The positive news is that legislation aimed at helping people deal with deepfakes is on its way. This summer, the EU’s AI Act, the world’s first comprehensive law regulating artificial intelligence, came into force. The Act has its origins in April 2021, when the European Commission proposed the first EU regulatory framework for AI. Under that framework, AI systems that can be used in different applications are analysed and classified according to the risks they pose to users, with more or less regulation applied depending on the risk level.

The King’s Speech in July also touched on AI, in particular the need for the new government to “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. This includes labelling materials as ‘Produced by AI’, though there are already doubts about the usefulness of such labels: cyber criminals are unlikely to apply them, and even for legitimate use there is currently little guidance on how much of a piece of content would need to be AI-generated for the label to apply.

The US Government is looking at similar measures, with a presidential Executive Order in November 2023 and a 2024 Act of Congress addressing the labelling of AI-generated content, among other AI issues.

Don’t panic; be prepared for deepfakes

It’s important to approach the hype surrounding deepfakes with caution. While deepfakes can lend scams an extra layer of credibility, it’s equally important not to fuel unnecessary panic. For instance, the influence of nation-state interference in elections, often associated with deepfakes, may be overstated, and quick, firm responses can mitigate the impact. Many aspects of this issue are manageable.

The rise of generative AI and deepfakes has undeniably diversified the threat landscape. While the volume and sophistication of attacks may increase, the underlying principles of these attacks remain unchanged – which means defensive strategies can also stay largely the same.

Warning signs of deepfakes

Detecting deepfake scams hinges on scrutinising the content, whether audio or visual. Slowing down and taking a moment to notice subtle irregularities is key: it is almost always better to act safely than swiftly, as this reduces the chances of being rushed into an error.

  • Talk the talk: Be vigilant about awkward pauses, odd pronunciations, or speech patterns that sound unnatural in the audio; these may hint at AI manipulation.

  • Pressure point: Peculiar or inappropriate phrasing often signals AI involvement, as do demands for immediate action or undue urgency – hallmark tactics used in scams to pressurise the recipient.

  • Movements: Visual cues include unsynchronised mouth movements, repetitive gestures, and incongruous facial expressions.

By familiarising themselves with these indicators, employees can better spot potential deepfake threats.

What to do when faced with a deepfake?

As an individual, when faced with any social engineering scam – whether it arrives by audio, video, or email, and whether or not it uses AI deepfakes:

  • Pause: Take a moment; don’t feel pressured to act quickly, and be alert to attempts to manipulate your emotions.

  • Assess plausibility: Question the request's normalcy. Is it an unusual task, or a common one presented unusually?

  • Review procedures: Consider whether the request violates your organisation’s protocols, and make sure you understand why the action is needed.

  • Verify information: Seek confirmation from others or directly from the source. Avoid responding solely via the initial communication channel; opt for a call or direct message for validation. If a voice message is received, cross-verify through instant messaging.

If something doesn’t feel right – escalate it to your security or IT team.

As an organisation, the best thing you can do is establish robust procedures and strictly adhere to them. For example, giving an employee sole discretion to authorise a $25 million payment without oversight is a recipe for financial loss. Implementing and following clear payment authorisation protocols, such as the dual-approval rule sketched below, will significantly reduce the financial risk.
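To make the idea concrete, here is a minimal sketch in Python of what a dual-approval rule might look like. Everything in it – the threshold, the field names, and the is_authorised function – is hypothetical and for illustration only; in practice such controls belong in your payment platform and written procedures, not a standalone script.

```python
# Hypothetical sketch of a dual-approval payment rule; all names and
# thresholds are illustrative assumptions, not a real system's API.
from dataclasses import dataclass

DUAL_APPROVAL_THRESHOLD = 10_000  # assumed cut-off for high-value payments


@dataclass
class PaymentRequest:
    amount: float
    requester: str
    approvers: list[str]        # distinct people who signed off
    verified_out_of_band: bool  # e.g. confirmed by phone, not just video/email


def is_authorised(req: PaymentRequest) -> bool:
    """Reject high-value payments that lack independent oversight."""
    if req.amount <= DUAL_APPROVAL_THRESHOLD:
        return len(req.approvers) >= 1
    # High value: require two approvers other than the requester, plus
    # confirmation over a separate channel (a counter to deepfake calls).
    independent = set(req.approvers) - {req.requester}
    return len(independent) >= 2 and req.verified_out_of_band


# A $25m payment approved only by the person on the video call is refused.
request = PaymentRequest(
    amount=25_000_000,
    requester="finance_worker",
    approvers=["finance_worker"],
    verified_out_of_band=False,
)
print(is_authorised(request))  # False
```

The point is not the code itself but the control it encodes: no single person, however convincing the voice or face making the request, can move a large sum alone.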
