
AI-enabled Cyber Crime: 
The Next Frontier?

After a breakout year in 2023, artificial intelligence (AI) technology is now being mobilised by cyber criminal groups to automate and enhance their operations, move faster, and mount bolder attacks than ever before.

  • On Monday 29th January 2024, a clerk at an engineering firm reported to the Hong Kong authorities that they had attended a video conference call and been duped into paying HK$200m (£20m) of their company’s money to a fraudster.3 Soon after the Hong Kong Police Force began its investigation, senior superintendent Baron Chan realised that the business had suffered one of the boldest AI-assisted thefts to date.

    Within a week, Chan sent a warning to the world, notifying media that this was a deepfake attack on the business.4 He said: “Because the people in the video conference looked like the real people, the informant made 15 transactions as instructed… I believe the fraudster downloaded videos in advance and then used AI to add fake voices to use in the video conference.”

  • It was later revealed that the employee worked for British multinational firm Arup, which has 18,500 employees across 40 countries. While the incident at Arup has become known as the ‘Zoom of Doom’, the firm is not alone in facing the threat of deepfake dupes. The world’s largest advertising firm, WPP, was exposed to a deepfake attack when CEO Mark Read had his voice cloned by scammers who used a fake WhatsApp profile to try to trick colleagues into sharing money and personal details.5 Similarly, in April, a LastPass employee thwarted a cyber-attack by criminals using deepfake audio to impersonate its CEO Karim Toubba.6

    While AI has generated excitement, these examples highlight how the technology is now being weaponised by organised cyber criminals to trick employees and steal millions from vulnerable companies. Employees can no longer trust what they see and hear, and more sophisticated authentication techniques are needed to prevent these kinds of attacks.

  • “The issue of authenticity will be a challenge for business leaders and courts in the years ahead. How do you verify information and data like voices and pictures in a world where AI easily deceives people? These deepfake issues will manifest in hacking, transfer of money, and fake news which will proliferate in the coming years.” 

    Melissa Collins
    Claims Focus Group Leader, Cyber and Technology, Third-Party Claims 

3- https://www.theguardian.com/world/2024/feb/05/hong-kong-company-deepfake-video-conference-call-scam
4- https://www.theguardian.com/technology/article/2024/may/17/uk-engineering-arup-deepfake-scam-hong-kong-ai-video
5- https://www.ft.com/
6- https://www.scmagazine.com/news/lastpass-thwarts-attempt-to-deceive-employee-with-deepfake-audio

The information set forth in this document is intended as general risk management information. It is made available with the understanding that Beazley does not render legal services or advice. It should not be construed or relied upon as legal advice and is not intended as a substitute for consultation with counsel. Although reasonable care has been taken in preparing the information set forth in this document, Beazley accepts no responsibility for any errors it may contain or for any losses allegedly attributable to this information. BZ CBR 119.