Would you spot a deepfake? Research suggests not all Britons are sure

August 2024 by CyberArk

With the rapid advancement of artificial intelligence (AI) capabilities, AI-powered threats are evolving at an unprecedented pace. Security teams are particularly wary, with 93% of cybersecurity professionals expecting AI-powered tools to negatively impact their organisation within the next 12 months.

• Three quarters of UK security leaders are confident employees can identify deepfakes
• But with over a third of workers admitting they would struggle to identify a fake phone call from their boss, doubts clearly linger
• 46% are also apprehensive about potential malicious use of their likeness by cyberattackers

High-profile cases of malicious GenAI deepfake use are becoming an everyday occurrence. Take UK engineering firm Arup, which fell victim to a £20m scam in which deepfake-generated audio of the company's senior officers was used to trick an employee into transferring funds to cybercriminals. Left unaddressed, this technology could prove immensely damaging, yet business leaders have yet to recognise the extent of the danger looming on the horizon.

CyberArk’s recent Threat Landscape report reveals that three quarters (75%) of security leaders feel confident in their employees’ ability to identify deepfakes of their company’s leadership team, whether video, audio or otherwise. Parallel research of UK office workers, however, shows that not all employees share that confidence. Many workers fear the growing use of generative AI for malicious purposes, revealing a confidence gap between security leadership and the wider organisation.

CyberArk’s study of UK workers revealed that:

• Alarmingly, more than one in three (34%) UK employees say they would struggle to differentiate between a real and a fake phone call or email from their boss, suggesting that security leaders’ confidence is misplaced.
• Almost half (46%) of UK workers are apprehensive about their likeness being exploited in deepfakes.
• These anxieties far exceed employees’ concerns about AI replacing their roles, with only 37% fearing that possibility.

The research highlights that while the majority of UK office workers are confident in their ability to spot a deepfake, the sizeable proportion who might be fooled represents a major potential security weakness. Clearly, there is an urgent need for employee education and vigilance, as well as for proactive tools to combat the escalating risks posed by AI-powered attacks, and in particular the identity security threat that deepfakes pose.

“The increasing sophistication of AI-generated deepfakes is blurring the lines between reality and deception, adding a new layer of complexity to identity-based attacks. Businesses need to realise this technology poses a legitimate threat and that, without the right protection, sooner or later these attacks are going to succeed,” says Rich Turner, President, EMEA at CyberArk.

“If aspects of employees’ digital identity are stolen or faked, the potential consequences could be extremely damaging. Ultimately, employees are your first and most vital line of defence and, once that line is breached, it leaves your business at great risk. Guarding sensitive access is all-important. That way, even if attackers can use a deepfake to steal a credential and get a foothold in the business, they can’t easily get to sensitive data without detection, limiting the damage the attack may cause. Businesses should also promote a culture where it’s OK to question or challenge things that don’t look quite right.”

