Artificial Intelligence: more foe than friend for UK cybersecurity?
December 2023 by CyberArk
According to CyberArk’s 2023 Identity Security Threat Landscape Report, almost 9 in 10 UK cybersecurity teams (88%) are embracing Artificial Intelligence (AI), with many already using the technology to triage minor threats. Generative AI (GenAI) in particular is already being used to identify behavioural anomalies faster and improve cyber resilience, giving teams more time to upskill on evolving threats or strengthen defences against increasingly innovative cyberattacks. While human talent remains critical for combating emerging threats, AI can also help bridge some of the gaps caused by the 3.4-million-person cybersecurity worker shortage.
However, increasingly popular GenAI tools are also opening a whole new Pandora’s box of security vulnerabilities and causing concern amongst security professionals. CyberArk’s research indicates that 87% expect AI-enabled threats to adversely affect their organisation in the next year. Chatbot security is a significant worry: the top concern for 28% of UK respondents is that generative technologies will give cyberattackers the means to exploit vulnerabilities and inject malware, impersonate employees through deepfakes, and conduct phishing campaigns.
Malicious actors are already using GenAI to create legitimate-sounding email copy for phishing campaigns, or even to generate malware that bypasses facial recognition authentication or evades detection. CyberArk research earlier this year demonstrated such techniques, showing that attackers could use ChatGPT to generate malicious code and create polymorphic malware that is highly evasive to most anti-malware products.
“Cybersecurity teams have to tread extremely carefully in their dealings with AI. Balancing the undoubted benefits it brings with the sizeable risks it creates is no simple task”, says David Higgins, senior director, Field Technology Office at CyberArk, noting that the use of AI creates an explosion of machine identities that malicious actors can exploit to get access to confidential data and controls.
“Establishing AI-specific company guidelines, publishing usage policies and updating employee cybersecurity training curricula is a must. Due diligence is needed before any AI-led tools are introduced, as that’s the most effective way to mitigate risk and reduce vulnerabilities. Without appropriate identity security controls and malware-agnostic defences, it quickly becomes difficult to contain high volumes of innovative threats that can compromise credentials en route to accessing sensitive data and assets”.