2024 Cybersecurity Predictions: Insights From Industry Experts
December 2023 by Experts
As 2024 approaches, cybersecurity experts are beginning to weigh in on how the threat landscape is set to evolve in the new year. Generative AI, ransomware, OT and phishing attacks are among the top trends anticipated to dominate the conversation in 2024. Below, experts from across the space share some of their thoughts on the most imminent threats and risks organizations should be on the lookout for over the next 12 months.
Max Heinemeyer, Chief Product Officer, Darktrace
AI will be further adopted by cyber attackers and might see the first AI worm
2023 has been the year in which attackers tested things like WormGPT and FraudGPT and adopted AI in their attack methodologies. 2024 will show how more advanced actors like APTs, nation-state attackers and advanced ransomware gangs have started to adopt AI. The effect will be even faster, more scalable, more personalized and contextualized attacks with reduced dwell time.
It could also be the year of attackers combining traditional worming ransomware - like WannaCry or NotPetya - with more advanced, AI-driven automation to create an aggressive autonomous agent with sophisticated, context-based decision-making capabilities.
Agnidipta Sarkar, VP CISO Advisory, ColorTokens
Realization about Digital Resilience
Early adopters of digital transformation will begin to see the fruits of their vision and execution as digital transformation projects come to completion. However, this will also bring the realization that many enterprises have not planned for digital resilience, and as they move into digital-only business models, they will retroactively attempt to build it in. Many enterprises are already at this point, and disruptions to digital business-as-usual will help them recognize the value of digital resilience. The result will be the evolution of a new Digital Resilience market focused on building digital immunity at enterprise scale.
The emergence of “poly-crisis” due to pervasive AI-based cyber-attacks
We saw the emergence of AI in 2022, along with the emergence of its misuse as an attack vector, helping make phishing attempts sharper and more effective. In 2024, I expect cyberattacks to become pervasive as enterprises transform. It is already possible to entice AI enthusiasts into falling prey to AI prompt injection. Come 2024, perpetrators will find it easier to use AI to attack not only traditional IT but also cloud containers and, increasingly, ICS and OT environments, leading to the emergence of a “poly-crisis” that threatens financial impact and human life simultaneously, in cascading effects. Critical computing infrastructure will be under increased threat due to rising geopolitical tensions. Cyber defense will be automated, leveraging AI to adapt to newer attack models.
Microsegmentation will be a foundational element of cyber defense
With the increase in digital business-as-usual, cybersecurity practitioners are already feeling lost in a deluge of inaccurate information from a mushrooming number of cybersecurity solutions, coupled with a lack of cybersecurity architecture and design practices, resulting in porous cyber defenses. In 2024, business leaders will realize that investments in microsegmentation force IT and security teams to develop cybersecurity architecture and design grounded in digital business context, because microsegmentation is the last line of defense during a cyber-attack. Security and risk leaders will leverage the panoptic visualization capability of microsegmentation to build immediate cyber defenses that protect digital business-as-usual even during severe cyber-attacks.
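The default-deny posture that makes microsegmentation a last line of defense can be sketched in a few lines. This is a toy policy model, not any vendor's API; the segment names and ports are illustrative assumptions.

```python
# Minimal sketch of a default-deny microsegmentation policy check.
# Segment names, ports, and flows below are hypothetical examples.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier"): {443},
    ("app-tier", "db-tier"): {5432},
    ("it-mgmt", "app-tier"): {22},
}

def is_flow_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: a flow passes only if it is explicitly allowed."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

# Lateral movement from a compromised web tier straight to the database
# is blocked, even though web -> app and app -> db are each permitted.
assert is_flow_allowed("web-tier", "app-tier", 443)
assert not is_flow_allowed("web-tier", "db-tier", 5432)
```

The point of the sketch is the default: anything not explicitly whitelisted is denied, which is what limits blast radius during an active breach.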
ICS/OT Cybersecurity needs will use AI innovation to solve mundane operational problems.
The increased need for distributed business decisions connecting IT and OT will drive AI-based solutions that address human safety and operational reliability, and highly efficient ICS/OT cybersecurity solutions that can solve mundane issues like patch and vulnerability management and OT access management. Enterprises will begin to see the loss of ICS/OT data impacting business outcomes and will therefore begin investing in ways to regulate the flow of ICS/OT data using AI tools. ICS/OT microsegmentation will bring unparalleled visualization to augment cybersecurity practices, especially in regulating the use of Active Directory within ICS/OT.
Rajesh Khazanchi, CEO & Co-founder, ColorTokens
Ransomware Attacks Will Grow More Sophisticated
Ransomware attacks will continue to evolve in sophistication, with attackers targeting high-value assets. Organizations must enhance their defenses and incident response capabilities.
AI and ML-Powered Threats and Defenses
Both cyber attackers and defenders will increasingly rely on artificial intelligence and machine learning. Attackers may use AI to automate attacks, while organizations will use it for more effective threat detection and response.
OT/IoT Security Challenges
As the number of Internet of Things (IoT) devices grows, securing these devices, including OT, will remain a significant concern, with the need for robust security measures and vulnerability management.
Cloud Security Focus
With widespread cloud adoption, ensuring the security of cloud environments will be paramount. Organizations must implement strong cloud security strategies and configurations to protect their data and applications.
Zero Trust Security Adoption
The Zero Trust security model, which assumes no implicit trust even within an organization's own network, will gain momentum. Organizations will prioritize identity and access management, along with least-privilege access controls, to enhance overall security.
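The least-privilege principle behind Zero Trust can be illustrated with a minimal, default-deny authorization check; the roles, actions, and resources below are hypothetical.

```python
# Illustrative least-privilege check in the spirit of Zero Trust:
# every request is evaluated against explicit grants, and nothing is
# implied by being "inside" the network. All names are hypothetical.

GRANTS = {
    "finance-analyst": {("read", "ledger")},
    "db-admin": {("read", "ledger"), ("write", "ledger")},
}

def authorize(role: str, action: str, resource: str) -> bool:
    # Deny by default; an unknown role or unlisted action/resource fails.
    return (action, resource) in GRANTS.get(role, set())

assert authorize("finance-analyst", "read", "ledger")
assert not authorize("finance-analyst", "write", "ledger")
assert not authorize("intern", "read", "ledger")
```

A real deployment would evaluate far more signals (device posture, session risk, location), but the deny-by-default shape is the same.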
Paul Baird, Chief Field Technical Officer, Qualys
CISOs will go from consolidation to simplification around security
CISOs will prioritize simplifying their security stack in 2024. Companies implement around 70–90 security tools on average, and these huge numbers call for CISOs to make their operations more effective and efficient. Rather than simply consolidating the number of security tools being implemented, CISOs will focus on simplifying their processes and making security easier across the board. Concentrating on ease of use and ‘one click to rule them all’ approaches will be the key objective for teams.
Looking at this in action, prioritized automation will be used more frequently to help security operations teams focus on the largest threats to their organizations: the issues that are most pressing and present the biggest chance of being exploited. We’ll begin to see remediation become more automated, freeing up skilled people to spend their time on efforts that will make a difference.
Skill issues will force more hands around AI deployments
With AI’s acceleration in a variety of industries, we’ve seen panic around AI replacing humans. While AI does have the potential to take on low-level tasks that security teams usually handle manually, these deployments are there to augment security teams. AI will revitalize teams and increase productivity. For more entry level employees, automation will support their onboarding journey, allowing them to make a tangible difference in security operations faster. In fact, taking more menial tasks off the hands of security teams should prove beneficial for their mental health, with burnout being a constant issue in the IT/security industry. Not to mention, the economy will only exacerbate organizational issues such as quiet quitting and burnout. In 2024, security leaders will need to pay closer attention to the health and well-being of their team members, in addition to managing the business and risk. AI will help with risk management and security maintenance, but it won’t be able to have meaningful conversations with team members about how they’re feeling.
Gartner predicts that lack of talent will be responsible for more than half of significant cyber incidents by 2025. Supporting teams in being more effective will be a critical goal for IT leadership in 2024 to prevent that prediction from coming to fruition. AI will allow security teams to feel empowered to make a lasting impact within their roles, rather than replace them.
Education and soft skills will get more focus
Insider threats are a leading problem for IT/security teams – many attacks stem from internal stakeholders stealing and/or exploiting sensitive data, which succeed because they use accepted services to do so. In 2024, IT leaders will need to help teams understand their responsibilities and how they can prevent credential and data exploitation.
On the developer side, management will need to assess their identity management strategies to secure credentials from theft, either from a code repository hosted publicly or within internal applications and systems that have those credentials coded in. On the other hand, end users need to understand how to protect themselves from common targeted methods of attack, such as business email compromise, social engineering, and phishing attacks.
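The kind of credential hygiene described above can be sketched as a simple pattern-based scan of source code before it reaches a repository. The patterns below are illustrative assumptions; real secret scanners use far richer rule sets and entropy analysis.

```python
import re

# Illustrative secret-scanning sketch: flag lines that look like
# hard-coded credentials before code is pushed to a repository.
# These two patterns are examples only, not a complete rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS-style key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),   # inline password literal
]

def find_secrets(source: str) -> list[str]:
    """Return a human-readable list of suspicious lines."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                hits.append(f"line {lineno}: {line.strip()}")
                break
    return hits

sample = 'db_password = "hunter2"\nprint("hello")\n'
assert len(find_secrets(sample)) == 1
```

In practice such a check would run as a pre-commit hook or CI step, so exposed credentials are caught before they ever land in a public repository.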
Security teams need to prioritize collaboration with other departments within their organization to make internal security training more effective and impactful. Rather than requiring training emails/videos to be completed with little to no attention to their contents, security executives need to better understand how people outside of their department think and operate. Using techniques like humor, memorable tropes and simple examples will all help to solve the problem of insufficient and ineffective security training, creating a better line of defense against insider threats.
Jonathan Trull, Chief Security Officer, Qualys
CISOs are increasingly under pressure to quantify cyber risk in financial terms to C-suite and boardroom
De-risking the business and reducing cyber risk has become a central focus of executive stakeholders, from the CEO to the board of directors. CISOs find themselves in a challenging position – under immense pressure to address critical issues, while working with budget constraints that are tighter than ever. They are tasked with doing more with less. CISOs are being pushed more into the conversation of the financial impact of cyber risk. They need to be able to measure cyber risk in terms of financial risk to the business, communicate that effectively to the C-suite and boardroom, and eliminate the most significant risks expediently. The CISOs that succeed in these areas will be the ones that last in their roles.
Dan Benjamin, CEO and Co-Founder, Dig Security
Security programs for generative AI
• As companies begin to move generative AI projects from experimental pilot to production, concerns about data security become paramount.
o LLMs that are trained on sensitive data can be manipulated to expose that data through prompt injection attacks
o LLMs with access to sensitive data pose compliance, security, and governance risks
• The effort around securing LLMs in production will require more organizational focus on data discovery and classification - in order to create transparency into the data that ‘feeds’ the language model
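The discovery-and-classification step can be sketched as a pre-ingestion filter that keeps sensitive records out of the corpus that "feeds" the model, so a prompt injection cannot exfiltrate what the model never saw. The classification rule here is a deliberately simple, illustrative assumption.

```python
import re

# Toy pre-ingestion classifier: a single pattern (US SSN shape) stands in
# for a real data-classification engine, purely for illustration.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def classify(doc: str) -> str:
    return "sensitive" if SSN.search(doc) else "public"

def build_corpus(docs: list[str]) -> list[str]:
    # Only documents classified as public are eligible for LLM training
    # or retrieval; everything else requires explicit review first.
    return [d for d in docs if classify(d) == "public"]

docs = ["Quarterly roadmap update", "Employee SSN: 123-45-6789"]
assert build_corpus(docs) == ["Quarterly roadmap update"]
```

The design point is the direction of control: classify before the data reaches the model, rather than trying to filter the model's outputs afterwards.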
Consolidation of data security tooling
• As organizations moved to the cloud, their infrastructure became increasingly fragmented. With multi-cloud and containerization becoming de facto standards, this trend has intensified. Data storage and processing are dispersed, constantly changing, and handled by multiple vendors and dozens of tools.
• To secure data, businesses found themselves investing in a broad range of tooling - including DLP for legacy systems; CSP-native solutions; compliance tools; and more. In many cases two separate tools with similar functionality are required due to incompatibility with a specific CSP or data store.
• This trend is now reversing. Economic pressures and a growing consensus that licensing and management overhead have become untenable are leading organizations toward renewed consolidation. Businesses are now looking for a single pane of glass to provide unified policy and risk management across multi-cloud, hybrid, and on-premises environments. Security solutions are evolving accordingly - moving from point solutions that protect a specific data store toward more comprehensive platforms that protect the data itself, wherever it’s stored.
Maturation of compliance programs
• Organizations are realizing that compliance needs to be more than an annual box-ticking exercise. With regulators increasingly willing to confront companies over their use and protection of customer data, it’s become clear that compliance needs to be a strategic priority.
• Businesses will invest more in programs that enable them to map their existing data assets to compliance requirements, as well as tools that help identify compliance violations in real time - rather than waiting for them to be discovered during an audit (or in the aftermath of a breach).
Kern Smith, VP Americas, Sales Engineering, Zimperium
The Rise of QR Code Phishing
QR Code Phishing or “quishing” is becoming a very popular form of attack by cybercriminals. As the use of QR codes for everyday things such as reading a restaurant menu or paying for a parking spot continues to increase, bad actors will also continue to take advantage of this opportunity and the vulnerabilities of this mobile technology to launch attacks. This type of attack currently bypasses traditional web and email gateway controls, allowing attackers to easily embed a malicious URL containing custom malware into a QR code that could then exfiltrate data from a mobile device when scanned.
What’s more, quishing explicitly targets mobile devices, which are the primary devices able to render these links. Attackers are targeting mobile and using corporate communications to distribute these targeted attacks, mainly because most organizations have no defenses against targeted mobile attacks.
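One defensive response to quishing is to vet any URL decoded from a scanned QR code before the device opens it. This sketch assumes a hypothetical allowlist of trusted domains; a production mobile defense would also draw on live threat intelligence.

```python
from urllib.parse import urlparse

# Illustrative vetting of a URL decoded from a QR code.
# The trusted domains here are made-up examples for the sketch.
TRUSTED_DOMAINS = {"example-restaurant.com", "city-parking.example.gov"}

def vet_qr_url(decoded: str) -> bool:
    parsed = urlparse(decoded)
    if parsed.scheme != "https":
        return False  # reject http:// and non-URL payloads outright
    host = parsed.hostname or ""
    # Accept exact matches and subdomains of trusted domains only.
    return host in TRUSTED_DOMAINS or any(
        host.endswith("." + d) for d in TRUSTED_DOMAINS
    )

assert vet_qr_url("https://example-restaurant.com/menu")
assert not vet_qr_url("http://example-restaurant.com/menu")
assert not vet_qr_url("https://evil.example.net/payload.apk")
```

Because the QR payload is opaque until scanned, this check has to run on the device itself, which is exactly where most organizations currently have no controls.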
Apple officially supporting Third party app stores next year in EMEA
Apple iOS will have to officially support third party app stores in EMEA starting next year, bringing a new threat surface that organizations will need to consider. While the details of how Apple will support this requirement are still unknown, it is something that organizations will need to monitor, especially considering the vast majority of malware seen on mobile devices comes from third party app stores on both Android and iOS.
Evolving Regulatory Requirements
Regulatory requirements are constantly evolving when it comes to cybersecurity technology, and this will only continue in 2024. For example, in APAC there have been new and updated regulatory requirements requiring mobile banking applications to embed more robust protections against runtime attacks and fraud, and I expect other regions to learn from these and evolve their regulatory requirements for mobile apps as well. I also think Mobile Threat Detection (MTD) and Mobile App Vetting (MAV) will become more broadly required and standardized across all verticals, as best practices and requirements are updated to reflect the current landscape for mobile. We are already starting to see this take place. Two of the most recent examples are from the Cybersecurity Maturity Model Certification (CMMC) and the National Institute of Standards and Technology (NIST), both calling out MTD and MAV as essential components of an enterprise's or agency's mobile device security strategy, regardless of vertical, other controls in place, or general security posture.
JT Keating, SVP of Corporate Development, Zimperium
Rise of Mobile Ransomware
Another threat to beware of in 2024 is mobile ransomware. Mobile ransomware is a form of malware that affects mobile devices. A cybercriminal can use mobile malware to steal sensitive data from a smartphone or lock a device, before demanding payment to return the data to the user or unlock the device. Sometimes people are tricked into accidentally downloading mobile ransomware through social networking schemes, because they think they are downloading innocent content or critical software.
According to Zimperium’s Global Mobile Threat Report, last year was the beginning of real mobile ransomware, with a 51% increase in the total number of unique mobile malware samples detected year-over-year. It is reasonable to expect that to continue.
The growing adoption of application shielding as part of a DevSecOps framework
Application shielding will continue to grow in adoption as organizations realize its value in the DevSecOps framework. Application shielding helps DevSecOps teams work more efficiently by embedding protections that secure source code and IP against attempts at code tampering, malware injection, encryption key extraction and reverse engineering. IT and security teams will need a mobile app protection platform that meshes with a DevSecOps framework or risk being further siloed from development team efforts.
Patrick Harr, CEO, SlashNext
Beware the Weaponization of Generative Artificial Intelligence in 2024
The top threat this year and going forward involves the weaponization of generative AI to drive more sophisticated phishing attacks, and how we will address that concern from a security standpoint. We know that human training is not enough to prevent business email compromise (BEC) attacks from succeeding. According to the FBI’s Internet Crime Report, BEC alone accounted for approximately $2.7B in losses in 2022, and another $52M in losses from other types of phishing. With rewards like this, cybercriminals are increasingly doubling down on phishing and BEC attempts – and generative AI is only further greasing the wheels.
In 2024 we will see more, not less, of such human compromise attacks that are a lot more sophisticated and targeted due to the use of gen AI. We will need to rethink our roadmaps as to how we can counter this problem. We should expect an acceleration of gen AI-based attacks becoming more prevalent and targeted, and unfortunately more successful. The attackers are moving from a spray-and-pray approach that relied on high-volume phishing emails, to now instead targeting people with specific information about someone’s identity or bank account or personal details, which makes the scams much more convincing.
We will see a significant increase in both the targeted nature of these social engineering attacks and their sophistication, and ultimately their success. Email will continue to be the top threat vector, but we are seeing these attacks anywhere now, including text messages, voice messages, work collaboration tools like Slack and social media. Anywhere you can get messaged on both the personal and business side, you can get attacked.
Highly Targeted Attacks Created with Gen AI and Personal Information
Phishing and BEC attacks are becoming more sophisticated because attackers are using personal information pulled from the Dark Web (stolen financial information, social security numbers, addresses, etc.), LinkedIn and other internet sources to create targeted personal profiles that are highly detailed and convincing. They also use trusted services such as Outlook.com or Gmail for greater credibility and legitimacy. And finally, cybercriminals have moved to more multi-stage attacks in which they first engage by email, but then convince victims to speak or message with them over the phone where they can create more direct verbal trust, foster a greater sense of urgency, and where victims have less protection. They are using AI to generate these attacks, but often with the goal to get you on the phone with a live person.
We should also expect the rise of 3D attacks, meaning not just text but also voice and video. This will be the new frontier of phishing. We are already seeing highly realistic deep fakes or video impersonations of celebrities and executive leadership. As this technology becomes more widely available and less expensive, criminals will leverage it to impersonate trusted contacts of their intended victims. In 2024 we will assuredly see a rise of 3D phishing and social engineering that combines the immersion of voice, video, and text-based messages.
The Rise of Quishing and QRL Jacking
Another new twist involves the malicious use of QR codes, including quishing and QRLJacking. QR codes, or quick response codes, have become ubiquitous in recent years. Quishing adopts phishing techniques to manipulate QR codes for cyberattacks.
A typical quishing attack involves the attacker generating a QR code embedded with either a phishing link or malware download that is distributed through phishing emails, ads, social media, restaurant menus, posters, etc. In August 2023, researchers uncovered a phishing campaign that used malicious QR codes to target large companies, including a major U.S. energy firm. Similarly, QRLJacking, or quick response code login jacking, is a social engineering method that exploits the “login with QR code” feature used by many apps and websites, which can lead to full account hijacking.
Long-Range Concerns About Nation-States and Even Self-Aware Bots
It may sound like the plot of a science fiction thriller, but soon we absolutely will see the rise of generative AI-fueled malware that can essentially think and act on its own. This is a threat the U.S. should be particularly concerned over coming from nation-state adversaries. We will see attack patterns that get more polymorphic, meaning the artificial intelligence carefully evaluates the target environment and then thinks on its own to find the ultimate hole into the network, or the best area to exploit, and transforms accordingly. Rather than having a human crunching code, we will see self-learning probes that can figure out how to exploit vulnerabilities based on changes in their environment.
The final piece is the use of AI by nation-states for surveillance and espionage, and ultimately to become the arbiter of the truth for thought control. If the source of an AI answer is unknown and opaque, but the public is only given that one answer by the arbiter of truth, then the leadership can always give you what they want you to know or hear – and now you have thought control.
By applying large language models (LLM) with computer vision tools and natural language processing, we will see rapid development as we move out to more self-aware bots. That presents the classic philosophical sci-fi question of where do humans fit in with these super smart machines? As a result, we will see the use of these AI tools for more nefarious purposes that are increasingly more targeted and successful.
Bad actors will be able to do these things at scale with near zero cost, so companies will need to rethink their security roadmaps and the tooling they have used historically. This brings up the common theme of “shift left” in security, meaning building defense right into the code by conducting testing in the software development phase. Security is a multi-layered discipline to protect code throughout its lifecycle, so it is better to build security upstream to protect against downstream exploits.
The second big change is that everything in security needs to become more human ID-centric rather than network-centric. At the end of the day, we are far better off by providing access through human identity-centric methods and using AI to make that human a super-human. So rather than relying on a training simulation approach for users, we can rely on AI augmentation for that, so users don’t have to be tricked into clicking on bad phishing links, for example.
We have to shift our posture from a network-centric to a human-centric security posture. We will put an AI bubble around the user to become a super-human with an extra pair of computer vision eyes, and an ability to listen with spoken language contextualization by using AI. Everyone has talked about a personal co-pilot to help from a security posture, and we will see the rise of these AI co-pilots to augment humans and help users make the best decisions.
This problem will not go away and will only get worse. Anywhere there is money and opportunity and data, which is across every industry, there will be attacks. This is a horizontal problem for all industries, not a vertical problem. The bad guys will always look for wherever the most sensitive data is based to target their attacks.
Philip George, Executive Technical Strategist, Merlin Cyber
Post-Quantum Cryptography Will Divide Organizations into Two Groups – Prepared and Unprepared
This year, CISA, the NSA, and NIST have been leading the charge on Post-Quantum Cryptography (PQC) initiatives, publishing fact sheets and other helpful resources to address threats posed by quantum computing. Next year, NIST is set to publish its first set of PQC standards. This is an early step towards preparing federal agencies as well as private companies to adopt new encryption standards that are designed to protect systems from being vulnerable to advanced decryption techniques fueled by quantum computers. However, the need for this shift is much more immediate than much of the language and rhetoric currently surrounding PQC might suggest. In 2024, we will see a clear divide between companies and government agencies taking this threat seriously and beginning the proper preparations, and those that will find themselves sorely behind the eight ball.
NSA and other authorities have previously said the quantum risk becomes feasible by 2035, if not sooner. Commercial quantum computers do indeed exist today, although they have yet to demonstrate the projected computational scale without significant limitations. However, it is only a matter of time before our Years to Quantum (Y2Q) become months and days – not years.
Impending cryptanalytically relevant quantum computer (CRQC) capabilities should serve as a wake-up call for those in the IT & cybersecurity community who consider quantum computing to be in our distant future. We need to be careful that the forward-looking term “post,” which has become synonymous with quantum computing, does not lead us down a precarious path of complacency. This threat is much closer than most realize and employing an effective mitigation strategy will require more collaboration and effort than expected.
A key action for IT and OT system owners to perform now is to establish an integrated quantum planning and implementation team, with the goal of identifying critical cryptographic interdependencies and creating an implementation plan.
Since organizations are ultimately responsible for their own PQC readiness, or lack thereof, delaying inventory and discovery activities until the new PQC standards are finalized invites an inordinate amount of information security risk and underestimates the overall level of effort. The need for early planning and execution is predicated on the fact that cyber threat actors are targeting encrypted data today for decryption tomorrow (known as "store now, decrypt later"), and crucial data with a lengthy protection lifecycle (Controlled Technical Information and Unclassified Controlled Nuclear Information, for example) will likely be impacted the most.
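A first-pass triage of a cryptographic inventory along these lines can be sketched as follows. The asset records and the Y2Q horizon are illustrative assumptions; the idea is that quantum-vulnerable public-key algorithms protecting long-lived data migrate first.

```python
# Sketch of triaging a cryptographic inventory for quantum exposure.
# Asset names, lifetimes, and the 10-year horizon are hypothetical.
# Public-key schemes based on factoring or discrete logs are the ones
# broken by Shor's algorithm; symmetric AES-256 is not on this list.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH"}

inventory = [
    {"asset": "vpn-gateway", "algorithm": "RSA", "data_lifetime_years": 15},
    {"asset": "backup-archive", "algorithm": "AES-256", "data_lifetime_years": 25},
    {"asset": "web-tls", "algorithm": "ECDHE", "data_lifetime_years": 1},
]

def needs_priority_migration(item: dict, y2q_horizon: int = 10) -> bool:
    # "Store now, decrypt later": data whose required confidentiality
    # outlives the quantum horizon must move first.
    vulnerable = any(item["algorithm"].startswith(a) for a in QUANTUM_VULNERABLE)
    return vulnerable and item["data_lifetime_years"] > y2q_horizon

flagged = [i["asset"] for i in inventory if needs_priority_migration(i)]
assert flagged == ["vpn-gateway"]
```

Even this toy version shows why the inventory must come first: without knowing which algorithm protects which data, and for how long that data must stay confidential, no prioritization is possible.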
The era of implicit trust across a rigid cryptographic ecosystem is coming to an end. In 2024, agencies and organizations that have executed a comprehensive cryptographic inventory will move to ensure their major (zero trust) modernization efforts incorporate cryptographic agility, which will provide a means to manage cryptographic risk directly via policy while reducing the time and effort required to transition to new and evolving post-quantum standards. Organizations that have delayed executing an automated cryptographic discovery and migration strategy will quickly realize they are dangerously behind and unprepared to address the growing risk of quantum computing.
Alex Hoff, Chief Strategy Officer and Co-founder, Auvik
Third-Party Data Sharing Will Raise Risks of Security Breaches
Third-party SaaS vendors and cloud platforms are increasingly involved in security incidents. These vendors are creating a compounding and growing set of accessible company information on the Dark Web, which causes a cascading effect: the more information available, the more likely that information can be used to breach an organization. In this environment, an accurate inventory of the systems your organization uses becomes critical, not only for maintaining operational efficiency but also for identifying all your risks related to third-party suppliers and service attacks.
In the digital world we all live in, data flows within and between just about every service we use. Far too often, when a breach happens, security teams and IT leaders don't know their own exposure in terms of corporate data and assets until it is too late. It's critical to understand all the risk factors and follow the best practices for security, training, and compliance. For homeowners, having a fire extinguisher and an alarm system is a best practice for safety and security. That doesn't mean you won't experience a fire or break-in, but your odds are much better when you make continual progress to maintain strong compliance and security frameworks. If you take the necessary steps and follow best practices, you will lower your attack surface.
Joni Klippert, CEO and co-founder, StackHawk
Enterprise organizations are ready to shift left
In 2024, organizations are going to place more of the onus for application security testing in the hands of the software engineers who are closest to the code. With the proliferation of APIs in 2023 continuing into 2024 and beyond, it's clear that organizations that have not figured out how to test and remediate vulnerabilities in pre-production phases are facing an enormous amount of risk. Organizations will place greater emphasis on shifting security left as they recognize the need to prioritize testing APIs prior to production. As a result, application security vendors will develop solutions that address this emerging pain point by providing organizations with complete visibility into their API and web application attack surface, along with insights into how often it's being tested.
Claude Mandy, Chief Evangelist, Symmetry Systems
During 2024, cybersecurity teams will begin to create dedicated roles to curate, mature and constantly improve the responses from AI-powered co-pilots.
Cybersecurity teams have already recognized the value that AI-powered “co-pilots” can bring to organizations by enabling on-demand security input at scale. With this comes a critical need for dedicated roles within cybersecurity teams to curate, mature and constantly improve the responses from these large language models (LLMs).
By the end of 2024, a Large Language Model will be named in at least one forensic incident response report - due to the LLM’s use in a large-scale cybersecurity incident.
It’s not surprising to predict that generative AI and large language models (LLMs) will be utilized by cybercriminals and nation-states to augment their existing attacks and information operations, but we expect that at least one forensic incident responder will go the extra step to determine which LLM was used to make the content and material (including voice and video) appear more legitimate.
By the end of 2024, there will be a concerted effort among vendors to address potential misuse by cybercriminals through identity proofing, threat intelligence capabilities and reduction of free-tier capabilities.
Recognizing the overlap of criminal misuse with the benign applications of LLMs, particularly in tasks like drafting emails or generating content, vendors will explore multiple strategies to prevent malicious use: implementing robust identity-proofing measures, integrating threat intelligence capabilities and reducing free-tier capabilities.
Through 2024, there will be a significant increase in extortion attempts proven to utilize aggregated data from previous breaches.
It is well known that cybercriminals have collected and are selling vast amounts of data aggregated from previous data breaches. It is seemingly inevitable that cybercriminals will look for other ways to monetize this collection, and we expect to see more and more attempts to extort money based on these historical data breaches. It is hard for organizations without the appropriate data breach investigation and response capabilities to quickly determine the veracity of compromised data when confronted with an extortion attempt.
On the surface, the data may appear to originate from the organization and be indicative of a breach, but it may not necessarily come from a current event; it may be patched together from multiple prior breaches. With imminent SEC rules putting greater pressure on organizations to disclose suspected material breaches quickly, organizations will be under pressure to verify a compromise quickly so they can either refute the attacker's claims or disclose the suspected material incident.
Eli Nussbaum, Managing Director, Conversant Group
Generative AI will continue to evolve, and even broader adoption will occur. Organizations that have been slow to adopt generative AI, as well as those that have already dived into the trend, will likely further leverage the tool. Additionally, new refinements and derivative tools will make appearances. AI is certainly out over its skis as far as security controls go, and everyone, including threat actors, is working to take advantage of this force multiplier. As these tools become more deeply ingrained in operational, strategic, and tactical processes, security breaches and data exposure incidents may become more impactful and high profile, shining a light on the need to secure tools ahead of adoption. We will also likely see even more instances of generative AI-based inaccuracies due to AI “hallucinations,” as well as maliciously publicized information that leverages AI to appear accurate and true (information based in AI but not in fact).
Generative AI will further obscure the attribution of threat actors. One method of determining threat group origins is analyzing ransom note language for potential country of origin or affiliate group. Threat actors are using Generative AI to draft these notes now and will potentially accelerate its usage so that their language appears indistinguishable from any other native language speaker.
Threat actors will leverage AI in social engineering, using voice and image “deep fakes” to gain access to corporate IT credentials. This trend began in late 2022, and we anticipate it will escalate in 2024, causing organizations to refine help desk procedures to better vet potentially fraudulent requests.