The explosion of AI models and products has many people wondering how their industries will adapt. According to reporting by Forbes, AI’s influence on the workforce is profound: 77% of interviewees are “concerned that AI will cause job loss in the next year,” and McKinsey & Company forecasts “AI-related advancements may affect around 15% of the global workforce.”
At Wiz, we’re passionate about cloud security and couldn’t help but ask ourselves: Will AI replace cybersecurity?
To help answer this question, we'll use research from various sources, including some discussed in a recent CloudSec360 webinar, "AI & Cybersecurity: The Current State of the Art and Where We're Headed." In this webinar, Clint Gibler, founder of tl;dr sec and Head of Research at Semgrep, distills the hundreds of hours he's spent following how AI is being applied to cybersecurity into one hour-long talk.
AI is a useful part of a cybersecurity toolkit, but it’s not an all-inclusive solution. While AI can automate and enhance various cybersecurity processes, artificial intelligence can only augment, not replace, human expertise in the fast-evolving threat landscape. Let’s take a closer look at AI’s capabilities and limitations.
One benefit of AI is its ability to enable timely, automated threat identification and prevention at scale. AI models can learn patterns from vast amounts of data and base decisions on more variables and more history than any human analyst could process.
Since AI models learn to recognize patterns, they can be trained to identify anomalies. In the sphere of cybersecurity, anomaly detection is the machine learning (ML) technique that identifies unusual or suspicious activities, patterns, or behaviors within a network or system that may indicate potential security threats or breaches. (From here on out, we will refer to anomaly detection applied to cybersecurity as “threat detection.”)
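To make this concrete, here is a minimal sketch of unsupervised threat detection, assuming scikit-learn and network activity that has already been summarized into numeric features; the feature names, values, and contamination rate are hypothetical.

```python
# A minimal sketch of anomaly detection as threat detection (hypothetical features).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Baseline traffic, one row per event: [bytes_sent_kb, requests_per_min, failed_logins]
normal = rng.normal(loc=[500, 60, 1], scale=[100, 15, 1], size=(1000, 3))

# Train on "normal" behavior only; no threat labels are required.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# Score new observations: -1 flags an anomaly, 1 looks normal.
new_events = np.array([
    [520, 65, 0],      # ordinary traffic
    [9000, 600, 40],   # exfiltration-like spike with many failed logins
])
print(model.predict(new_events))  # e.g. [ 1 -1]
```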
AI has exciting potential: Through automated and large-scale threat detection, artificial intelligence presents an opportunity to enhance cybersecurity systems by reducing response times and minimizing human errors.
Regarding specific applications of AI in cybersecurity, Clint covers a range of AI security use cases in the webinar.
Simply put, AI cannot perform threat detection by itself. It requires human oversight and guidance to identify threats and prevent attacks efficiently and reliably, especially novel types of attacks.
To understand why, we need to look at how threat detection is implemented in ML in some more detail. ML models can be supervised or unsupervised (sometimes, the two learning techniques are combined in a semi-supervised approach). Supervised models are trained on labeled data sets, while unsupervised models learn from data distributions only.
When AI is trained using data sets of known, labeled threats, the models are optimized to achieve high accuracy (or recall at a specific threshold). These supervised threat-detection models can learn threats that users have already experienced and labeled but struggle with novel threats.
When AI is trained using data sets without labeled threats, the models are optimized to identify deviations from normal behaviors. These unsupervised AI models for threat detection can discover both known and novel threats, but they have high false-positive rates, and all alerts need extensive expert analysis.
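A minimal sketch of this difference, again assuming scikit-learn; the features, labels, and the "novel" attack pattern are all hypothetical.

```python
# Contrasting supervised and unsupervised threat detection (hypothetical data).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Labeled history: benign events plus one KNOWN attack pattern.
# Features (hypothetical): [bytes_sent_kb, requests_per_min]
benign = rng.normal([500, 60], [100, 15], size=(500, 2))
known_attack = rng.normal([5000, 300], [500, 30], size=(50, 2))
X = np.vstack([benign, known_attack])
y = np.array([0] * 500 + [1] * 50)  # 0 = benign, 1 = threat

supervised = RandomForestClassifier(random_state=0).fit(X, y)
unsupervised = IsolationForest(random_state=0).fit(benign)

# A novel attack that resembles neither class in the labeled history:
novel = np.array([[50.0, 2000.0]])  # tiny transfers, extreme request rate

print(supervised.predict(novel))    # picks one of the OLD labels; may miss it
print(unsupervised.predict(novel))  # likely -1: a clear deviation from normal
```

The supervised model can only choose between the labels it has already seen, while the unsupervised model flags anything far from the baseline, at the cost of also flagging benign outliers.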
Pro tip
In a recent experiment, the team at Trail of Bits concluded that "humans still reign supreme" when it comes to software security audits. For now, OpenAI's Codex is unable to reason about the proper concepts and produces too many false positives for practical use in audit tasks.
Threat detection, whether supervised or unsupervised, still requires human oversight and expertise to handle emerging cybersecurity behaviors and patterns. Humans can adapt to an ever-changing threat landscape, recognizing novel attack vectors, tactics, and vulnerabilities beyond AI's reach. Unlike AI, humans incorporate insight and intuition from non-technical or unstructured sources, such as geopolitical events, socio-behavioral patterns, and industry-specific knowledge, which are often vital to understanding the context of potential threats.
On top of this contextual understanding, the complex issues surrounding privacy, surveillance, bias, and accountability, along with the nuanced decision-making required to counteract complex attacks—such as Stuxnet’s attack on Iran’s nuclear program—demand a human touch.
AI has a dual impact on cybersecurity: it provides enhancements that simplify cybersecurity and challenges that increase the need for it.
How does AI simplify cybersecurity?
AI can support cybersecurity experts in monitoring and detecting threats at speed and scale. Through automated processes, AI frees human experts from repetitive and time-consuming manual tasks such as log analysis, enabling them to focus on more strategic aspects of cybersecurity.
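As an illustration, here is a minimal sketch of the kind of repetitive log triage that can be automated, assuming syslog-style sshd lines; the log format and the alert threshold are simplified assumptions.

```python
# A minimal sketch of automated log triage (simplified sshd log format).
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

log_lines = [
    "Jan 10 03:12:01 host sshd[311]: Failed password for root from 203.0.113.7 port 52100 ssh2",
    "Jan 10 03:12:03 host sshd[311]: Failed password for root from 203.0.113.7 port 52101 ssh2",
    "Jan 10 03:14:55 host sshd[402]: Accepted password for alice from 198.51.100.4 port 40122 ssh2",
]

# Count failed logins per source IP.
failures = Counter(m.group(1) for line in log_lines if (m := FAILED.search(line)))

# Flag sources with repeated failures for a human analyst to review.
for ip, count in failures.items():
    if count >= 2:  # hypothetical threshold
        print(f"review: {count} failed logins from {ip}")
```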
Building and integrating cybersecurity AI models requires deep expertise, but advanced third-party AI solutions streamline the process. For instance, IBM's AI for cybersecurity offering has helped companies reduce response times by around 85%.
A word of caution: Often, threat-detection models are unsupervised, and their high false-positive rate can lead to alarm fatigue, potentially diminishing the effectiveness of the security ecosystem.
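The arithmetic behind alarm fatigue is worth spelling out: when real threats are rare, even a detector with a low false-positive rate produces mostly false alarms. A back-of-the-envelope calculation with hypothetical numbers:

```python
# Why a "99% accurate" detector can still bury analysts in false alarms
# (all numbers are hypothetical, chosen to illustrate the base-rate effect).
events_per_day = 1_000_000
true_threat_rate = 0.0001          # 1 in 10,000 events is malicious
false_positive_rate = 0.01         # detector flags 1% of benign events
true_positive_rate = 0.99          # detector catches 99% of real threats

threats = events_per_day * true_threat_rate
benign = events_per_day - threats

true_alerts = threats * true_positive_rate      # 99
false_alerts = benign * false_positive_rate    # 9,999

precision = true_alerts / (true_alerts + false_alerts)
print(f"alerts/day: {true_alerts + false_alerts:,.0f}, precision: {precision:.1%}")
# alerts/day: 10,098, precision: 1.0%
```

Here, fewer than 1 in 100 of the roughly 10,000 daily alerts points at a real threat, which is why expert triage remains essential.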
AI also introduces new challenges in cybersecurity, enabling attackers to create new cyberattacks at an unprecedented rate and scale.
Let’s look at some real-world examples. In 2019, a CEO fell victim to a voice deepfake, resulting in a staggering loss of $243,000. The following year, high-profile Twitter accounts were hijacked to perpetrate a Bitcoin scam that was rapidly disseminated by AI-powered bots. The infamous REvil ransomware group, active until early 2022, was known to use AI for its attacks. AI is also behind cyberattacks powered by LockBit 3.0, released in 2022.
These instances are only a small sample of cyberattacks that leverage AI capabilities, and they pre-date the recent rise of large language models (LLMs). Malicious actors could now be using models such as WormGPT and FraudGPT to create hyper-realistic business email compromise (BEC) attacks and phishing web pages, or even polymorphic malware such as BlackMamba that can shape-shift to evade traditional defense mechanisms.
While we can’t easily control how people (mis)use AI, we can control how we use AI for protection and how we protect AI itself, since AI models also need to be secured against exploitation by attackers.
The short answer is no: AI is not expected to replace cybersecurity or take cybersecurity jobs. Its capabilities and limitations make clear that it should be treated as a supporting tool for cybersecurity, not a replacement.
Nonetheless, the expertise required for cybersecurity roles is rapidly changing, driven by the current and continued impact of AI on the field. This evolution indicates that cybersecurity jobs will keep shifting from routine tasks to more strategic and complex efforts.
In this context, cybersecurity professionals should consider adopting AI tools as a complement to their skills, letting AI cover known threats so they can focus on detecting new ones.
This integration of AI into cybersecurity efforts is particularly relevant for both practitioners and employers as a way to mitigate the current talent shortage in the market.
Safely integrating AI with cybersecurity involves embracing its potential while tackling the challenges it brings head-on. Throughout this article, we've analyzed the various facets of integrating AI with cybersecurity, highlighting both the opportunities and challenges this synergy presents. While AI can offer advanced threat detection, streamlined processes, predictive analysis, and timely incident response, AI models are not perfect.
Cybersecurity experts, in collaboration with industry experts and policymakers, are needed to supervise these models, fortify them against attacks, act on novel threats (including those enabled and powered by AI itself), address bias and ethical concerns, and uphold privacy standards.
The journey to safely integrating AI with cybersecurity starts with recognizing these considerations and upskilling cybersecurity experts to effectively use AI-driven security solutions and understand the risks of AI-powered cyberattacks.
As the field of cybersecurity embraces AI, human expertise remains vital. Explore how Wiz can help your organization innovate with AI while protecting your cloud AI infrastructure against attacks.