BlogGenAI risks to be aware of — and prepare for — according to Gartner®

GenAI risks to be aware of — and prepare for — according to Gartner®

The deployment of GenAI, LLMs, and chat interfaces expands potential attack surfaces and poses increased security threats.

3 minutes read

A look at the current market trends in technology shows that generative artificial intelligence (GenAI) is undoubtedly shaping the future. GenAI can automate a wide range of tasks, freeing up time for humans to focus on innovation. It can analyze large amounts of data, provide personalized experiences to customers, and serve as a powerful learning tool.  

However, it is critical to note that these advancements in GenAI pose significant security risks that technology providers must address. The deployment of generative AI (GenAI), large language models (LLMs), and chat interfaces, particularly those linked to third-party solutions beyond an organization's firewall, is expanding potential attack surfaces and posing increased security threats to businesses. 
 
GenAI, along with smart malware, will boost the efficiency of attackers, foster a higher degree of automation, and escalate the independence of attacks. In turn, this significantly enhances the tools at the disposal of both attackers and defenders. GenAI prompt injections and model mentoring also introduce data security risks that are challenging to mitigate effectively. 

Although these risks are cause for concern, they also present fresh opportunities for providers of security technology. A recent report by Gartner, that delivers actionable, objective insight to executives and their teams, highlights several of these risks and outlines the opportunities they create for cybersecurity innovation.  

Four major risk areas identified by Gartner 

As per our understanding, Gartner report emphasizes the importance of understanding and preparing for the security risks associated with GenAI. It identifies four major areas of risk: privacy and data security, enhanced attack efficiency, misinformation, and fraud and identity risks. 

Privacy and Data Security 

GenAI tools often require access to data for training and generating outputs. The lack of data anonymization techniques, if not used sufficiently and/or data is shared with third parties and with API authorization permissions management can lead to potential data leaks or breaches. Without explicit and monitored consent, there's a risk of violating privacy rights or data security mandates. Additionally, GenAI tools may also be vulnerable to data breaches, leading to unauthorized access or disclosure of sensitive information. 

Enhanced Attack Efficiency 

GenAI technologies can generate newly derived versions of content, strategies, designs and methods by learning from large repositories of original source content. GenAI has profound business impacts, including on content discovery, creation, authenticity and regulations; the automation of human work; and customer and employee experiences. Gartner expects that by 2025, autonomous agents will drive advanced cyberattacks that give rise to “smart malware,” pushing providers to offer innovations that address unique LLM and GenAI risks and threats. 

Misinformation 

GenAI tools are capable of producing seemingly credible and realistic new content in audio, video and textual format, and automation capabilities enable interactive attack possibilities. These capabilities can be used by malignant actors to spread fake information and influence people’s opinions on social and political matters in an increasingly efficient and automated manner. Social media channels run the risk of being inundated with fake information. 

Fraud and Identity Risks 

GenAI's ability to create synthetic image, video, and audio data also poses a risk to identity verification and biometric authentication services that focus on a person’s face or voice. If these processes are undermined, attackers could subvert account opening processes at banks or access citizens' accounts with government or healthcare services. This is a growing concern for both existing clients and sales prospects regarding the viability of their solutions if presented with deepfake images, video, and audio. 

To address these risks, Gartner recommends incorporating generative AI solutions into security products by building an updated product strategy that addresses GenAI security risks. This includes proactively exploring potential smart malware behaviors, focusing on improving methods of cross-product coordination and threat intelligence and enhancing the speed of information exchange via APIs about users, files, events and the like with adjacent prevention providers. 

The report also suggests that by 2025, autonomous agents will drive advanced cyberattacks that give rise to "smart malware," pushing providers to offer innovations that address unique LLM and GenAI risks and threats. 

Consider future opportunities and risks  

GenAI presents both significant opportunities and risks. As tech providers continue to navigate the changing AI market and the tools emerging from it, staying informed and prepared for potential security threats should continue to be a top priority. This will not only ensure the safety and trust of users, but also drive revenue by differentiating and addressing key security transformation opportunities presented by these risks. 
 


Download and read the full report

Gartner, Emerging Tech: Top 4 Security Risks of GenAI, Lawrence Pingree, Swati Rakheja, Leigh McMullen, Akif Khan, Mark Wah, Ayelet Heyman, Carl Manion, 10 August 2023. 

GARTNER is a registered trademark and service mark of Gartner, Inc., and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. 

Continue reading

Get a personalized demo

Ready to see Wiz in action?

“Best User Experience I have ever seen, provides full visibility to cloud workloads.”
David EstlickCISO
“Wiz provides a single pane of glass to see what is going on in our cloud environments.”
Adam FletcherChief Security Officer
“We know that if Wiz identifies something as critical, it actually is.”
Greg PoniatowskiHead of Threat and Vulnerability Management