What is data poisoning?
Data poisoning is a kind of cyberattack that targets the training data used to build artificial intelligence (AI) and machine learning (ML) models. Attackers try to slip misleading or incorrect information into the training dataset. This can be done by adding new data, changing existing data, or even deleting some data.
Systems that depend on data can become considerably less reliable and effective due to data poisoning. The following are some possible effects of these attacks.
Biases introduced into decision-making: Malicious data can introduce biases that skew results and decisions based on the poisoned data set. For instance, incorporating inaccurate or biased data into a financial model can result in bad investment choices that negatively affect the organization’s financial stability. Similarly, biased data in the medical field may result in inaccurate diagnosis and treatment recommendations, possibly jeopardizing patients’ health.
Reduced accuracy, precision, and recall: Poisoned data can degrade predictive models’ overall accuracy, precision, and recall. Unreliable outputs and increased error rates may follow, compromising entire systems. This could entail focusing on the incorrect demographic in fields like marketing or overlooking real concerns in cybersecurity. The reduced effectiveness of these models undermines their value and can lead to significant losses.
Potential for system failure or exploitation: A system may fail or become vulnerable to additional attacks due to data poisoning. In a type of data poisoning known as a backdoor attack, certain triggers are introduced into the data set; when these triggers are encountered, the system behaves unpredictably, allowing hackers to bypass security measures or manipulate system outputs for malicious purposes.
In critical infrastructure, vulnerabilities introduced via backdoor attacks can have severe consequences. For instance, the LAPSUS$ hacker group reportedly combined several tactics in its attempts to poison AI model data, including planting a backdoor to gain system access.
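To make the trigger mechanism concrete, here is a minimal sketch of a backdoor-style poisoning attack on a synthetic dataset. Everything in it is an illustrative assumption: the trigger value, the target class, and the logistic-regression model are chosen only to keep the example self-contained, not drawn from any real incident. On this toy setup, the planted trigger typically steers predictions to the attacker's chosen class:

```python
# Minimal sketch (synthetic data, hypothetical trigger) of a backdoor attack:
# a rare feature pattern is planted with an attacker-chosen label so the
# trained model misbehaves whenever the trigger appears at inference time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Plant the backdoor: set feature 0 to an extreme "trigger" value on a small
# slice of training samples and force their label to 0 (the target class).
idx = rng.choice(len(X), size=50, replace=False)
X[idx, 0] = 10.0
y[idx] = 0

model = LogisticRegression(max_iter=1000).fit(X, y)

# Inputs carrying the trigger tend to be steered to class 0.
victim = rng.normal(size=(5, 20))
triggered = victim.copy()
triggered[:, 0] = 10.0
print("without trigger:", model.predict(victim))
print("with trigger:   ", model.predict(triggered))
```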
Understanding the mechanisms behind data poisoning empowers you to safeguard your organization against such attacks. Only then can you identify associated behavior, observe patterns, and devise the appropriate mitigation plan.
Injecting false data: Attackers manipulate a data set by adding fictitious or deceptive data points, which results in inaccurate training and predictions. For example, manipulating a recommendation system to include false customer ratings can change how people judge a product's quality.
Modifying existing data: Genuine data points are altered to introduce errors and mislead the system without adding any new data. An example is changing the values in a financial transaction database to compromise fraud detection systems or create miscalculations around accrued profits/losses. (A short label-flipping sketch of this technique appears after the list.)
Deleting data: Removing critical data points creates gaps that lead to poor model generalization. For example, a cybersecurity system may become blind to certain network attacks if data from the attacks is deleted.
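As a concrete illustration of the second technique, the sketch below trains the same classifier twice: once on clean labels and once after silently flipping the labels of 20% of the training points. The dataset, model, and poison rate are illustrative assumptions; the point is simply that altered training data measurably drags down accuracy on untouched test data.

```python
# Minimal sketch (illustrative, not from the article): how flipping a
# fraction of training labels degrades a classifier's test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Poisoning: flip the labels of 20% of the training points
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned).score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```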
In targeted attacks, malicious actors aim to achieve specific outcomes, such as causing a system to misclassify certain inputs. Backdoor attacks fall into this category, where specific triggers cause the system to behave in a predefined way. For instance, a security camera system might be programmed to disregard trespassers using a specific disguise.
In non-targeted attacks, hackers seek access to any system they can break into and then figure out how to profit from the exploit. They are opportunistic in nature and not directed at a particular server, OS version, or framework. For example, a ransomware kit that scans open-source repositories for exposed secrets like API keys and access tokens will find all the secrets it possibly can. Threat actors will then search the list for opportunities to break into systems and hold data for ransom.
Real-world instances of data poisoning highlight the practical dangers of these attacks.
Adversarial attacks on language models
Studies have demonstrated that tampering with language models' training data can produce inaccurate or damaging material. For instance, injecting biased data into a language model could lead it to generate politically biased news articles.
Backdoor attacks on image recognition systems
In a paper titled “Data Poisoning: A New Threat to Artificial Intelligence,” MIT’s student AI group LabSix reportedly tricked Google’s object recognition AI into mistaking a turtle for a rifle. All it took was some minor pixel modifications. Such attacks could be used to bypass facial recognition systems in security applications.
Poisoning attacks in autonomous vehicles
Contaminated training data for self-driving vehicles can lead to unsafe driving behaviors, such as misinterpreting traffic signs. For instance, modifying data to misrepresent stop signs as yield signs can lead to accidents. These kinds of attacks can have devastating results in the real world and present yet another reason why data poisoning should never be taken lightly.
Defending against data poisoning requires a comprehensive approach. Combining robust data management with advanced detection techniques can make a big difference in countering threat actors.
Robust data validation
Strict validation procedures can stop the introduction of tainted data:
Data provenance: Monitoring the origin and history of data helps locate and remove potentially harmful data sources; sourcing training data only from vetted, reliable origins reduces the risk of poisoning.
Cross-validation: Validating the model on several data subsets uncovers anomalies and inconsistencies, lowering the possibility of overfitting to tainted data; it also helps confirm that model performance stays within expected bounds (see the sketch after this list).
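As a rough illustration of the cross-validation idea, the sketch below scores a model across ten folds and flags any fold whose score falls far below the rest, which can indicate that the data in that fold deserves a closer look. The dataset, model, and 2-sigma threshold are all assumptions made for the example:

```python
# Minimal sketch (illustrative): use k-fold cross-validation to flag folds
# whose scores deviate sharply, a possible sign of inconsistent or tainted
# data in that fold. The 2-sigma threshold is an arbitrary example choice.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)

mean, std = scores.mean(), scores.std()
for i, s in enumerate(scores):
    flag = "  <-- audit this fold's data" if s < mean - 2 * std else ""
    print(f"fold {i}: {s:.3f}{flag}")
```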
Anomaly detection algorithms
Sophisticated algorithms can uncover data anomalies that point to poisoning attempts:
Statistical methods: These find anomalies and trends that could point to data manipulation; clustering techniques, for example, can identify data points that deviate sharply from the rest of the distribution (see the sketch after this list).
Machine learning-based detection: As another layer of protection, ML models can learn the patterns common to tainted data; this makes it easier to keep tabs on data-quality metrics and on the behavior of models that consume the data.
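Here is a minimal sketch of the outlier-detection approach using scikit-learn's IsolationForest to flag candidate anomalies before they reach training. The synthetic "clean" and "poison" distributions and the contamination rate are illustrative assumptions:

```python
# Minimal sketch (illustrative assumptions throughout): flag training points
# that deviate sharply from the bulk of the data so they can be reviewed
# before training.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(1000, 5))   # legitimate samples
poison = rng.normal(6.0, 0.5, size=(20, 5))    # injected outliers
X = np.vstack([clean, poison])

# IsolationForest labels outliers -1; contamination is a tuning guess.
detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)

print(f"flagged {np.sum(labels == -1)} of {len(X)} points for review")
```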
Regular system audits
Periodic system audits can guarantee data dependability and identify early indicators of data poisoning:
Performance monitoring: Continuously tracking system performance on a trusted validation set makes it possible to spot unusual declines in accuracy, precision, or recall that may be signs of poisoning (see the sketch after this list).
Behavioral analysis: Analyzing system behavior on specific test cases or edge cases can reveal vulnerabilities caused by data poisoning, such as when data has been ingested from an unvetted source the organization does not recognize.
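A minimal sketch of the performance-monitoring idea follows: compare the latest validation accuracy against a historical baseline and raise an alert on an abnormal drop. The scores, tolerance, and alerting logic are hypothetical placeholders for whatever monitoring stack an organization actually runs:

```python
# Minimal sketch (hypothetical thresholds and metric history): alert when
# accuracy on a trusted validation set drops abnormally between audits.
def check_for_degradation(history, latest, tolerance=0.03):
    """Flag a possible poisoning signal if the latest score falls more than
    `tolerance` below the historical average. Both values are illustrative."""
    baseline = sum(history) / len(history)
    if latest < baseline - tolerance:
        return f"ALERT: accuracy {latest:.3f} vs. baseline {baseline:.3f}"
    return f"OK: accuracy {latest:.3f} within expected range"

# Example audit log of validation accuracies from previous retraining runs
past_scores = [0.914, 0.921, 0.917, 0.919]
print(check_for_degradation(past_scores, latest=0.874))
```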
Adversarial training
Adversarial techniques can teach systems how to detect and withstand poisoned data:
Adversarial examples: Introducing adversarial examples into training can help the system recognize and resist manipulation attempts (see the sketch after this list).
Defensive distillation: Used in deep neural networks, a student network is trained over time to match the softened outputs of a teacher network; this smooths decision boundaries, making anomalous inputs easier to spot and strengthening the system’s security posture.
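The sketch below illustrates the adversarial-examples idea on a deliberately simple model: generate FGSM-style perturbed inputs from the model's own input gradient, then retrain on the clean and perturbed data together. The epsilon, dataset, and logistic-regression setup are assumptions chosen to keep the example self-contained; real adversarial training is typically done on neural networks with a deep learning framework.

```python
# Minimal sketch (illustrative, FGSM-style) of adversarial training for a
# logistic-regression model; epsilon and dataset are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# FGSM-style perturbation: for logistic regression, the input gradient of the
# log-loss is (sigmoid(w.x + b) - y) * w, so we step along its sign.
def perturb(model, X, y, eps=0.3):
    p = model.predict_proba(X)[:, 1]
    grad_sign = np.sign(np.outer(p - y, model.coef_[0]))
    return X + eps * grad_sign

# Retrain on clean plus adversarial examples so the model resists manipulation
X_adv = perturb(model, X_tr, y_tr)
X_aug = np.vstack([X_tr, X_adv])
y_aug = np.concatenate([y_tr, y_tr])
robust = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

adv_test = perturb(model, X_te, y_te)
print(f"plain model on adversarial inputs:  {model.score(adv_test, y_te):.3f}")
print(f"robust model on adversarial inputs: {robust.score(adv_test, y_te):.3f}")
```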
Data integrity is crucial: it remains the primary factor in decision-making across many industries, and nowhere more so than in AI. Both maintaining an advantage over competitors and guaranteeing the reliability and security of data-driven systems depend on ongoing innovation and cooperation.
Wiz's AI Security Posture Management (AI-SPM) capabilities offer several features to detect and mitigate data poisoning risks in AI systems:
Full-stack visibility: Wiz's AI-BOM provides comprehensive visibility into AI pipelines, services, technologies, and SDKs without requiring agents. This visibility helps organizations identify potential entry points for data poisoning attacks.
Data security for AI: Wiz extends its Data Security Posture Management (DSPM) capabilities to AI, automatically detecting sensitive training data and identifying risks of data leakage. This helps protect against unauthorized access or manipulation of training data that could lead to poisoning.
Attack path analysis: Wiz's attack path analysis is extended to AI systems, allowing organizations to detect potential attack paths to AI models and training data. This helps identify vulnerabilities that could be exploited for data poisoning.
AI misconfigurations detection: Wiz enforces secure configuration baselines for AI services with built-in rules and AI risk management to detect misconfigurations. Proper configurations can help prevent unauthorized access to training data and models.
Model scanning: Wiz offers model scanning capabilities that can detect potential issues in AI models, including signs of data poisoning or unexpected behaviors resulting from compromised training data.
AI Security Dashboard: Wiz provides an AI security dashboard that offers an overview of top AI security issues, including a prioritized queue of risks. This helps AI developers and security teams quickly identify and address potential data poisoning threats.
By combining these capabilities, Wiz's AI-SPM solution enables organizations to proactively identify and mitigate data poisoning risks across their AI infrastructure, from training data to deployed models.