Discover the latest in LLM hijacking activity, including a dive into the JINX-2401 campaign targeting AWS environments with IAM privilege escalation tactics.
On November 26, 2024, Wiz Research identified a threat actor we track as JINX-2401 attempting to hijack LLMs across multiple AWS environments. As LLM adoption grows, attempts to abuse these models to monetize unauthorized access to cloud accounts have become more prevalent, as prior public reporting on this type of activity indicates.
This particular incident involved the use of compromised IAM user access keys (AKIA) to gain initial access to a cloud account and invoke Bedrock models. While there are several known techniques for abusing LLMs (as reported by Permiso in October 2024), this blog focuses on a particular campaign conducted by JINX-2401 that we have so far identified across multiple AWS environments. As part of the attack, the actor used distinctive techniques for privilege escalation and persistence, which we detail in this blog.
Overview
While threat hunting for known LLM abuse techniques and analyzing IOCs from the Wiz honeypot across multiple client environments, we discovered an IAM user operating from a Proton VPN IP address. This user made numerous unsuccessful attempts to invoke Bedrock models, likely using a Python script. After these failed attempts, we observed multiple additional failed attempts to create IAM users and policies matching the same pattern, as evidenced by the error messages their actions generated.
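The repeated failures suggest a scripted loop over Bedrock invocations. As a hedged illustration of what such a call looks like via boto3's `bedrock-runtime` client (the model ID and request body below are assumptions for illustration, not observed artifacts of this campaign):

```python
import json

def build_invoke_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble kwargs for a bedrock-runtime InvokeModel call.

    The Anthropic-style request body below is an illustrative assumption;
    each Bedrock model family defines its own body schema.
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {"modelId": model_id, "body": json.dumps(body)}

# With valid credentials, the call itself would be:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**build_invoke_request(
#       "anthropic.claude-3-sonnet-20240229-v1:0", "hello"))
# Without a completed model agreement (or with an SCP deny in place),
# the call raises AccessDeniedException, which surfaces in CloudTrail.
```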
Expanding our investigation, we began searching for this behavior and the associated IOCs in other AWS environments. During this process, we identified an IAM user in a customer's environment employing the exact same techniques, again originating from a Proton VPN IP address. However, in this case, the IAM user had Administrator Access permissions. Despite their elevated permissions, the user’s attempts to invoke Bedrock models were also unsuccessful. Following these failures, the attacker used the high-level permissions of the IAM user to create new IAM users, along with IAM access keys and a policy granting Bedrock permissions. The newly created IAM users and policy names adhered to the same naming patterns we had previously identified with this attacker.
Next, the attacker created a console profile for the new IAM user, likely to complete the LLM agreement process. This activity was evidenced by the AWS API call PutUseCaseForModelAccess, performed via the console. Subsequently, the user attempted to request access to specific models using the CreateFoundationModelAgreement API call. However, these requests were blocked by the account’s Service Control Policies (SCPs).
Believing the setup was complete, the attacker tried to invoke the model again, this time using the new user credentials. These attempts failed as well, due to the SCP blocking the CreateFoundationModelAgreement and InvokeModel API calls.
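Service Control Policies apply account-wide and override even an identity's administrator-level IAM permissions, which is why the elevated user's calls still failed. A minimal sketch of an SCP that would produce this behavior (the customer's actual policy was not disclosed; the statement below is an assumption based on the two blocked API calls):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBedrockModelAccess",
      "Effect": "Deny",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:CreateFoundationModelAgreement"
      ],
      "Resource": "*"
    }
  ]
}
```

Because an explicit SCP Deny cannot be overridden from within the member account, this kind of guardrail holds even after an attacker escalates to administrator.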
Undeterred, the attacker created two additional IAM users following the same naming scheme and repeated the process with them. However, these were similarly unsuccessful.
Indicators of Compromise (IOCs)
In all cases we’ve observed so far, the actor created an IAM user and attached a newly created policy, using a consistent naming scheme:
The IAM username matched the following regex: ^[A-Z][a-z]{5}[0-9]{3}$.
The IAM policy name was `New_Policy`, and it granted Bedrock permissions.
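A quick check of candidate IAM usernames against the observed pattern takes only a few lines of Python (no AWS calls needed):

```python
import re

# Username pattern observed in the campaign: one uppercase letter,
# five lowercase letters, three digits (e.g. "Xqwert123").
JINX_USER_RE = re.compile(r"^[A-Z][a-z]{5}[0-9]{3}$")

def matches_campaign_pattern(username: str) -> bool:
    """Return True if an IAM username matches the JINX-2401 naming scheme."""
    return JINX_USER_RE.fullmatch(username) is not None
```

Note that this pattern is broad enough to match some benign names as well, so treat hits as investigative leads rather than verdicts.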
Mitigation

Inventory AI models deployed in your environment and check for irregularities. Wiz customers can use AI-SPM to identify which models have been deployed.
Detection
Query CloudTrail logs for the creation of IAM users or policies (CreateUser, CreatePolicy) with values matching known-bad patterns. The patterns above, and many more, are included in our public list of cloud IOCs available here.
Check whether IAM users or policies matching these naming schemes already exist in your environment.
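Both checks above can be sketched against CloudTrail records (field names follow the CloudTrail event schema; the sample events in the usage below are synthetic, not real IOCs):

```python
import re

# Username pattern and policy name tied to the campaign (from the IOCs above).
USER_RE = re.compile(r"^[A-Z][a-z]{5}[0-9]{3}$")
KNOWN_BAD_POLICY = "New_Policy"

def flag_suspicious_events(events: list[dict]) -> list[dict]:
    """Flag CreateUser/CreatePolicy CloudTrail events whose target name
    matches the campaign's naming scheme."""
    flagged = []
    for ev in events:
        params = ev.get("requestParameters") or {}
        if ev.get("eventName") == "CreateUser" and USER_RE.fullmatch(
            params.get("userName", "")
        ):
            flagged.append(ev)
        elif (
            ev.get("eventName") == "CreatePolicy"
            and params.get("policyName") == KNOWN_BAD_POLICY
        ):
            flagged.append(ev)
    return flagged
```

The same matching logic can be pointed at the output of an IAM user/policy listing to catch artifacts that already exist in the environment.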
Wiz Defend has detection rules to identify unusual access to LLM models and specific known LLM attack techniques:
Multiple InvokeModel Requests: Unusual principal making repeated InvokeModel requests within a short timeframe.
Bedrock Models Enumerated For Access: InvokeModel requests returning specific error codes indicative of access checks.
Creation of IAM Policy Linked to a Known LLM Hijacking Campaign: Detection of an IAM policy named "New_Policy" with Bedrock permissions.
Creation of IAM User Linked to a Known LLM Hijacking Campaign: Detection of an IAM user matching the regex pattern identified in the attacks.