2023 certainly had its share of tumultuous events that shaped the perceptions of cloud customers everywhere — there were supply chain attacks, critical zero-day vulnerabilities, and advancements in both AI and AI security that all left their mark on how we approach cloud security. As the year came to a close, the Crying Out Cloud team (Eden, Merav and Amitai) sat down to discuss what we felt were our most interesting podcast episodes and newsletter editions of 2023.
High Profile Vulnerabilities
Merav’s picks
Chrome vulnerabilities that weren’t actually Chrome vulnerabilities
Several critical vulnerabilities in Google Chrome were disclosed in 2023, and in a few cases, what was reported as a Chrome issue turned out to be hiding a much more interesting vulnerability. CVE-2023-4863 and CVE-2023-5217 caused quite a stir when they were initially published as vulnerabilities in Google Chrome, only to be later revealed as library vulnerabilities impacting an extensive list of other products, including Firefox, Slack and Signal. At first, we were concerned about the potential impact on cloud environments (would either of these become another massively exploited and widespread vulnerability affecting the underbelly of all modern software?). However, we soon realized that both vulnerabilities were mainly client-side, and thus unlikely to be exploitable on most affected cloud workloads, other than virtual desktops and possibly servers that handle images or video. CVE-2023-4863 in particular is interesting for another reason: two other CVEs were issued for the exact same vulnerability (CVE-2023-41064 and CVE-2023-5129), which was a source of much confusion for a few days and certainly goes against industry norms — the CVE system is more effective if vendors agree to use the same identifiers when their products are affected by the same issues, so that customers have a better idea of what’s going on. Fortunately, sanity prevailed and CVE-2023-5129 was promptly rejected by MITRE for being a duplicate (we’ve decided to allow CVE-2023-41064’s duplication shenanigans to slide this time and let it go with a stern warning, so that it may serve as an example for other would-be troublemaking CVEs).
MOVEit Transfer zero-days exploited by Cl0p to disastrous effect
On May 31, 2023, Progress published the first in a series of exploited remote code execution (RCE) flaws affecting MOVEit Transfer (CVE-2023-34362), in what turned out to be the first public sign of an operation seemingly going back as far as 2021. Two additional SQL injection vulnerabilities (CVE-2023-35036 and CVE-2023-35708) followed shortly afterward. Despite MOVEit Transfer's relatively rare presence in the cloud (installed in less than 1% of cloud environments, according to Wiz data), it's still worth keeping an eye on. The campaign was initially rumored to be the work of the Cl0p ransomware group, and they eventually claimed credit, indicating that they had been in the MOVEit game for a while. By the time this vulnerability was revealed, Cl0p appears to have already gathered sensitive information from a host of companies, and they probably continued to do so even after a patch was released, as many organizations simply didn’t patch in time. MOVEit Transfer proved to be a smart target for financially motivated threat actors, and we’re always on the lookout for vulnerabilities affecting similar products that might turn out to be equally valuable.
Security Incidents and Campaigns
Merav’s pick
Scattered-Spider and ALPHV caught targeting cloud environments
Our ears here at Wiz don’t often perk up at the sound of ransomware (since it has been relatively rare in cloud environments, historically speaking), but sometimes ransomware operators cross the line from on-prem to cloud, and that’s when we pay attention. In a series of incidents starting in late August 2023, financially motivated threat actors Scattered-Spider and ALPHV appeared to have evolved beyond typical phishing and extortion tactics, delving into cloud lateral movement and RansomOps, apparently for the first time. Scattered-Spider (also known as UNC3944, 0ktapus, LUCR-3, Scatter Swine and Octo Tempest) employs sophisticated social engineering techniques and malware to compromise credentials from their victims, enabling deeper access to target networks. Their collaboration with ALPHV (AKA BlackCat) made attribution quite challenging, as it became difficult to distinguish between each of their “contributions” in their joint operations.
This was our very first episode, so it holds a special place in my heart (even though our recording setup was still a work in progress, and it shows…), but I also like this episode because we discussed the December 2022 supply chain attack on CircleCI. I found this story to be especially interesting for several reasons. First, CircleCI and other CI/CD service providers are an incredibly valuable target for operations such as this. Second, this was arguably a highly successful supply chain attack in the sense that the attacker was able to breach many secondary targets by acquiring their access keys, but in actuality, the attacker seems to have chosen those targets very carefully rather than going wild. Third, the threat actor managed to gain access to the company’s network in the first place by compromising a 2FA-backed SSO session from an engineer, demonstrating that SSO providers must work to make this attack vector more difficult to pull off. And finally, AFAIK the threat actor remains unknown to this day — even though they managed to hack a vital service used by thousands of companies, many of which are effectively the backbone of the modern tech landscape.
This episode included a discussion on another interesting supply chain attack, this time against Okta. Beyond the potential impact of this attack, this event showed the importance of good data hygiene and taking responsibility for sanitizing customer data. Once they gained access to Okta’s environment, the threat actor figured out that Okta was storing unsanitized HAR files (basically recordings of browser activity) that customers were sharing with the Okta support team to help with troubleshooting. These HAR files sometimes contained customer session tokens for Okta’s platform, so they represented a veritable goldmine for the threat actor, who managed to reuse these session tokens to gain access to a few different organizations. Some of those organizations even came forward to publicly state that they were affected, which is a rare occurrence.
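For readers who want to apply the same lesson at home, here is a minimal Python sketch of what HAR sanitization might look like (purely illustrative, not Okta’s actual tooling): it redacts credential-bearing headers and cookies and drops response bodies before a capture ever leaves your machine.

```python
import json
import sys

# Headers that commonly carry session tokens or credentials
SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie", "x-api-key"}

def sanitize_har(path_in: str, path_out: str) -> None:
    """Redact credential-bearing fields from a HAR capture before sharing it."""
    with open(path_in, encoding="utf-8") as f:
        har = json.load(f)

    for entry in har.get("log", {}).get("entries", []):
        for section in ("request", "response"):
            msg = entry.get(section, {})
            # Redact sensitive header values (session tokens often live here)
            for header in msg.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "[REDACTED]"
            # Redact cookie values entirely
            for cookie in msg.get("cookies", []):
                cookie["value"] = "[REDACTED]"
        # Drop response bodies, which may embed tokens returned by the IdP
        entry.get("response", {}).get("content", {}).pop("text", None)

    with open(path_out, "w", encoding="utf-8") as f:
        json.dump(har, f, indent=2)

if __name__ == "__main__":
    sanitize_har(sys.argv[1], sys.argv[2])
```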
In a wild cybersecurity twist, Chinese hackers broke into a U.S. government agency's Microsoft 365 tenant by snagging a Microsoft signing key meant for verifying customer identities. By forging authentication tokens, they could target anything from regular user accounts to high-security government emails. Microsoft had to change their locks after this breach, but the whole affair reads like a cyber-spy thriller and raises big questions: what's the proper way to secure a root of trust? And is there ever a good reason for vendors to put critical logs behind a paywall? Microsoft’s investigation eventually revealed that the threat actor probably stole the key from a crash dump on one of their internal servers, showing the importance of scanning for sensitive data that might be lurking in the most unexpected of places.
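To make that last point a bit more concrete, here is a rough Python sketch of the kind of secret sweep one might run over crash dumps and log archives (our own illustration with a deliberately tiny rule set; real secret scanners ship far more patterns, and the /var/crash path is just a placeholder).

```python
import re
from pathlib import Path

# Illustrative patterns only; production secret scanners ship far larger rule sets
PATTERNS = {
    "PEM private key": re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "JWT-like token": re.compile(
        rb"eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}"
    ),
    "AWS access key ID": re.compile(rb"AKIA[0-9A-Z]{16}"),
}

def scan_directory(root: str) -> None:
    """Flag files under `root` that appear to contain key material or tokens."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for label, pattern in PATTERNS.items():
            if pattern.search(data):
                print(f"{path}: possible {label}")

if __name__ == "__main__":
    scan_directory("/var/crash")  # hypothetical dump location
```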
Have you ever imagined a scenario where hackers manipulate the underlying data used for training an AI model and wondered about the potential consequences? In this episode, we delved into a recent Black Hat USA talk by Will Pierce that addressed this concern, highlighting data poisoning yet again as a major risk to AI systems due to reliance on untrusted external training data (untrusted being the key word here). As we mentioned during the episode, this research explored two novel attack methods: “split-view” and “frontrunning” data poisoning. Pierce and his co-authors suggest countermeasures, like hashing expected content and randomizing data ingestion schedules, to make things much harder for would-be attackers. Amid heightened awareness these days of the evolving AI threatscape, the genuine risk of data poisoning and the importance of safeguarding training data integrity have emerged as significant focal points that are definitely worth our attention.
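As a small illustration of the hashing countermeasure, here is a hedged Python sketch (our own example, not the researchers’ code) of verifying a downloaded training sample against a digest recorded when the dataset index was built, so content that has changed hands or been tampered with since then fails the check.

```python
import hashlib
import urllib.request

def verify_sample(url: str, expected_sha256: str) -> bytes:
    """Download a training sample and confirm it still matches the hash
    recorded when the dataset index was built; raise if it has changed."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = resp.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError(
            f"content drift for {url}: got {digest}, expected {expected_sha256}"
        )
    return data

# Usage (values are placeholders): only samples whose content still matches
# the original snapshot make it into the training set.
# clean = verify_sample("https://example.com/image.png", "e3b0c442...")
```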
Ever wondered what it's like to be an honorary Burger King spokesperson or to find your Twitter handle mixed up in a North Korean cyber operation? Well, Chompie, the renowned hacker and computer security researcher, was kind enough to share these intriguing anecdotes and much more with us during our interview with her a few months ago. Diving into kernel post-exploitation techniques and beyond, the episode unfolded with tales of her unique path to a cybersecurity career, insights gleaned from her research into Amazon Firecracker tenant isolation, and the importance of virtualization security in general. Personally, I think this episode is a must-hear, if only to learn about how Chompie uses the Marie Kondo method for choosing her next research topic!
We had the enormous pleasure of interviewing the one and only Corey Quinn earlier this year. We spoke about the aforementioned Microsoft stolen signing key incident and what it might mean for cloud security long-term. We also discussed things that have changed about the cloud over the past decade (while other things have remained much the same, like misconfigurations). Corey also pointed out some cloud services that are unjustifiably underutilized by customers. Finally, we spoke about the relationship between security and cost in the cloud — the two are more intertwined than one might guess: building efficiently can both reduce costs and keep bad actors out of the environment, and monitoring expenses can reveal malicious actors hijacking resources for illicit purposes. It was great to get Corey’s perspective on these things, and this episode was unique since we diverged from our normal cloud security focus to discuss other aspects of the cloud.
Scott is a renowned cloud historian, one of the founding members of fwd:cloudsec (which might just be the best cloud security conference in the world, depending on who you ask, but especially if you’re asking me), and a principal cloud security researcher on the Wiz Threat Research team, where he helps cloud customers set secure defaults and get rid of pesky long-lived IAM User access keys, among many other things. We had a chance to speak with Scott about some of the history of fwd:cloudsec, the most interesting and unique talks delivered during the 2023 conference, and a bit about what organizing the conference looks like behind the scenes.
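In the spirit of Scott’s campaign against long-lived credentials, here is a minimal boto3 sketch (an assumption about how one might start hunting, not Wiz’s or Scott’s actual tooling) that lists IAM User access keys older than an arbitrary 90-day threshold so they can be rotated or retired.

```python
from datetime import datetime, timezone, timedelta
import boto3

MAX_AGE = timedelta(days=90)  # arbitrary threshold for this sketch

def find_stale_access_keys() -> None:
    """Print IAM user access keys older than MAX_AGE."""
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                age = now - key["CreateDate"]
                if age > MAX_AGE:
                    print(
                        f"{user['UserName']}: {key['AccessKeyId']} is "
                        f"{age.days} days old ({key['Status']})"
                    )

if __name__ == "__main__":
    find_stale_access_keys()
```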
Check out the podcast
For more exciting discussions, interesting revelations, and useful best practices, tune in to the Crying Out Cloud podcast! We have great topics planned for 2024.