Crying out Cloud: Our Favorite Stories of 2024

Check out our top podcast episode picks from the past year.


2024 certainly had its share of tumultuous events that shaped the perceptions of cloud customers everywhere — there were supply chain attacks, critical 0-day vulnerabilities, and advancements in both AI and AI security. All left their mark on how we approach cloud security. 

As the year came to a close, the Crying out Cloud team (Eden, Merav, and Amitai) sat down to discuss what we felt were our most interesting podcast episodes and newsletter editions of 2024. Here are our top picks from the past year:

High-Profile Vulnerabilities

Merav’s pick – XZ Utils backdoor 

CVE-2024-3094 is one of the most intriguing stories of the year. A stealthy backdoor was hidden in XZ Utils, compromising SSH authentication in certain Linux distributions. The attack was highly sophisticated, with obfuscated code that activated only under specific build conditions and included anti-debugging techniques to evade detection. Security researcher Andres Freund discovered it after noticing unusual SSH behavior, and we can only guess how far this attack would’ve gone without his vigilance. While only 2% of cloud environments were affected, this case is another reminder of the growing risks in open-source supply chains. 

Listen to the full episode here. 

Amitai’s pick – Selenium Grid misconfiguration 

As part of our investigation into the “SeleniumGreed” campaign, led by Avigayil Mechtinger, Gili Tikochinski, and Dor Laska, we discovered that threat actors were exploiting a very common misconfiguration of Selenium Grid deployments. Surprisingly, despite Selenium being very prevalent in the cloud, no one had published anything about this activity before. To me, this highlighted the importance of researching software misconfigurations, which tend to be overlooked by security teams (but not by threat actors or bug bounty hunters). 

Listen to the full episode here. 

Security Research 

Merav’s pick – SAPwned 

In July 2024, Wiz researchers discovered some serious security flaws in SAP’s AI Core platform, which they dubbed “SAPwned.” These issues could’ve let attackers get access to sensitive customer data and cloud credentials across major services like AWS, Azure, and SAP HANA Cloud. The problem came down to weak isolation—basically, attackers could run malicious AI models or training jobs that gave them way too much access. They were even able to read and modify internal Docker images and grab cluster admin rights on SAP’s Kubernetes environment. SAP fixed the vulnerabilities quickly after Wiz disclosed them, but this whole incident is a reminder that AI platforms need stronger isolation and sandboxing to keep tenants—and their data—safe.

Listen to the full episode here.  

Amitai’s pick – Azure’s firewall fumbles 

Researchers at Tenable discovered an interesting vulnerability affecting Azure, where firewall service tags could be abused to allow unintentional cross-tenant access. I found this research interesting because it touched on a very common issue and seemed to be the network layer equivalent of a confused deputy attack, which has been studied extensively and more-or-less solved through cloud provider mitigations at the identity layer. I’m a big fan of vulnerability variants. 

Listen to the full episode here. 

Eden’s pick – DeepSeek’s exposed database 

Gal Nagli, one of our researchers at Wiz, stumbled upon something wild—an exposed database at DeepSeek. All it took was a simple scan of their domains, and there it was: an unauthenticated ClickHouse database packed with sensitive data, including chat logs used for model training and API keys. The irony? DeepSeek was making waves for building AI models rumored to rival OpenAI’s, yet they missed one of the most basic security fundamentals. It’s a classic case of how rapid innovation can outpace security, and a reminder that even the most cutting-edge tech companies can leave the door wide open if they’re not careful.
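
For context on what “exposed” means here: ClickHouse ships an HTTP interface (port 8123 by default) that will run queries for anyone who can reach it if no authentication is configured. Below is a minimal sketch of how you might verify that one of your own ClickHouse endpoints isn’t in that state; the hostname is a placeholder, not DeepSeek’s.

```python
# Minimal sketch: check whether a ClickHouse HTTP interface (default port 8123)
# accepts queries without credentials. The host below is a placeholder -- only
# test systems you own or are authorized to assess.
import requests

def clickhouse_is_open(host: str, port: int = 8123, timeout: float = 3.0) -> bool:
    try:
        # The ClickHouse HTTP interface executes the statement passed in the
        # "query" parameter; "SELECT 1" only succeeds if no authentication is
        # enforced (e.g. a default user with an empty password).
        resp = requests.get(
            f"http://{host}:{port}/",
            params={"query": "SELECT 1"},
            timeout=timeout,
        )
    except requests.RequestException:
        return False
    return resp.status_code == 200 and resp.text.strip() == "1"

if __name__ == "__main__":
    target = "clickhouse.internal.example.com"  # placeholder hostname
    if clickhouse_is_open(target):
        print(f"{target}: ClickHouse HTTP interface is open without authentication")
    else:
        print(f"{target}: no unauthenticated ClickHouse HTTP interface detected")
```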

Listen to the full episode here. 

Security Incidents and Campaigns 

Merav’s pick – LLMjacking 

This year we saw the first LLMjacking attacks, and they presented us with a cool intersection of AI and cloud environments. The earliest campaign highlighted how cloud-hosted AI services can be exploited for financial gain. Instead of launching traditional AI attacks like prompt injection, attackers used stolen cloud credentials to access a variety of LLM services across AWS, Azure, and GCP. In one observed case, they exploited a publicly exposed Laravel instance to steal credentials and target Anthropic’s Claude models. To evade detection, they automated credential validation with a script that enumerates permissions without running actual queries. Additionally, they leveraged an open-source reverse proxy (OAI Reverse Proxy) to manage access to multiple compromised accounts while keeping the stolen keys hidden. Some attackers even used KeyChecker, a tool that assesses the AI-related value of stolen credentials by checking inference API access, quota limits, and content filtering policies.
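
To make that “enumerate permissions, don’t run queries” technique a bit more concrete, here’s a minimal sketch (our own illustration, not the attackers’ actual script) of what validating stolen AWS keys for Bedrock access without ever invoking a model could look like, assuming Python with boto3 and Bedrock’s read-only management APIs.

```python
# Illustrative sketch only: validate a set of AWS credentials and check whether
# they can reach Amazon Bedrock *without* running any inference, mirroring the
# "enumerate permissions, don't run queries" approach described above.
import boto3
from botocore.exceptions import ClientError

def check_bedrock_access(access_key: str, secret_key: str, region: str = "us-east-1") -> dict:
    session = boto3.Session(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name=region,
    )
    result = {"valid": False, "bedrock_listable": False, "identity": None}

    # Step 1: confirm the credentials work at all (no Bedrock usage yet).
    try:
        result["identity"] = session.client("sts").get_caller_identity()["Arn"]
        result["valid"] = True
    except ClientError:
        return result

    # Step 2: probe a read-only Bedrock management call instead of InvokeModel,
    # so no tokens are consumed and no inference charges are incurred.
    try:
        session.client("bedrock").list_foundation_models()
        result["bedrock_listable"] = True
    except ClientError:
        pass  # AccessDenied here still tells us the keys lack Bedrock permissions

    return result

if __name__ == "__main__":
    print(check_bedrock_access("AKIA-EXAMPLE", "example-secret-key"))
```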

Listen to the full episode here. 

Amitai’s pick – Diicot 

In December 2024, the Wiz Threat Research team uncovered a malware campaign targeting Linux environments, linked to the Romanian-speaking Diicot threat group. What made this campaign stand out was how deliberately it was built to operate in cloud environments. The attackers tailored their payloads based on whether the target was a container or a traditional VM, and used weak SSH credentials to gain initial access. They also swapped out their old Discord-based command-and-control channel for a quieter HTTP setup, and tampered with UPX headers to avoid detection. It’s a clear sign they’ve been following previous reports and adjusting their methods—not to be admired, but to be taken seriously by defenders working to stay ahead. 

Eden’s pick – Ollama 

This Ollama vulnerability really struck me because it perfectly illustrates how even modern AI infrastructure with new codebases can fall victim to classic vulnerabilities like path traversal. What makes it particularly concerning is that Ollama—one of the most popular open-source projects for running AI models with over 70k GitHub stars—was shipping without authentication by default. When the Wiz team scanned the internet, they found over 1,000 exposed Ollama instances hosting numerous AI models, including private ones not in public repositories. The ease of exploitation was alarming: a simple malicious manifest file could achieve full remote code execution. It reinforces a pattern we're seeing across the AI space—security getting sidelined in favor of functionality and deployment speed. While Ollama's team fixed the issue within hours (impressive!), it's a stark reminder that as organizations rush to adopt AI tools for competitive advantage, basic security practices aren't keeping pace with innovation. 
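
To illustrate the exposure side of the story, here’s a minimal sketch of how you might check whether a host you own is serving the Ollama API to unauthenticated callers; it assumes Ollama’s default port (11434), its /api/tags model-listing endpoint, and the Python requests library.

```python
# Minimal sketch: check whether a host you own exposes an unauthenticated
# Ollama API. Assumes the default Ollama port (11434) and the /api/tags
# endpoint, which lists locally installed models.
import sys
import requests

def check_ollama_exposure(host: str, port: int = 11434, timeout: float = 3.0) -> None:
    url = f"http://{host}:{port}/api/tags"
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException:
        print(f"[{host}] no Ollama API reachable on port {port}")
        return

    if resp.status_code == 200:
        models = [m.get("name") for m in resp.json().get("models", [])]
        print(f"[{host}] UNAUTHENTICATED Ollama API exposed, models: {models}")
    else:
        print(f"[{host}] Ollama-like endpoint answered with HTTP {resp.status_code}")

if __name__ == "__main__":
    # Only scan hosts you are authorized to test.
    check_ollama_exposure(sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1")
```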

Listen to the full episode here. 

Interviews with Security Thought Leaders 

Eden’s pick – Roy Reznik 

Talking to Roy Reznik about leadership at Wiz was a masterclass in breaking the mold. He shared that we don’t do traditional sprints or rigid development timelines—at all. Instead, we hire engineers who are truly passionate about what they do, trusting them to work efficiently without artificial deadlines. As Roy put it, “I’m lucky to be working on my hobby and getting paid for it.” That mindset has built an R&D culture where security isn’t tacked on at the end—it’s baked in from the start. Developers here don’t wait to be told to loop in security; they do it instinctively. Getting to see that in action every day is both humbling and inspiring. 

Listen to the full episode here. 

Amitai’s pick – Johann Rehberger 

We spoke to Johann about his work as Red Team Director at EA, the latest advancements in AI security research (including surprising prompt injection techniques), and how red teams succeed at what they do. We touched on the importance of gamification in research, ethical hacking and much more. This was a very interesting interview and one of my favorites from the past year. 

Listen to the full episode here. 

Check out the podcast and newsletter 

For more exciting discussions, interesting revelations, and useful best practices, tune in to the Crying Out Cloud podcast! We have great topics planned for 2025. 

For a monthly dose of curated cloud security news, be sure to subscribe to the Crying out Cloud newsletter – each edition features the top stories reviewed by the Wiz Research team, covering high-profile vulnerabilities, cloud incidents, and more.

Check out our favorite podcast episodes from 2023! 
