
Defense in Depth: Cloud Edition


Wiz Experts Team

What is Defense in Depth?

Defense in Depth is a security strategy that employs multiple layers of defense to protect information and information systems. This concept is based on the military strategy that defends a position by making it difficult for an enemy to penetrate through multiple defensive layers. In cybersecurity, Defense in Depth aims to ensure that if one defensive layer fails, others will continue to provide protection.

While "Defense in Depth" refers to the overall security philosophy of using multiple layers of security to protect assets, "Defense in Depth Strategies" specifically describe the various methods and practices implemented to achieve this layered security approach.

In essence, Defense in Depth is the guiding principle or concept, and Defense in Depth Strategies are the actual techniques, tools, and policies applied to embody this concept. These strategies detail how the various layers of defense are structured and interconnected to protect against and mitigate potential breaches at different levels.

Defense in depth is often considered a foundational concept of any effective security strategy. Without interlocking layers of different defenses, organizations are at the mercy of a single vulnerability or misconfiguration, which can lead to the full compromise of critical resources.

While this seems like an obvious requirement of on-premises defensive architectures, cloud environments are often left behind. The common lack of endpoint security solutions, minimal infrastructure management, and the outsized role of identity in cloud security operations often create the false impression that defense in depth in the cloud is a bit of a myth. If compromising a single root credential to a cloud environment lets attackers run the table, how could we possibly implement defense in depth?

4 Key Elements of Cloud Defense in Depth

While the cloud presents its own unique set of challenges, defense in depth in the cloud is at least as important, and as attainable, as defense in depth for on-premises environments.

Adjusting this architectural approach to the cloud requires implementing similar elements to those familiar from on-premises environments, including robust monitoring with real-time alerts sent to the SOC for investigation and containment.

Four key areas are central to successfully implementing cloud defense in depth: access management, layered MFA enforcement, implementing dual control, and “detection and response in depth”. 

1. Access management

As any security professional knows, IAM is king in the cloud. Because every resource in a cloud environment can be managed through centralized IAM, attackers can do tremendous damage by compromising a single credential.

While this risk is certainly real, it should not be treated as inherently different from the challenge of protecting highly privileged credentials in on-prem environments. A tiered access control model that relies on restricted roles assumed for specific needs, rather than broadly privileged users, is key to creating “depth” in cloud defenses.

Much like the 3-tier model can mitigate simple privilege escalation in Active Directory environments, granular least-privilege roles in the cloud go a long way in mitigating the impact of credential compromise. When no IAM users are granted immediate high privileges, but instead must assume different roles to perform different functions, compromising a single credential will not allow attackers to immediately take over the entire environment.

These role assumptions should be restricted to allowed source IPs (ideally jump servers) for more sensitive operations. While some privileged roles may still be used, reaching them will require additional steps or additional credential compromises, giving the SOC opportunities to detect and stop attackers as they work harder to achieve their goals.
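As a rough illustration, here is a minimal AWS-flavored sketch (using boto3) of a role whose trust policy only allows assumption from a jump-server IP range. The account ID, CIDR, and role name are hypothetical placeholders, and the exact conditions would depend on your environment.

```python
# Minimal sketch: create a sensitive role that can only be assumed from a
# jump-server IP range. Account ID, CIDR, and role name are hypothetical.
import json

import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # principals in this account
            "Action": "sts:AssumeRole",
            "Condition": {
                # Only allow AssumeRole calls originating from the jump servers
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="sensitive-ops-admin",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Assumed only from jump servers for sensitive operations",
)
```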

A final important best practice for maintaining effective access management in the cloud is to completely avoid the regular usage of root accounts. While other administrative accounts can be limited through access management policies, a compromised cloud root account cannot be effectively hindered through a tiered access model. 

In the vast majority of cases, root accounts should not be used for any daily operations. Once a cloud account is set up and the relevant administrative users are created, root account credentials can be stored offline and used only in extremely rare cases, following carefully predefined protocols. Once root accounts are safely managed offline, all other access can be effectively managed to facilitate defense in depth.

2. Layered MFA enforcement

As many recent attacks have shown, MFA alone is not a perfect solution. Attackers leveraging MFA fatigue, performing full man-in-the-middle attacks to steal MFA codes, or hijacking sessions already authenticated with MFA have eroded confidence in MFA as the ultimate protector of sensitive data.

However, enforcing multiple layers of MFA across cloud environments can go a long way towards creating defense in depth. 

The key to layered MFA is re-requiring MFA whenever access to potentially sensitive roles or systems is attempted. Under this model, even a hijacked or stolen session initially authenticated with MFA will not enable attackers to achieve all their goals without going through the MFA process again.

While practical considerations prevent requiring MFA for every action, particularly sensitive actions, such as accessing uniquely sensitive data or performing delicate operations like deleting, encrypting, or exporting systems or large volumes of data, can usually be put behind an additional MFA requirement.

As these operations are not extremely common, the additional nuisance to legitimate users will not be overwhelming, while the potential impediment to attackers is dramatic. Naturally, if MFA can be beaten once, it can be beaten several times. However, this layered approach to MFA reduces the likelihood of attackers quickly accessing key resources, creating additional opportunities (and extra time) for defenders to detect and stop the worst parts of potential attacks.
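One way to approximate this layered requirement in AWS IAM, sketched below, is a policy that denies especially sensitive actions unless the session authenticated with MFA recently. The action list, the 15-minute window, and the role name are illustrative assumptions, not recommendations.

```python
# Minimal sketch: deny destructive actions unless MFA was used within the
# last 15 minutes. Action list, window, and role name are illustrative.
import json

import boto3

require_recent_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySensitiveActionsWithoutRecentMFA",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteBucket",
                "ec2:DeleteSnapshot",
                "kms:ScheduleKeyDeletion",
            ],
            "Resource": "*",
            "Condition": {
                # True when MFA is absent, or when it is older than 900 seconds
                "NumericGreaterThanIfExists": {"aws:MultiFactorAuthAge": "900"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="sensitive-ops-admin",  # hypothetical role from the earlier sketch
    PolicyName="require-recent-mfa",
    PolicyDocument=json.dumps(require_recent_mfa),
)
```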

3. Dual control

Effective access management and MFA go a long way towards creating defense in depth in cloud environments, but may not be enough against determined and persistent attackers. 

Hijacking of existing privileged sessions and insider threats remain crucial risks not fully mitigated by these measures. As the cryptocurrency exchange incident described below shows, some assets are so sensitive that an attacker gaining control of them can have devastating consequences.

Combating this daunting risk demands acknowledging that some roles are simply too privileged for any single person, and some assets are too sensitive to be managed by any single system. 

Implementing the principle of dual control for the most sensitive systems and operations means requiring at least two separate and independent modes of authentication and authorization to perform certain operations.

For manual operations this is simple to do, for example by requiring more than one user, each with their own credentials and MFA, to manually approve operations like deleting resources or transferring large sums of money. This is analogous to well-known financial controls, such as requiring two different authorized users to initiate and approve outgoing payment orders above a certain threshold.

Though less intuitive, the same principle can be implemented for automated systems, by ensuring that uniquely sensitive operations performed by applications are separated into distinct and independent authorization systems. 

Implementation will vary depending on the specific systems, but let’s return to our cryptocurrency exchange friends (whose breach is described below) for an example. To prevent similar future attacks, we implemented the following solution for large customer withdrawals:

  • The customer initiates a request on two separate systems, simplified here as servers A and B.

  • Server A is the only machine able to trigger a withdrawal operation with a third-party partner, enforced through access control.

  • Server B (in a separate environment managed using different credentials) is the only machine with access to the credential necessary to sign the withdrawal operation.

  • Server B independently receives the withdrawal request and signs a valid operation, but cannot trigger it, instead sending the signed operation to server A.

  • Server A independently receives the withdrawal request, then validates the signed operation from server B and triggers the transaction.

This simplified architecture enables automated customer withdrawals while avoiding a single point of failure by implementing dual control. The effective segregation of servers A and B means an attacker compromising either one of them has no way of triggering malicious transactions without also compromising the other, separately managed server. While that additional compromise is attempted, defenders have many more opportunities to detect and prevent a successful attack.
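To make the separation concrete, here is a minimal Python sketch of the flow, using an Ed25519 signature (via the `cryptography` package) as a stand-in for whatever signing scheme the real systems use. The function names and withdrawal format are simplifications invented for illustration.

```python
# Minimal sketch of dual control: server B holds the signing key but cannot
# trigger withdrawals; server A can trigger withdrawals but only holds the
# public key for verification. Names and payload format are illustrative.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key generation would happen once, inside server B's isolated environment.
_server_b_private_key = Ed25519PrivateKey.generate()
SERVER_A_PUBLIC_KEY = _server_b_private_key.public_key()  # the only key server A ever sees


def server_b_sign(withdrawal: dict) -> bytes:
    """Server B: independently receives the request and signs it, but cannot trigger it."""
    payload = json.dumps(withdrawal, sort_keys=True).encode()
    return _server_b_private_key.sign(payload)


def server_a_trigger(withdrawal: dict, signature: bytes) -> bool:
    """Server A: the only machine allowed to call the third-party partner,
    and it only does so after verifying server B's signature."""
    payload = json.dumps(withdrawal, sort_keys=True).encode()
    try:
        SERVER_A_PUBLIC_KEY.verify(signature, payload)
    except InvalidSignature:
        return False  # reject: missing or forged signature
    # trigger_partner_withdrawal(withdrawal)  # hypothetical partner API call
    return True


withdrawal = {"customer": "c-123", "amount": 250_000, "currency": "USDC"}
assert server_a_trigger(withdrawal, server_b_sign(withdrawal))
```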

4. Detection and response in depth

Facilitating effective “depth” in cloud defenses requires adapting traditional approaches to detecting and containing threats to the cloud.

The common lack of EDRs, firewalls, and physical network segmentation in cloud environments unfortunately often leads to a “flat” attitude toward detection and response. Such an attitude tends to rely on a few log sources and automated containment mechanisms to detect attacks, making failed detections much more likely.

Instead, a mature cloud detection scheme must continuously monitor, enrich, and correlate logs from across the environment. 

Combining events from across the cloud control, identity, compute, data, and network planes with relevant application logs facilitates the implementation of advanced alerting to detect potential lateral movement and privilege escalation. 
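As a toy illustration of this kind of correlation, the sketch below flags a session that assumes a role from outside the expected jump-server range and then performs an identity-plane change. The event fields loosely mirror CloudTrail; the allowed CIDR prefix and the event names are assumptions made for the example.

```python
# Toy correlation sketch: flag sessions that assume a role from an unexpected
# IP and then make identity-plane changes. Field names loosely mirror
# CloudTrail; the CIDR prefix and event list are illustrative assumptions.
ALLOWED_ASSUME_PREFIXES = ("203.0.113.",)  # hypothetical jump-server range
PRIV_ESC_EVENTS = {"CreateAccessKey", "AttachUserPolicy", "PutRolePolicy"}


def correlate(events: list[dict]) -> list[str]:
    alerts = []
    suspicious_sessions = set()
    for event in events:
        if event["eventName"] == "AssumeRole" and not event["sourceIPAddress"].startswith(
            ALLOWED_ASSUME_PREFIXES
        ):
            suspicious_sessions.add(event["sessionId"])
        elif event["eventName"] in PRIV_ESC_EVENTS and event["sessionId"] in suspicious_sessions:
            alerts.append(
                f"Possible privilege escalation in session {event['sessionId']}: {event['eventName']}"
            )
    return alerts


events = [
    {"sessionId": "s-1", "eventName": "AssumeRole", "sourceIPAddress": "198.51.100.7"},
    {"sessionId": "s-1", "eventName": "CreateAccessKey", "sourceIPAddress": "198.51.100.7"},
]
print(correlate(events))  # -> one alert for session s-1
```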

When this comprehensive approach is taken to create “Detection-in-Depth”, it quickly lends itself to establishing equally “deep” response procedures. Instead of relying on single containment bottlenecks like disabling users, comprehensive alerting enables rapid and accurate containment at the resource level, such as by stopping compromised instances or restricting access to critical data. 
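A simple example of resource-level containment, sketched below with boto3, is quarantining a suspect instance by swapping its security groups and stopping it. The instance ID and quarantine security group are hypothetical placeholders.

```python
# Minimal sketch of resource-level containment: isolate a suspect instance
# rather than disabling a whole user. IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")


def quarantine_instance(instance_id: str, quarantine_sg_id: str) -> None:
    # Replace all security groups with an empty "quarantine" group
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[quarantine_sg_id])
    # Optionally stop the instance once forensic snapshots have been taken
    ec2.stop_instances(InstanceIds=[instance_id])


quarantine_instance("i-0123456789abcdef0", "sg-0123456789abcdef0")
```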

This joint “detection and response in depth” approach is key to fighting attacks while having minimal impact on production.

Real-world example: Multi-million dollar breach of cryptocurrency exchange

Responding to a large cloud incident at a cryptocurrency exchange, we witnessed attackers leveraging the lack of defense in depth to the extreme. 

Instead of a traditional attack vector – finding an initial foothold in the organization and then performing lateral movement and privilege escalation to reach critical assets – attackers went straight for the golden credential. 

Spending months on a sophisticated social engineering campaign impersonating mathematicians researching blockchain, the attackers established a collegial relationship with the victim’s top crypto engineer. Only after this relationship was solidified did they send the engineer a “data sharing program” to install, resulting in the execution of their malware on the engineer’s laptop.

As this laptop was routinely used to access and manage “hot wallet” systems in the cloud, it regularly held credentials to sensitive cloud servers. Once these were compromised, it took the attackers less than 24 hours to initiate malicious transactions and steal hundreds of millions of dollars.

CNAPP and Defense in Depth Strategy

A cloud-native application protection platform (CNAPP), when employed effectively, delivers a comprehensive defense-in-depth strategy for cloud-native security. This strategy encompasses a layered approach, spanning from attack prevention to real-time detection and response.

Prevention:

  • Agentless Visibility: Unlike traditional tools, CNAPPs can gain valuable insights into cloud activity without requiring agent software installed on every device. This allows for broader visibility and faster identification of potential vulnerabilities.

  • Risk Reduction: CNAPPs continuously scan your cloud environment for misconfigurations and insecure practices. By proactively addressing these weaknesses, you significantly reduce the attack surface for malicious actors.

Detection and Response:

  • Lightweight Workload Protection: CNAPPs deploy lightweight agents within workloads to monitor activity in real-time. This allows for immediate detection of suspicious behavior, potentially catching breaches as they occur.

  • End-to-End Visibility: A key strength of CNAPPs, as emphasized by Wiz, is their ability to provide full visibility into attacks across the entire cloud environment. This includes infrastructure, code, and workloads, enabling a comprehensive understanding of the attack scope.

  • Faster, More Efficient Response: With real-time threat detection and complete attack visibility, CNAPPs empower security teams to react swiftly and efficiently. This can involve automated responses like isolating infected workloads or blocking malicious traffic.
