How to use AWS Resource Control Policies

Unlock the Power of AWS Resource Control Policies: Enforce Security and Streamline Governance Across Your Organization.

AWS recently released Resource Control Policies (RCPs), the resource-policy equivalent of Service Control Policies (SCPs). RCPs allow you to set limits across your entire AWS Organization on who can access your resources (and how), even when those principals are outside of your Organization. For example, you could mandate that no one outside your Organization can access your S3 buckets, so even if someone in your Organization wrote an S3 bucket policy that would otherwise make a bucket public, it would not actually be public. In this post we’ll discuss some use cases and how you can deploy them safely.

If you’re already familiar with SCPs, then RCPs are simply a variation of them: just like SCPs, they have length limits; they grant no privileges, only define constraints; they support a restricted subset of full IAM policy syntax; and they can be applied to the entire AWS Organization, to OUs, or to individual accounts.

Sample use cases for RCPs 

There are two general problems with resource policies on AWS that RCPs can help with.  

  • Control over resource sharing: There hasn’t been a way to give someone the ability to modify a resource policy without also letting them share the resource too broadly. For example, you may wish to allow someone to share resources between two different accounts that belong to your company, but not make that resource public. This is now possible. For S3 buckets, AWS did add S3 Block Public Access, which could be used to prevent someone from making an S3 bucket accessible to everyone, but that didn’t prevent them from sharing it with a single unknown external account, and it didn’t help for other resource types.

  • Policy standards across resources: There are some statements people wished all their resource policies had, but there was no way to enforce this. For example, you may wish to require all S3 buckets to be accessed via TLS 1.3 (a similar policy is discussed here). Previously, you would need to add a statement to every S3 bucket policy to deny access from anything less; now you can create a single RCP that is applied across your entire organization, as shown in the sketch after this list.
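As a minimal sketch of that idea (the choice of TLS 1.3 and of applying it to all S3 actions is illustrative; adjust both to your own requirements), the following RCP statement denies any S3 request made over a lower TLS version, using the s3:TlsVersion condition key:

{
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "*",
    "Condition": {
        "NumericLessThan": {
            "s3:TlsVersion": "1.3"
        }
    }
}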

Generally speaking, until now you could not enforce company-wide mandates on resource policies. Some companies have tried using auto-remediation for this problem, but RCPs offer a benefit over that and other solutions: there is no window of potential abuse.

Data perimeter 

An important use case for RCPs is setting up data perimeters. RCPs don’t add any capabilities that an existing data perimeter strategy couldn’t already have, but they make the strategy easier to implement and better ensure its coverage. Previously you would need to ensure that every resource policy contained certain statements; now you can set those statements via RCPs.

AWS’s data-perimeter-policy-examples repository has been updated to include sample policies. It is highly recommended you review the sample policies there. I’ll discuss two of the capabilities.  

Prevent unknown accounts from assuming IAM roles 

One of the statements from the data perimeter example policies has the statement ID EnforceOrgIdentities. This contains very useful functionality to ensure that only your own accounts, or known vendors, can assume IAM roles into your accounts. Here is a simplified version of the policy that only includes this functionality:

{
    "Sid": "EnforceOrgIdentities",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "sts:AssumeRole",
    "Resource": "*",
    "Condition": {
        "StringNotEqualsIfExists": {
            "aws:PrincipalOrgID": "<my-org-id>",
            "aws:PrincipalAccount": [
                "<third-party-account-a>",
                "<third-party-account-b>"
            ]
        }
    }
}

This ensures that the IAM roles in your accounts can only be assumed by principals within your organization, or by third-party vendors you have approved. So even if someone creates an IAM role with a trust policy that would otherwise make it publicly assumable, it will only be assumable from accounts you trust. This is very useful because it prevents two common problems:

  1. It prevents someone from allowing an unexpected vendor to access their account.  

  2. It prevents someone from allowing some form of shadow IT to access their account. This could be a personal AWS account that they use for testing and decided to let access the production AWS environment, or an AWS account with a more legitimate business use case that someone set up outside of the Organization.

The full statement in the data perimeter repo prevents many other types of access beyond just the sts:AssumeRole shown above, but is necessarily more complex as a result.
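As a rough illustration only (this is a simplification, not the repo’s exact statement), the same conditions can be broadened to cover all actions, with a carve-out so that AWS service principals are not blocked, which is the pattern the data perimeter examples use:

{
    "Sid": "EnforceOrgIdentities",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "*",
    "Resource": "*",
    "Condition": {
        "StringNotEqualsIfExists": {
            "aws:PrincipalOrgID": "<my-org-id>",
            "aws:PrincipalAccount": [
                "<third-party-account-a>",
                "<third-party-account-b>"
            ]
        },
        "BoolIfExists": {
            "aws:PrincipalIsAWSService": "false"
        }
    }
}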

Restrict OIDC access 

Another very useful statement from the data perimeter repository is EnforceTrustedOIDCTenants. With this, you can limit what type of OIDC access is allowed into your environment, which is another way vendors or tools you use may access it. In a previous blog, we discussed a misconfiguration (no longer possible) where someone could leave the sub condition out of their IAM role trust policy entirely when setting up an OIDC integration with GitHub Actions. That condition is supposed to restrict access to a specific GitHub organization, repo, and branch, but someone could still write a conditional check that matches any GitHub organization, repo, and branch by using a StringLike with a value of “*”.

The first statement below ensures that only GitHub Actions can be used as an OIDC provider; you can add more providers to that statement as needed. The second statement restricts GitHub Actions access to only the GitHub organization octo-org (you will need to replace this). Even if someone did not configure their IAM role trust policy correctly, these statements reduce the risk of exploitation. Similar RCPs can be made for many other OIDC integrations.

{ 
    "Sid": "EnforceTrustedOIDCProviders", 
    "Effect": "Deny", 
    "Principal": "*", 
    "Action": "sts:AssumeRoleWithWebIdentity", 
    "Resource": "*", 
    "Condition": { 
        "Null": { 
            "token.actions.githubusercontent.com:sub": "true" 
        } 
    } 
}, 
{ 
    "Sid": "EnforceGitHubOrgAccess", 
    "Effect": "Deny", 
    "Principal": "*", 
    "Action": "sts:AssumeRoleWithWebIdentity", 
    "Resource": "*", 
    "Condition": { 
        "StringNotLikeIfExists": { 
            "token.actions.githubusercontent.com:sub": "repo:octo-org/*" 
        }, 
        "Null": { 
            "token.actions.githubusercontent.com:sub": "false" 
        } 
    } 
} 

The data perimeter repo from AWS contains a few other use cases for RCPs that should be reviewed, along with the README for that directory. 

Prevent undesired actions when performed by external accounts 

One gotcha with SCPs is that they do not apply to principals outside of your Organization, but RCPs do. People sometimes try to use SCPs to prevent actions that can simply be circumvented from an external account.

For example, if we have an important S3 bucket, we may wish to prevent anyone from deleting the objects in it. You could use S3 Object Lock, but imagine someone instead uses an SCP in their Org to deny s3:DeleteObject and s3:PutBucketLifecycleConfiguration. This can create a false sense of security: if an attacker gains Administrator access to the AWS account containing the S3 bucket, they could set the bucket policy to grant s3:* to an attacker-owned account. Then, from the attacker-owned account, they could delete the objects in the bucket, because the SCP does not apply to them.

With RCPs, you can ensure that no one, not even a principal in an account outside of the Organization, can delete objects in the bucket. Further, you can ensure that the bucket cannot be shared outside of the Organization in the first place, as in the sketch below.
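Here is a minimal sketch of the deletion guard (the Sid and the bucket name important-bucket are placeholders; a real deployment would pair this with an EnforceOrgIdentities-style statement to block external sharing as well):

{
    "Sid": "ProtectCriticalBucketObjects",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:DeleteObject",
    "Resource": "arn:aws:s3:::important-bucket/*"
}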
 

Deploying RCPs 

Deploying RCPs should be done much like deploying SCPs. RCPs, like SCPs, do not have a “dry-run” or “audit” mode that lets you see what might break before the policy is enforced, so you should review existing access patterns first. For example, if you plan to prevent public access to S3 buckets, you should ensure you do not have S3 buckets that must be public.

One helpful technique is to review Access Analyzer findings to understand which resources are currently public or shared externally. This blog post from AWS describes how Access Analyzer can be used to review access to S3 buckets, and Access Analyzer now supports 15 services. RCPs, however, currently support only five services (s3, sts, sqs, secretsmanager, and kms), all of which are covered by Access Analyzer.

Access Analyzer can tell you which external principals have been granted access to resources, but not how they are allowed to access them, so depending on the RCP you wish to enforce, you may need to investigate further. CloudTrail events may help here, but be aware that CloudTrail does not record data events by default, and even if you enable data events, some events are still never recorded by CloudTrail. For example, CloudTrail is not able to record sqs:GetQueueAttributes.

In addition to reviewing logs before you deploy an RCP, you should also review them afterward to see what may have broken. You should know what type of access should work and what should be prevented, so that you recognize what those error events look like and can ensure the RCP is preventing the type of access you expect.

RCPs can be applied to individual accounts, OUs, or entire Organizations, just like SCPs. As with SCP deployments, you should test the RCP thoroughly in a sandbox account, then perform a rolling deployment: promote the RCP from that individual sandbox account to dev environments, then staging environments, and finally production environments. You may also wish to apply it to only groups of production environments at a time.

The purpose of doing the deployment like this is to try to catch any problems before you cause a production outage, and to limit the blast radius of a possible production outage. Applying an RCP to a production account is just like any other production change.  

One thing to be aware of with RCPs – which is obvious, but I worry will eventually bite someone – is that if an RCP is your sole protection against an issue, then removal or modification of that RCP could result in a security incident.  

For example, suppose someone misconfigured an IAM role with a trust policy that made it publicly assumable, but the role was protected by an RCP that restricted access to only accounts in the Organization; removal of that RCP would allow an attacker to assume the role. This could happen if the RCP is modified or removed, or if the protected AWS account is moved to another OU or to another AWS Organization entirely. The RCP does not travel with the account, so you need to ensure you continue applying the RCP wherever the account goes, or copy the RCP protections into the resource policies of the resources within the account being protected, as in the sketch below.
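As a minimal sketch of that fallback (the bucket name is a placeholder, and a production policy would need the same vendor and service-principal carve-outs discussed earlier), the equivalent protection can be written directly into an S3 bucket policy:

{
    "Sid": "DenyExternalPrincipals",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::important-bucket",
        "arn:aws:s3:::important-bucket/*"
    ],
    "Condition": {
        "StringNotEqualsIfExists": {
            "aws:PrincipalOrgID": "<my-org-id>"
        },
        "BoolIfExists": {
            "aws:PrincipalIsAWSService": "false"
        }
    }
}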

Another thing to be aware of is that, like SCPs, when an RCP prevents an action, the error message (in the testing I did) is only a generic AccessDenied and does not indicate that an RCP prevented the action from succeeding. This will make troubleshooting difficult for engineers who are not aware of the RCPs being applied to the accounts they work in.

Conclusion 

RCPs are a very helpful addition for ensuring guardrails are applied to all resources in an AWS Organization. As with SCPs, however, they should be well tested and rolled out gradually rather than across the entire Organization all at once, or you will risk breaking things.
