CloudWatch-based Auto-Remediation of Public S3 Buckets

John Byrd
Nov 11, 2018

As part of the Security Architecture at my company, we made the decision to stay away from legacy S3 ACL management (both object and bucket ACLs) entirely and instead rely solely on IAM and bucket policies.

Additionally, we made the broad statement that all publicly accessible S3 resources must be served through CloudFront. In this scenario, the bucket is not flagged as Public by S3, nor do the public S3 read/write Config rules (specifically s3-bucket-public-write-prohibited and s3-bucket-public-read-prohibited) identify the resource as Noncompliant.
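For reference, the usual way to let CloudFront (and only CloudFront) read such a bucket is a bucket policy granting s3:GetObject to the distribution's origin access identity. A sketch is below; the OAI ID and bucket name are placeholders of mine:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
```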

For any situation where a bucket was made public due to a failure in preventative measures, we wanted to remediate the public S3 bucket automatically. We initially looked at CloudCustodian, but realized its Block Public S3 Object ACL policy doesn't address S3 resources made public by bucket policies. Taking a note from CloudCustodian's deployment, we made a few modifications of our own.

First, I created a rule in CloudWatch Events that watches for compliance changes in the S3 public read/write Config rules. This reports all changes in compliance (from Compliant to Noncompliant and vice versa).

CloudWatch event pattern:

{
  "source": [
    "aws.config"
  ],
  "detail-type": [
    "Config Rules Compliance Change"
  ],
  "detail": {
    "messageType": [
      "ComplianceChangeNotification"
    ],
    "configRuleName": [
      "s3-bucket-public-read-prohibited",
      "s3-bucket-public-write-prohibited"
    ]
  }
}
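If you prefer to keep the rule in code rather than clicking through the console, the same pattern can be deployed with boto3. This is my own sketch, and the rule name is a placeholder:

```python
import json

# The event pattern above, expressed as a Python dict.
EVENT_PATTERN = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "messageType": ["ComplianceChangeNotification"],
        "configRuleName": [
            "s3-bucket-public-read-prohibited",
            "s3-bucket-public-write-prohibited",
        ],
    },
}


def deploy_rule(rule_name="s3-public-compliance-change"):
    """Create or update the CloudWatch Events rule (needs AWS credentials)."""
    import boto3  # imported here so the pattern itself stays usable offline

    events = boto3.client("events")
    return events.put_rule(
        Name=rule_name,
        EventPattern=json.dumps(EVENT_PATTERN),
        State="ENABLED",
    )
```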

I created an SQS queue as the rule's target and started triggering compliance changes. Once I was able to check the queue and see the messages, we were able to determine how to parse the JSON message produced by the CloudWatch event. Below is an example of a message that ended up in the queue:

{
  "version": "0",
  "id": "efe14ec1-b7a1-3b32-54fe-fbceb8a27c2f",
  "detail-type": "Config Rules Compliance Change",
  "source": "aws.config",
  "account": "111111111111",
  "time": "2018-10-31T18:12:32Z",
  "region": "us-east-1",
  "resources": [],
  "detail": {
    "resourceId": "bucketname",
    "awsRegion": "us-east-1",
    "awsAccountId": "111111111111",
    "configRuleName": "s3-bucket-public-write-prohibited",
    "recordVersion": "1.0",
    "configRuleARN": "arn:aws:config:us-east-1:111111111111:config-rule/config-rule-zzko7t",
    "messageType": "ComplianceChangeNotification",
    "newEvaluationResult": {
      "evaluationResultIdentifier": {
        "evaluationResultQualifier": {
          "configRuleName": "s3-bucket-public-write-prohibited",
          "resourceType": "AWS::S3::Bucket",
          "resourceId": "bucketname"
        },
        "orderingTimestamp": "2018-10-31T18:12:30.867Z"
      },
      "complianceType": "NON_COMPLIANT",
      "resultRecordedTime": "2018-10-31T18:12:31.827Z",
      "configRuleInvokedTime": "2018-10-31T18:12:31.568Z",
      "annotation": "The S3 bucket policy allows public write access."
    },
    "oldEvaluationResult": {
      "evaluationResultIdentifier": {
        "evaluationResultQualifier": {
          "configRuleName": "s3-bucket-public-write-prohibited",
          "resourceType": "AWS::S3::Bucket",
          "resourceId": "bucketname"
        },
        "orderingTimestamp": "2018-10-31T13:48:36.352Z"
      },
      "complianceType": "COMPLIANT",
      "resultRecordedTime": "2018-10-31T15:25:13.421Z",
      "configRuleInvokedTime": "2018-10-31T15:25:13.172Z"
    },
    "notificationCreationTime": "2018-10-31T18:12:32.828Z",
    "resourceType": "AWS::S3::Bucket"
  }
}
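Only a few fields in that message drive everything downstream. The helper below (my own sketch, not part of the final Lambda) pulls out the values the remediation logic keys on:

```python
import json


def parse_compliance_event(message_body):
    """Extract the fields the remediation cares about from a
    Config Rules Compliance Change message."""
    detail = json.loads(message_body)["detail"]
    return {
        "bucket": detail["resourceId"],
        "rule": detail["configRuleName"],
        "compliance": detail["newEvaluationResult"]["complianceType"],
    }


# Trimmed-down version of the sample message above:
sample = json.dumps({
    "detail": {
        "resourceId": "bucketname",
        "configRuleName": "s3-bucket-public-write-prohibited",
        "newEvaluationResult": {"complianceType": "NON_COMPLIANT"},
    }
})
parsed = parse_compliance_event(sample)
```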

Taking this information, we went to work on a Lambda to parse the event and take action against it. We needed to ensure that any action would only be taken against the bucket identified by resourceId, and only when the newEvaluationResult showed a complianceType of NON_COMPLIANT.

Once this was determined, we wanted to make sure offenders were directed to Security Architecture with any questions. Since resource policies only allow a limited set of valid elements, the Sid element made the most sense to carry that message.
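Concretely, the secured policy the Lambda writes ends up looking like this (the account ID and bucket name here are taken from the sample event above; a bare account ID as principal is shorthand for the account root, which restricts access to entities on the account):

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "ContactSecurityArchitecture",
      "Effect": "Allow",
      "Principal": { "AWS": "111111111111" },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
```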

Below is the Python code we eventually came up with to evaluate the message from CloudWatch. Since the CloudWatch event fires on all compliance changes, we didn't want to risk breaking an existing public bucket as someone changed it to a compliant state, so the Lambda evaluates whether the new status is Noncompliant and takes action only if that condition is met. If someone had made a bucket public with a bucket policy, their policy would be replaced by a secured policy that restricts access to just entities on the account.

import json

import boto3


def policy_check_JSON(jsondata, context):
    s3 = boto3.client("s3")
    bucket_name = str(jsondata["detail"]["resourceId"])
    account_id = str(jsondata["detail"]["awsAccountId"])
    resource_name = "arn:aws:s3:::" + bucket_name + "/*"
    # Secured replacement policy: access limited to principals on the
    # account, with a Sid pointing offenders at Security Architecture.
    policy_json = {
        "Version": "2008-10-17",
        "Statement": [
            {
                "Sid": "ContactSecurityArchitecture",
                "Effect": "Allow",
                "Principal": {"AWS": account_id},
                "Action": "s3:*",
                "Resource": resource_name,
            }
        ],
    }
    policy = json.dumps(policy_json)
    compliance = jsondata["detail"]["newEvaluationResult"]["complianceType"]
    # Only remediate a change to Noncompliant; a change back to
    # Compliant is left alone.
    if compliance == "NON_COMPLIANT":
        s3.put_bucket_policy(
            Bucket=bucket_name,
            ConfirmRemoveSelfBucketAccess=False,
            Policy=policy,
        )
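With the Lambda in place, the remaining step is pointing the CloudWatch Events rule at it and letting the events service invoke the function. A hedged sketch with boto3 follows; the rule name, target ID, statement ID, and ARNs are placeholders of mine:

```python
def target_config(rule_name, function_arn):
    """Build the put_targets arguments that point the rule at the Lambda."""
    return {
        "Rule": rule_name,
        "Targets": [{"Id": "remediate-public-s3", "Arn": function_arn}],
    }


def wire_up(rule_name, rule_arn, function_arn):
    """Attach the Lambda to the rule (needs AWS credentials)."""
    import boto3

    # Allow CloudWatch Events to invoke the function...
    boto3.client("lambda").add_permission(
        FunctionName=function_arn,
        StatementId="AllowConfigComplianceRule",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule_arn,
    )
    # ...then register the function as the rule's target.
    return boto3.client("events").put_targets(**target_config(rule_name, function_arn))
```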

I’ll describe how I deployed this at scale in another article soon.

Thanks to my co-worker Cory for his assistance with the Python code.

Written by John Byrd

Modernizing companies’ AWS security and governance programs at scale.
