Solution Spotlight – Control Foundations

Our Control Foundations Solution enables security teams to define an organization’s controls through a Policy-as-Code framework.

Security controls enable an organization to continuously evaluate the security of its resources, identify potential vulnerabilities across its AWS ecosystem, and prevent vulnerabilities from being introduced to its environments. 

In the cloud, there are many layers at which vulnerabilities present themselves. From account-level permissions down to application code, and at every layer in between, organizations strive to implement tools and processes to mitigate these vulnerabilities. However, with an attack surface that is more expansive than ever, many organizations come up short in their security strategy and face financial and reputational ramifications. 

To solve the challenge outlined above, we take a modularized approach, presenting the different security tools and controls solutions as individual Baselines that address different types of vulnerabilities across the AWS cloud environment. Since these Baselines are independent of one another, they are easy to implement as self-service solutions using services such as Service Catalog. 

Our goal is always to automate to higher levels for better maintainability, auditability, repeatability, and reusability, so our Control Foundations rely heavily on a technique called “Policy as Code.” Policy as Code (PaC) is a relatively new approach to building and maintaining security and compliance controls. It is similar to the Infrastructure as Code (IaC) approach familiar to organizations that use CloudFormation, Terraform, Helm Charts (and Kubernetes generally), Ansible, Docker, etc. because it seeks to bring processes that were historically managed manually through runbooks and manual reviews into the domain of version-controlled software code. 

We regularly refer to tools like Open Policy Agent (OPA) in this foundation, and we ultimately outline our primary means of implementing PaC at scale using a pattern called the Control Broker, which we describe in detail in a later section.  

The following types of controls make up the Control Foundations solution. Organizing controls by type helps us describe the stages of the application lifecycle they target and the types of components we can use to implement these controls.  

  • Preventative Controls – Controls that prevent resources from being configured with vulnerabilities. 
  • Detective Controls – Controls that identify vulnerabilities that get past the preventative control barrier and currently exist in the environment. 

Preventative Controls 

Preventative Controls are guardrails that are integrated into IAM permissions policies and infrastructure deployment processes to prevent resources from being deployed with vulnerabilities. The following baselines make up our preventative controls strategy. 

CI/CD Pipeline Integration – Regardless of the security controls an organization adopts, they should be implemented in a CI/CD pipeline. This allows organizations to integrate controls into the critical path to deployment and ensure deployments are blocked if they don’t satisfy the controls’ requirements. Without a CI/CD pipeline, controls would have to be deployed ad hoc, which is a much less repeatable and scalable approach. 

IAM Least-Privilege Enforcement – Before users ever interact with a piece of AWS infrastructure, they are restricted by AWS permissions and should only have the permissions necessary to perform their jobs – this is called the principle of least privilege. Creating a least privilege strategy typically involves the careful creation of an AWS IAM policy, user, role, or organizational SCP, so we consider AWS IAM to be the cornerstone of preventative security. We recommend that the IAM policies are tested thoroughly, continuously, and in an automated fashion as part of the software development lifecycle. This can be accomplished in the build stage of the CI/CD pipeline using policy simulations and/or unit tests. Policies that do not pass these tests should fail at the build stage and not be deployed. 
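For example, a build-stage unit test might use the IAM Policy Simulator API to assert that a candidate policy denies actions outside its intended scope. The following Python sketch is illustrative only; the policy, bucket name, and set of forbidden actions are hypothetical, and `simulate_forbidden_actions` requires AWS credentials at call time:

```python
import json

def assert_all_denied(simulation_results):
    """Return True iff every simulated action was denied."""
    return all(
        r["EvalDecision"] in ("implicitDeny", "explicitDeny")
        for r in simulation_results
    )

# Hypothetical least-privilege policy under test: read-only access to one bucket.
CANDIDATE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-bucket/*"],
    }],
}

def simulate_forbidden_actions(actions):
    """Run the candidate policy through the IAM Policy Simulator API.

    boto3 is imported lazily so the pure helper above can be used
    without AWS credentials or the boto3 package installed.
    """
    import boto3
    iam = boto3.client("iam")
    resp = iam.simulate_custom_policy(
        PolicyInputList=[json.dumps(CANDIDATE_POLICY)],
        ActionNames=actions,
    )
    return resp["EvaluationResults"]
```

In a CodeBuild step, the build would call `simulate_forbidden_actions(["s3:DeleteObject", "iam:CreateUser"])` and fail if `assert_all_denied` returns False, keeping over-permissive policies out of the deployment path.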

Automated Infrastructure-as-Code Assessments via Policy as Code – AWS service infrastructure security and compliance is in an exciting period of growth right now, thanks both to the widespread acceptance of Infrastructure as Code (IaC) and the introduction of purpose-built Policy as Code (PaC) tools. These developments give organizations the power to discover and prevent vulnerable infrastructure configurations before they are ever deployed. This can save organizations from catastrophe without any human intervention. 
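To illustrate the kind of assertion a PaC tool expresses, the following Python sketch checks a CloudFormation template for S3 buckets that lack server-side encryption. In practice this check would be written as an OPA Rego policy; the template here is a contrived example:

```python
def unencrypted_buckets(template: dict) -> list:
    """Return logical IDs of S3 buckets in a CloudFormation template
    that do not declare server-side encryption."""
    violations = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        if "BucketEncryption" not in resource.get("Properties", {}):
            violations.append(logical_id)
    return violations

# Contrived template: one compliant bucket, one non-compliant bucket.
template = {
    "Resources": {
        "GoodBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                    ]
                }
            },
        },
        "BadBucket": {"Type": "AWS::S3::Bucket", "Properties": {}},
    }
}

print(unencrypted_buckets(template))  # ['BadBucket']
```

Run at the build stage, a non-empty violation list would fail the pipeline before the template is ever deployed.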

Automated Application Security Controls – Enterprise-wide control frameworks often downplay the role of automated controls in application security because it is difficult to make broad recommendations that fit all the diverse applications organizations will create. Regardless, there are still some controls that apply to most cloud applications. For instance, we can use Policy as Code (PaC) techniques to ensure the data systems that applications use (such as S3 buckets) are encrypted at rest and that interactions with those data sources are encrypted in transit. We can also mandate rotation of credentials using services such as AWS Secrets Manager and help prevent the exposure of secrets in code using tools like git-secrets. For systems such as containers and EC2 instances, we can use a combination of tools like Amazon Inspector and static analysis tools like Clair. 
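As a simplified illustration of the secret-scanning idea (git-secrets itself is a shell tool with a more complete pattern set), a scan for likely hardcoded credentials might look like this in Python. The AWS access key ID format (AKIA plus 16 uppercase alphanumerics) is well known; the assignment pattern is a deliberately simplified stand-in:

```python
import re

# Patterns in the spirit of git-secrets; simplified for illustration.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(source: str):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

code = 'bucket = "app-data"\naws_key = "AKIAABCDEFGHIJKLMNOP"\n'
print(find_secrets(code))  # [(2, 'aws_key = "AKIAABCDEFGHIJKLMNOP"')]
```

A pre-commit hook or build step that fails when `find_secrets` returns any hits keeps credentials out of the repository before they ever reach the remote.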

Components

  1. Service Control Policies (SCP) – Used for Organization-wide Permission Control. Organization-level policies that act as guardrails to limit the actions users can take across all the accounts in the AWS Organization. 
  2. IAM Entities – Used for Fine-grained Least Privilege Control. These include roles, users, and groups. These should be continually audited and adjusted to ensure the principle of least privilege is being met. 
  3. IAM Policy Simulator – A tool provided by AWS that can be used to model the interactions of IAM entities with hypothetical resources. This tool is provided as both a web application and API. With the API we can write automated tests based on the results of running our policies against the Policy Simulator. 
  4. Unit Testing Framework – More complex interactions with IAM policies can be tested by creating a test infrastructure in an isolated environment and evaluating actions against those resources using a real principal that has assumed the policy for testing purposes. 
  5. CodePipeline, CodeCommit, CodeBuild, and CodeDeploy – Preventative Controls are integrated into a CI/CD pipeline. These tools provide instantaneous building, testing, and deployment of each preventative component. Application teams can instantiate their controls and integrate them into their existing workflows or build out new, purpose-built controls pipelines. For more information about pipelines, what they are, and their importance, please refer to our Pipeline Foundations Solution Spotlight.
  6. Open Policy Agent (OPA) – Even if extensive Policy as Code evaluation is not performed across all infrastructure code, basic invariants (such as encryption and access assertions) can be implemented easily by running OPA against data-layer (e.g. S3, RDS) IaC stacks. 
  7. Amazon Inspector – Automated security assessment service that evaluates EC2 systems and application security during acceptance testing. 
  8. Clair – Open-source project that enables static vulnerability scanning of application containers before they are deployed in an orchestration solution. 
  9. git-secrets – Open-source project from AWS Labs that prevents developers from committing sensitive information to a git repository. 
  10. AWS Secrets Manager – Secrets management service that ensures secure, auditable storage, retrieval, and rotation of sensitive parameters. 
  11. Control Broker – Chunks of IaC are evaluated during CI/CD pipeline executions by the Control Broker, which is maintained separately from the CI/CD pipelines and provides an independent, fast (usually sub-second) decision on whether a piece of IaC is compliant with the organization’s policies. CI/CD pipelines used by application teams call out (over the network) to the Control Broker to obtain this decision. When OPA is integrated into a CI/CD pipeline, it can provide the same functionality as a manual security review in much less time, with higher accuracy and much greater repeatability. 

How it works

The reference architecture below illustrates how Preventative Controls can be delivered via CI/CD pipelines. In this reference architecture, the security team owns the Controls Pipeline and creates and maintains the preventative controls that are integrated into the Application Pipeline.  

The primary purpose of the Controls Pipeline is to vend organization-wide SCPs and IAM entities into target accounts. Meanwhile, the Application Pipeline contains several Preventative Controls that evaluate its code at the build stage. Once the Application Pipeline is placed into the application team’s account, the infrastructure and application code from their source control repository is tested during the build stage, and if it passes the security and compliance tests, it is deployed into the account. 

This approach allows the security team to handle security controls in each application account so the application teams can focus on building and deploying their application as effectively as possible. 

Figure – 01

Blueprint

  • Controls Pipeline Stack is an AWS CDK stack that creates a pipeline that continuously configures Macie, GuardDuty, IAM Access Analyzer, Config, and a Control Broker. The blueprint configures each of these services with example controls. 
  • Application Pipeline Stack is an AWS CDK stack that creates an AWS CodePipeline pipeline and deploys a sample application. The pipeline’s built-in OPA policies evaluate the compliance of the application resources and the pipeline deploys the application if it is compliant. 
  • Baseline SCPs is a CloudFormation template for creating baseline organizational SCPs. 

Detective Controls 

Detective Controls continuously detect resources that are not compliant with the organization’s policies. These controls are particularly useful for detecting compliance violations among existing resources due to resource modifications or changes in the organization’s compliance policies. 

While preventative controls should cover the same set of threats as Detective Controls, it is still possible for preventative controls to be misconfigured (e.g. an overly permissive IAM policy or a bug in an OPA policy), which could lead to vulnerabilities in the environment. To handle existing vulnerabilities, detective controls continuously observe the environment for ordinary operational mistakes, application code that introduces vulnerabilities, and cyber-attacks. 

IAM Permission Change Detection – Establishing permissions via IAM entities is part of the Preventative step of implementing controls, but Detective Controls are needed to ensure these permissions do not change beyond acceptable boundaries once they are established. We recommend that organizations continually monitor access both externally and internally. For external access, we recommend leveraging IAM Access Analyzer to create reports about which entities outside the organization, or outside a specific account, can access the organization or account’s resources. We also encourage our clients to take a DevOps-inspired approach to monitoring permissions by creating solutions that continuously monitor the behavior of existing policies. Config’s custom rules can perform fine-grained, continuous checks on IAM roles to ensure that they are permitting access to only the resources they are expected to permit access to. 
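The following sketch shows what such a custom Config rule’s Lambda function could look like in Python. The allowed-principal policy is hypothetical, and for brevity `evaluate_role` assumes the role’s trust policy has already been decoded into a dict (Config delivers it URL-encoded):

```python
# Hypothetical policy: a monitored role may only be assumable by the
# Lambda service principal.
ALLOWED_PRINCIPALS = {"lambda.amazonaws.com"}

def evaluate_role(configuration_item: dict) -> str:
    """Return COMPLIANT/NON_COMPLIANT for an IAM role configuration item."""
    trust = configuration_item["configuration"]["assumeRolePolicyDocument"]
    for stmt in trust.get("Statement", []):
        service = stmt.get("Principal", {}).get("Service")
        principals = service if isinstance(service, list) else [service]
        if any(p not in ALLOWED_PRINCIPALS for p in principals):
            return "NON_COMPLIANT"
    return "COMPLIANT"

def lambda_handler(event, context):
    """Entry point invoked by AWS Config when the role changes."""
    import json
    import boto3  # imported lazily; available in the Lambda runtime
    item = json.loads(event["invokingEvent"])["configurationItem"]
    boto3.client("config").put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": evaluate_role(item),
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```

Keeping the decision logic in a pure function like `evaluate_role` makes the rule itself unit-testable in the Controls Pipeline, independent of Lambda and Config.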

Service Infrastructure Assessments – Detective Controls monitor changes to infrastructure once it has been deployed. A full solution for detecting non-compliance after deployment includes constant monitoring of individual systems, network, DNS hosted zones, AWS API, virtual instances, application, and data storage level changes. Different organizations will have different levels of granularity they wish to detect changes for, but we recommend organizations at least begin by deploying some sensible Config Conformance Packs across all accounts using AWS Organizations. For more complex requirements, we recommend the establishment of a Control Broker. 

Sensitive Data Classification and Security – While some components recommended so far cover basic data security, such as enforcing encryption using preventative (policy as code via OPA) and detective (Config rules) strategies, organizations storing and managing sensitive customer data need to know where that data is being stored, what kind of data it is, and whether it is accessible by the right parties. To this end, we recommend the installation of Amazon Macie with managed (and possibly custom) pattern matching to gain a view of which S3 data stores contain sensitive data ranging from plaintext AWS access credentials to driver’s license numbers. 

Threat Detection Baseline – While the Service Infrastructure Baseline covers detection of important resource changes, it cannot detect all types of activity an organization could classify as a security incident. GuardDuty is a service from AWS that requires little more than activation (which should be done at the Organization level if possible) to begin collecting data from the account and reporting on suspicious activity. GuardDuty currently focuses on EC2, S3, and IAM services. The types of threats GuardDuty can detect include suspected port scanning of EC2 instances, access to S3 buckets from known malicious IP addresses, and various types of anomalous behavior that may be normal under some circumstances but have been occurring in unusual contexts. Furthermore, Inspector can be installed on EC2 instances and configured to continually run network connectivity checks, detection of open ports, and more, to provide a consistent view of each system’s connectivity and allow the organization to act when it changes. 

Detective Controls for Applications 

Application security and compliance varies not only in different organizations, but also across applications in the same organization. Therefore, establishing Detective Controls for applications requires up-front analysis and planning to guarantee protection against the organization’s most significant threats to applications.  

While planning the application security strategy for an organization, it is best to begin with types of application vulnerabilities that are common across most organizations and the tools that can be easily leveraged. For instance, an issue that most organizations need to address is ensuring PCI and PII are not exposed. To solve this issue, a clear choice is to use Macie to monitor S3 storage to detect, and in some cases, remediate PCI and PII violations. 

Once the common application security domains have been addressed, organizations’ unique application needs must be met with custom solutions. For instance, one organization may maintain SSH hosts. In this case, SSH logs can be integrated with CloudWatch Logs to report on high numbers of SSH failures, populate CloudWatch Metrics, and trip CloudWatch Alarms. In another example, imagine an application depends on artifacts stored in an artifact repository hosted in AWS. A similar alerting strategy to the one used for SSH may be needed to detect and alert on changes to package versions to protect the software supply chain.
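As a sketch of the SSH example above, the following Python function wires up a CloudWatch Logs metric filter and alarm with boto3. All names, the log group path, and the threshold are hypothetical, and the function requires AWS credentials at call time:

```python
# Hypothetical metric definition for counting sshd "Failed password"
# lines shipped to CloudWatch Logs.
SSH_FAILURE_METRIC = {
    "metricName": "SshAuthFailures",
    "metricNamespace": "Security/SSH",
    "metricValue": "1",
}

def configure_ssh_failure_alarm(log_group="/var/log/secure", threshold=10):
    """Create a metric filter and alarm for repeated SSH auth failures."""
    import boto3  # imported lazily; requires AWS credentials at call time
    boto3.client("logs").put_metric_filter(
        logGroupName=log_group,
        filterName="ssh-auth-failures",
        filterPattern='"Failed password"',
        metricTransformations=[SSH_FAILURE_METRIC],
    )
    boto3.client("cloudwatch").put_metric_alarm(
        AlarmName="ssh-auth-failure-spike",
        Namespace=SSH_FAILURE_METRIC["metricNamespace"],
        MetricName=SSH_FAILURE_METRIC["metricName"],
        Statistic="Sum",
        Period=300,                  # 5-minute windows
        EvaluationPeriods=1,
        Threshold=threshold,         # alarm past N failures per window
        ComparisonOperator="GreaterThanThreshold",
    )
```

The same filter-plus-alarm pattern extends to the artifact-repository example: swap the log group and filter pattern for package-version change events.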

Components

  1. IAM Access Analyzer – Used for Access Change Detection. Continually evaluates IAM policies, including resource policies, to report effective access for external entities. Findings can be reviewed or integrated with EventBridge to create actionable alerts. 
  2. Custom Config Rules – Used for Access Change Detection. Custom Config Rules can be written to ensure effective permissions do not change when critical policies, users, groups, and roles change. For instance, we might verify that new principals cannot assume roles we have scoped to one expected principal, or that AWS entities cannot be added as principals to roles expected to be used only by services. 
  3. AWS Config Rules – Used for Baseline Rules. Rules that evaluate AWS resource changes to detect when best practices and/or compliance policies are no longer met. 
  4. AWS Config Conformance Packs – Used for Baseline Rules. Packages of AWS-managed Config Rules for common use cases. 
  5. Macie – Used for Data Handling. Continually evaluates sensitive data stored in S3 buckets. 
  6. GuardDuty – Used for malicious activity. Continually evaluates and alerts on malicious activity at the network, API, and storage levels. 
  7. Inspector – Used for Important System Configuration Changes. Continuously evaluates system (e.g. EC2) and application configuration and alerts on findings, which can detect important network resource and machine configuration changes. 

How it works

The reference architecture below depicts an example of how the baselines discussed throughout the Detective Controls section can coalesce to form a coherent solution.  

Figure – 02

Control Broker 

For advanced compliance assessment purposes, we have developed a solution called a Control Broker. The Control Broker is an architectural pattern used to build and maintain large libraries of complex controls that evaluate deployed resources whenever they change. It is hosted within an account in the organization’s Security Organizational Unit (OU) and is accessed any time that Open Policy Agent (OPA) is leveraged for preventative or detective controls. The Control Broker pattern’s flexibility allows it to be implemented for both AWS cloud native environments and other deployment environments (such as Kubernetes). 

The Control Broker aims to facilitate deployment and maintenance of complex compliance and security controls that are not covered by out-of-the-box solutions. It relies on a mapping between resource change events and control policies. The change events trigger their respective AWS Config Custom Rules, and each rule’s Lambda function calls the Control Broker to determine, based on policies written in OPA’s Rego language, whether the change event was compliant or non-compliant. 

By implementing our Control Broker to map AWS Config Custom Rules to an expansive OPA rule library, our approach allows for the creation of a rich library of controls in one standardized, purpose-built language. 

Components

  1. Open Policy Agent (OPA) – Used for Policy as Code (PaC) Engine. General-purpose policy engine used to evaluate compliance based on policies that are defined as code. The policy language used by OPA is called Rego, and it enables security teams to develop automated control policies that validate resource configuration before deploying to accounts. While any application with hierarchically structured data can leverage OPA, there is also an offering called OPA Gatekeeper that is made specifically for Kubernetes workloads. 
  2. API Gateway – The Control Broker’s “preventative mode” should be implemented as an independent service consumable via HTTP calls from the components (primarily CI/CD pipelines) that need it. This way, consumers can use the same application to evaluate their Infrastructure as Code that they rely on for detective controls through Config. 
  3. Config – The Control Broker operates in “detective mode” as a library of custom AWS Config rules maintained by the security team. Once the organization aggregates the Config events of each account into the centralized security account, the Control Broker will detect and evaluate resource changes for all accounts in the organization. 
  4. Lambda – The Control Broker can be implemented with serverless techniques to provide on-demand compute in a discrete microservice. This also makes it easy to integrate with Config and other event-driven services. The logic of the Control Broker is implemented in such a way that it can detect (through parameters in its API interface) whether the calling component is implementing Preventative or Detective controls. The Control Broker knows how to evaluate Infrastructure as Code (as in the case of checking Terraform modules or CloudFormation templates for compliance) or AWS Config events depending on the situation. 
  5. S3 – Open Policy Agent policies defining the organization’s security and compliance controls are stored in a centralized S3 location and retrieved as needed by the Control Broker. 

How it works

The Control Broker provides a single implementation point for both preventative and detective controls and can seamlessly handle both scenarios.  

To provide preventative controls, the consumers typically make HTTP API calls to have their Infrastructure as Code artifacts evaluated for compliance during the build stage of their CI/CD pipelines.  
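A pipeline-side consumer might look like the following Python sketch. The endpoint URL and the request/response shapes are hypothetical; the actual API contract is defined by the security team operating the Control Broker:

```python
import json
import urllib.request

# Hypothetical Control Broker endpoint for preventative-mode evaluations.
BROKER_URL = "https://control-broker.example.com/evaluate"

def build_evaluation_request(template: dict, policy_set: str) -> dict:
    """Package an IaC artifact for evaluation by the Control Broker."""
    return {"mode": "preventative", "policySet": policy_set, "input": template}

def evaluate(template: dict, policy_set: str = "baseline") -> bool:
    """POST the artifact to the broker; True means compliant."""
    body = json.dumps(build_evaluation_request(template, policy_set)).encode()
    req = urllib.request.Request(
        BROKER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["compliant"]
```

A build stage would call `evaluate` on each synthesized template and fail the pipeline on a False result, making the broker’s decision part of the critical path to deployment.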

Figure – 03

  • Analyze infrastructure code inside the developer’s IDE prior to commit using OPA policies and git hooks. Analyze all committed code using git-secrets to prevent committing credentials. 
  • Statically analyze AWS Permissions – Evaluate IAM policies using a suite of unit tests (using AWS Policy Simulator or custom tests) to ensure expected behavior. 
  • Statically analyze AWS Service Infrastructure – Evaluate CloudFormation, Terraform, Helm Charts, Kubernetes configuration files, and any other hierarchical Infrastructure as Code using OPA to check for common insecure configurations. 
  • Statically analyze Application Configuration – Use a tool like git-secrets to find likely hardcoded secrets in application code. 

For detective controls, the Control Broker acts as a library of custom Config rules that evaluate configuration changes of resources in the environment. 

Figure – 04

  1. Detective AWS Config rule is triggered by a relevant change to an AWS resource. 
  2. AWS Config executes the Control Broker Lambda function inside of the Security account, passing in the resource attributes and the name of the policy to evaluate the resource against. 
  3. Control Broker pulls the corresponding OPA policy from the S3 library, evaluates the change event data against it, and returns the result to AWS Config. 
  4. AWS Config determines whether the resource is compliant or non-compliant. 
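Steps 2 and 3 of the flow above could be sketched in Python as follows. The bucket name, key layout, and Rego entrypoint (`data.main.compliant`) are hypothetical, and the OPA CLI is invoked as a subprocess:

```python
import json
import subprocess
import tempfile

# Hypothetical central policy library location.
POLICY_BUCKET = "org-control-broker-policies"

def fetch_policy(policy_name: str) -> str:
    """Pull the Rego policy for this rule from the central S3 library."""
    import boto3  # imported lazily; requires AWS credentials at call time
    obj = boto3.client("s3").get_object(
        Bucket=POLICY_BUCKET, Key=f"policies/{policy_name}.rego"
    )
    return obj["Body"].read().decode()

def evaluate_with_opa(policy_source: str, change_event: dict) -> bool:
    """Run the OPA CLI against the Config change event; True = compliant."""
    with tempfile.NamedTemporaryFile("w", suffix=".rego") as pol, \
         tempfile.NamedTemporaryFile("w", suffix=".json") as inp:
        pol.write(policy_source); pol.flush()
        json.dump(change_event, inp); inp.flush()
        out = subprocess.run(
            ["opa", "eval", "-i", inp.name, "-d", pol.name,
             "--format", "json", "data.main.compliant"],
            capture_output=True, check=True, text=True,
        )
        result = json.loads(out.stdout)["result"]
        return bool(result and result[0]["expressions"][0]["value"])

def to_config_verdict(compliant: bool) -> str:
    """Map the broker's boolean decision to Config's compliance types."""
    return "COMPLIANT" if compliant else "NON_COMPLIANT"
```

The verdict string from `to_config_verdict` is what the rule’s Lambda function would report back via `PutEvaluations` in step 4.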

Blueprint

  • Control Broker is a reusable AWS CDK construct that creates a Control Broker for AWS Config. This is used by the Controls Pipeline here to create a basic S3 Bucket encryption requirement using the OPA policy here.
  • EKS OPA Gatekeeper Example is a barebones example implementation of OPA Gatekeeper with a custom policy to create a Control Broker on Kubernetes.

Benefits

  • Fully automated system to prevent, detect, and evaluate security and compliance issues 
  • Scalable and repeatable model across many AWS Accounts and application teams 

End Result 

The result of implementing the baselines outlined in the Control Foundations solution is a strong base of security controls inside a framework designed to scale to meet future security and compliance needs. Due to the modularized, self-service approach, the security team can continuously add, adjust, and deploy controls. These continuous adjustments and seamless deployments allow application and infrastructure teams across the organization to spend less time worrying about security controls and more time focusing on delivering high-quality work. 

Interested in learning more?

If you are looking to provide automation, consistency, predictability, and visibility to your software release process, contact us today.

Posted October 20, 2021 by Eddie Peters and Sean Howley

Posted in: Solution Spotlights



About the Authors

Eddie Peters is a Senior Cloud Consultant at Vertical Relevance. He has led teams in Application Modernization, DevOps, and DevSecOps projects to automate application deployment and testing, troubleshoot performance issues in distributed systems, and automate security controls using industry best practices for Global Financial Services firms. Eddie currently holds the AWS Certified Solutions Architect – Associate certification.
 
Sean Howley is an Associate Cloud Consultant at Vertical Relevance. He has experience in AWS Security and building out DevSecOps solutions for enterprise customers. Sean is an AWS Certified Solutions Architect.


About Solution Spotlights

The Solution Spotlight series aims to deep dive into the technical components of Vertical Relevance’s Solutions. These solutions cover the prescriptive guidance that we provide our financial services customers for effectively building and deploying applications on AWS. Whether clients are just starting out with their cloud journey or looking to improve efficiency with the cloud, the Solution Spotlight series will provide insights based on the best practices we’ve developed through a combination of 20+ years of Financial Services business experience and 10+ years of AWS experience.



About Vertical Relevance

Vertical Relevance was founded to help business leaders drive value through the design and delivery of effective transformation programs across people, processes, and systems. Our mission is to help Financial Services firms at any stage of their journey to develop solutions for success and growth.
