AWS Security Pillar: A Well-Architected Cloud Environment
In today’s operating environment, security is critical: businesses must be protected from both accidental and malicious threats, and those risks can come from any direction at any moment. Like other cloud providers, Amazon Web Services (AWS) operates a shared responsibility model for security. The model spells out which responsibilities lie with the provider and which lie with the customer, so each party can take the appropriate steps to keep the environment secure.
Security applied to your cloud workload
Security must always be tailored to each individual business. To choose business-appropriate security controls, threat modeling and a risk assessment must be done, and the results may require you to revise your security decisions, especially within the cloud. Automated tools should be used to continuously scan your machine images, applications, APIs, and any other part of your infrastructure as code (IaC). Threat modeling and risk assessment should be repeated regularly to ensure you stay up to date with the current threat landscape.
Cloud account management for architects
Unfortunate lessons have been learned by companies like Code Spaces, whose business was devastated by cybercriminals who compromised the company’s corporate AWS account. To reduce the risk of this happening to your business, it is recommended that access to the AWS root account be protected with multi-factor authentication (MFA). In addition, separate accounts should be established for production, development, and testing, so that if one account is compromised, the others remain secure.
AWS Organizations should be used to centrally manage all the AWS accounts within a corporation. When using AWS Organizations, it is critical that every setting is chosen appropriately; in particular, all features should be enabled so that every account can be controlled properly.
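As a quick illustration, here is a minimal sketch, assuming boto3 credentials with permission to call organizations:DescribeOrganization, of how you might verify that your organization is running with all features enabled rather than consolidated billing only.

```python
# Minimal sketch: check whether AWS Organizations has all features enabled.
import boto3

def organization_uses_all_features() -> bool:
    org = boto3.client("organizations")
    description = org.describe_organization()["Organization"]
    # FeatureSet is either "ALL" or "CONSOLIDATED_BILLING"; only "ALL" allows
    # service control policies and the other management features.
    return description["FeatureSet"] == "ALL"

if __name__ == "__main__":
    print("All features enabled:", organization_uses_all_features())
```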
Want to eliminate manual checks for adherence to AWS Organizations’ best practices? Have your AWS cloud infrastructure scanned for adherence to 750+ cloud guidelines by signing up for our free trial.
Security in the cloud
AWS defines seven design principles for security in the cloud. Focusing on these design principles will help you build well-architected environments:
- Implement a strong identity foundation: Ensure that core security principles are followed, such as the principle of least privilege and the principle of separation of duties. Controlling access between applications, users, devices, and resources is exceedingly difficult, so it is crucial to have solid policies and processes that define centralized identity management and authentication methods other than static credentials such as passwords. You must also be diligent in watching over configurations (or have automation in place to do this for you) to ensure, for example, that Amazon Simple Storage Service (Amazon S3) buckets haven’t granted ‘FULL_CONTROL’ to everyone (see the sketch after this list).
- Enable traceability: When it comes to security incidents, the most important thing is knowing that an incident has occurred. Log collection and analysis, as well as metric tracking, are essential here. For example, enabling user activity logging on something like Amazon Redshift is a step in the right direction.
- Apply security at all layers: Defense in depth has been a security staple forever, as it helps to slow down attacks by detecting and preventing them earlier. To do this properly, security needs to be applied to virtual machines, operating systems, applications, virtual private clouds (VPCs), and more.
- Automate security best practices: It is human nature to make mistakes, but when it comes to security, every point that requires human interaction increases the chance of a security breach. When possible, it is always best to automate security controls to lessen the odds of an error occurring and enable more rapid expansion while controlling costs.
- Protect data in transit and at rest: Someone is always listening, watching, and waiting for data to be left in the clear. Why make it easy for cybercriminals? Always protect data with encryption, in transit and at rest.
- Keep people away from customer data: The more access granted, the more likely an account will be compromised. With so many data regulations, direct access to customer data needs to be tightly controlled.
- Prepare for security events: It will happen. There will most likely be a breach or compromise at some point, so have teams and procedures in place to respond to those incidents. From detection to recovery, the more automated the tools are, the more effective the teams that rely on them will be. A good starting point is to ensure you have the right subscribers to Amazon Simple Notification Service (Amazon SNS) topics, so that the right people get the messages and the wrong ones do not.
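To make the S3 example above concrete, here is a minimal sketch (the bucket name is a placeholder) that flags a bucket whose access control list grants FULL_CONTROL to the public AllUsers or AuthenticatedUsers groups.

```python
# Minimal sketch: detect buckets whose ACL grants FULL_CONTROL to everyone.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_grants_public_full_control(bucket_name: str) -> bool:
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    for grant in acl["Grants"]:
        grantee_uri = grant["Grantee"].get("URI", "")
        if grant["Permission"] == "FULL_CONTROL" and grantee_uri in PUBLIC_GRANTEES:
            return True
    return False

if __name__ == "__main__":
    print(bucket_grants_public_full_control("example-bucket"))  # hypothetical bucket name
```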
IAM is crucial to protecting your business, as it allows organizations to control who can and cannot access data and accounts. If someone or something cannot access our data, they should not be able to alter or steal it; however, IAM is not the only tool we need.
When controlling access, we need to control both humans and machines. When it comes to access control for humans, we are talking about users, administrators, developers, and customers. Machines are usually less obvious; AWS places virtual machines, APIs, applications, servers, routers, and switches into this category.
According to AWS, here are five critical things that we should be doing with our identities.
1. Use strong sign-in mechanisms. Sign-ins pose risks when lacking additional security measures like multi-factor authentication (MFA), especially if credentials are compromised or easily guessed. Mitigate these risks by implementing robust sign-in mechanisms that enforce MFA and strong password policies.
2. Use temporary credentials. Prefer the use of temporary credentials over long-term credentials for authentication to minimize the risk of inadvertent disclosure, sharing, or theft of credentials.
3. Store and use secrets securely. To authenticate with databases, resources, and third-party services, a workload relies on secret access credentials like API access keys, passwords, and OAuth tokens. Employing a specialized service to securely store, manage, and regularly update these credentials minimizes the risk of credential compromise; a short sketch combining this with temporary credentials follows the list.
4. Rely on a centralized identity provider. Streamline workforce identity management by utilizing a centralized identity provider. This enables efficient access management across various applications and services from a single location, simplifying the process of creating, managing, and revoking access. Integration with HR processes allows seamless revocation of access for all applications and services, minimizing the reliance on multiple credentials.
5. Audit and rotate credentials periodically. Regularly audit and rotate credentials to restrict the duration for which they can be used to access resources. Prolonged use of credentials poses significant risks, which can be mitigated by implementing a routine credential rotation practice.
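The sketch below illustrates points 2 and 3 together under assumed placeholders (the role ARN and secret name are hypothetical): it exchanges the caller’s identity for short-lived STS credentials instead of long-term keys, then retrieves a secret from AWS Secrets Manager at runtime rather than hard-coding it.

```python
# Minimal sketch: temporary credentials via STS, secret retrieval via Secrets Manager.
import boto3

def get_db_password(role_arn: str, secret_id: str) -> str:
    # Exchange the caller's identity for temporary credentials that expire automatically.
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=role_arn, RoleSessionName="workload-session")["Credentials"]

    # Use the temporary credentials to fetch the secret at runtime.
    secrets = boto3.client(
        "secretsmanager",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return secrets.get_secret_value(SecretId=secret_id)["SecretString"]
```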
Protecting the AWS accounts you are managing access to is also critical. This starts with defining guardrails for the organization: service control policies (SCPs) can be configured to prevent, for example, the deletion of common resources. It is critical to ensure you have the right configurations within AWS Organizations. You can find many IAM best practices in the Trend Micro Knowledge Base, such as ensuring that access keys are rotated, that multi-factor authentication (MFA) is enabled for the AWS root account, and that AWS IAM roles cannot be used by untrusted accounts via the cross-account access feature.
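As a sketch of what such a guardrail can look like (the policy name, the protected actions, and the organizational unit ID are illustrative, not a prescribed baseline), the following creates an SCP that denies disabling CloudTrail and attaches it to an organizational unit.

```python
# Minimal sketch: create and attach a service control policy as an organizational guardrail.
import json
import boto3

org = boto3.client("organizations")

guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["cloudtrail:DeleteTrail", "cloudtrail:StopLogging"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-cloudtrail-tampering",  # hypothetical policy name
    Description="Prevent member accounts from disabling audit logging",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(guardrail),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-root-example",  # hypothetical organizational unit ID
)
```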
As previously mentioned, it is critical to know when an incident occurs. Without this information, it is impossible to respond, fix, or correct. If detection takes too long and your response does not mitigate damage early on, you may face higher fines for violating regulations such as GDPR or HIPAA. So, how can this be prevented? With proper configuration and investigation.
Having the proper configuration for your systems to log and alert your network operations center (NOC) or security operations center (SOC) is the first step. AWS offers a variety of tools to build a comprehensive and automated detection environment (a short example of querying them follows the list). These include:
- AWS CloudTrail—creates a record of all account activity.
- AWS Config—provides you with a detailed inventory of your AWS resources and their current configurations. It also allows for auto-remediation if configurations are changed inappropriately.
- Amazon GuardDuty—Think of this as your guard dog. It will monitor your cloud, looking for malicious activity and unauthorized behavior.
- AWS Security Hub—A tool that gathers, organizes, and prioritizes notifications, alerts, and findings from both AWS and third-party products.
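As a minimal sketch of how these services can feed detection (the severity threshold and console output are illustrative), the following checks that CloudTrail covers every region and pulls high-severity GuardDuty findings so they can be forwarded to the SOC.

```python
# Minimal sketch: verify multi-region CloudTrail and list high-severity GuardDuty findings.
import boto3

cloudtrail = boto3.client("cloudtrail")
guardduty = boto3.client("guardduty")

# At least one trail should record account activity in all regions.
trails = cloudtrail.describe_trails()["trailList"]
print("Multi-region CloudTrail enabled:", any(t.get("IsMultiRegionTrail") for t in trails))

# Collect GuardDuty findings with severity 7.0 or higher.
for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
    )["FindingIds"]
    if not finding_ids:
        continue
    # GetFindings accepts at most 50 IDs per call; this sketch simply caps the list.
    for finding in guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids[:50]
    )["Findings"]:
        print(finding["Type"], finding["Severity"])
```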
The challenge is to ensure those products are properly configured. Trend Micro ingests the data from these services and products (along with 90 other AWS cloud services and resources) and automatically checks for misconfigurations from the Trend Micro Knowledge Base.
Interested in knowing how well-architected you are? See your own security posture in 15 minutes or less.
It is critical to have the ability to investigate and respond to incidents. When an incident is detected, there should be a playbook of processes for investigation. This will enable teams to respond effectively to an incident. However, this is just the first part. The next step is having an automated response configured for certain events.
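One way to wire up such an automated response, sketched under assumed placeholders (the rule name and SNS topic ARN are hypothetical, and the topic’s policy must allow EventBridge to publish), is an Amazon EventBridge rule that forwards every new GuardDuty finding to the incident response team via Amazon SNS.

```python
# Minimal sketch: route GuardDuty findings to an SNS topic through EventBridge.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="guardduty-findings-to-soc",  # hypothetical rule name
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-findings-to-soc",
    Targets=[{
        "Id": "notify-incident-response",
        "Arn": "arn:aws:sns:us-east-1:111122223333:incident-response",  # hypothetical topic ARN
    }],
)
```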
Infrastructure protection is broken down by AWS into network and compute protection mechanisms. Network protection involves traditional tools such as firewalls and access control lists. Compute protection involves tactics such as code analysis and patching.
Network protection mechanisms start with the traditional security concept of defense in depth; a single protection mechanism in front of a resource is not sufficient. As we construct our networks, it is necessary to ensure that logging and alerts are enabled so responses can be initiated immediately. AWS offers Amazon Virtual Private Cloud (Amazon VPC), which provides network segmentation and control: you can create a virtual network where you specify and configure your IPv4 or IPv6 addresses and decide whether or not it is accessible from the public internet, among other things. It is critical that you get everything set up correctly within Amazon VPC to protect your resources. Conformity has several rules that you can use to manually check your own Amazon VPC configuration, or if you start a trial, your entire environment will be automatically scanned for misconfigurations.
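As one example of the logging piece, the following minimal sketch lists VPCs that do not have flow logs enabled (flow logs may also be defined at the subnet or interface level, which this simple check ignores).

```python
# Minimal sketch: report VPCs without VPC Flow Logs enabled.
import boto3

ec2 = boto3.client("ec2")

vpc_ids = [vpc["VpcId"] for vpc in ec2.describe_vpcs()["Vpcs"]]
logged_resources = {fl["ResourceId"] for fl in ec2.describe_flow_logs()["FlowLogs"]}

for vpc_id in vpc_ids:
    if vpc_id not in logged_resources:
        print(f"VPC {vpc_id} has no flow logs enabled")
```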
Compute protection is about the edge computing resources. To start, you’ll want tools for code analysis, as clean code is critical to ensuring there aren’t open doors that cybercriminals or their malicious software (malware) can get through. Once applications, software, operating systems, and the like are deployed, updates or patches need to be applied as flaws or bugs are revealed. Additional compute protection practices include the following:
- Harden the system. The Center for Internet Security (CIS) and National Institute of Standards and Technology (NIST) provide useful documentation on configurations that are product specific.
- Reduce unused components (applications, software modules, OS packages).
- Automate administrative tasks, using services like AWS Lambda, Amazon Relational Database Service (Amazon RDS), and Amazon Elastic Container Service (Amazon ECS); see the Lambda sketch after this list.
- Validate software integrity by using code signing. Signatures and checksums establish the source and integrity of software.
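As an illustration of automating an administrative task with AWS Lambda (the “Patch-Group” tag requirement is a hypothetical policy, not an AWS default), this handler stops running EC2 instances that are not enrolled in automated patching.

```python
# Minimal sketch of a Lambda handler: stop running instances missing a patch-enrollment tag.
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    missing_tag = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if "Patch-Group" not in tags:  # hypothetical tag convention
                    missing_tag.append(instance["InstanceId"])
    if missing_tag:
        ec2.stop_instances(InstanceIds=missing_tag)
    return {"stopped": missing_tag}
```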
Protecting data at rest and in transit, using methods such as encryption and classification, is essential. Data classification is key to understanding what data you possess and what needs to be done to protect it appropriately. Data and resources can be tagged so that systems recognize what kind of resource each one is, and service control policies (SCPs) can then control access using attribute-based access control.
Cryptography can be used to protect data at rest and in transit. Drives, folders, and buckets can be encrypted while resting on AWS servers. As always, it is critical that configurations are done correctly. For example, you must ensure that encryption is enabled for Amazon Athena query results, especially since there are multiple ways to configure it, such as server-side encryption (SSE) or client-side encryption (CSE).
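As a minimal sketch of the Athena case (the results bucket and KMS key ARN are placeholders), the following runs a query with server-side KMS encryption enforced on its results.

```python
# Minimal sketch: enforce SSE-KMS encryption on Athena query results.
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT 1",
    ResultConfiguration={
        "OutputLocation": "s3://example-athena-results/",  # hypothetical results bucket
        "EncryptionConfiguration": {
            "EncryptionOption": "SSE_KMS",
            "KmsKey": "arn:aws:kms:us-east-1:111122223333:key/example",  # hypothetical key ARN
        },
    },
)
```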
When encrypting data in transit, there are many different configuration options as well. It is not as simple as ensuring that Transport Layer Security (TLS) is enabled. Controlling certificates is critical here, and there are many things to manage with AWS Certificate Manager (ACM).
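One small piece of that certificate management, sketched here with an illustrative 30-day threshold, is reporting ACM certificates that are close to expiring so they can be renewed before TLS connections start failing.

```python
# Minimal sketch: list ACM certificates expiring within the next 30 days.
from datetime import datetime, timedelta, timezone

import boto3

acm = boto3.client("acm")
cutoff = datetime.now(timezone.utc) + timedelta(days=30)

for summary in acm.list_certificates()["CertificateSummaryList"]:
    cert = acm.describe_certificate(CertificateArn=summary["CertificateArn"])["Certificate"]
    not_after = cert.get("NotAfter")  # absent for certificates still pending validation
    if not_after and not_after < cutoff:
        print(f"{cert['DomainName']} expires on {not_after:%Y-%m-%d}")
```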
Key management is essential to cryptography for data in transit and at rest. Done properly, it supports compliance with PCI DSS, GDPR, and other regulations. One option to consider is a hardware security module (HSM) or tokenization.
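A small, hedged example of routine key hygiene with AWS Key Management Service (pagination is omitted for brevity): enable automatic annual rotation on customer-managed keys that do not already have it, skipping keys where rotation is not supported.

```python
# Minimal sketch: turn on automatic rotation for customer-managed KMS keys.
import boto3
from botocore.exceptions import ClientError

kms = boto3.client("kms")

for key in kms.list_keys()["Keys"]:
    key_id = key["KeyId"]
    try:
        if not kms.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"]:
            kms.enable_key_rotation(KeyId=key_id)
            print(f"Enabled rotation for {key_id}")
    except ClientError:
        # AWS-managed, asymmetric, or imported-material keys may not support rotation.
        continue
```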
When an adverse event or incident occurs, correct, efficient, and effective responses are essential.
- Educate—Education for our incident response teams and security operations staff is crucial. If they do not understand the cloud, your services, or available information, they will not be able to respond effectively.
- Prepare—Having plans and procedures is critical for effective responses. The teams must understand those plans and know what tools are available to be able to respond.
- Simulate—The saying “practice makes perfect” holds true here. While responses will never be perfect, practice will help to continually improve.
- Iterate—Deconstruct the simulations and build automated responses. This allows incident response to start immediately as an incident occurs, rather than waiting for humans to intervene.
Application security, also known as AppSec, is a comprehensive process that includes the design, construction, and testing of secure workloads. An effective AppSec program starts with well-trained personnel, an understanding of the security properties of your build and release infrastructure, and the use of automation to identify security issues. Incorporating application security testing into your software development lifecycle (SDLC) and post-release processes can help you identify, rectify, and prevent application security issues from entering the production environment.
In your application development methodology, it is crucial to integrate security controls throughout the entire lifecycle. This includes design, construction, deployment, and operation of workloads. Aligning the process with continuous defect reduction and minimizing technical debt is important. For example, utilizing threat modeling during the design phase enables the early identification of design flaws. This makes it easier and more cost-effective to address compared to later stages. Resolving defects early in the SDLC generally incurs lower costs and complexity. By prioritizing a threat model, you can focus on the right outcomes from the design phase and prevent issues from arising altogether.
As your AppSec program evolves, you can increase the automation-driven testing, enhance feedback to builders, and expedite security reviews. These actions collectively improve the quality of the software and accelerate feature delivery in production.
Implementation guidelines are centered around four key areas:
- Organization and culture
- Security of the pipeline
- Security in the pipeline
- Dependency management
Each area provides a set of principles that can be implemented to achieve an end-to-end perspective in designing, developing, building, deploying, and operating workloads.
Within the AWS ecosystem, various approaches can be employed to address your application security program. Some approaches focus on technological solutions, while others emphasize the human and organizational aspects of AppSec. A balanced combination of both can help establish a robust application security framework. As more security breaches make the news and data protection becomes a key focus, meeting this pillar’s standard should always be top of mind. Trend can help you stay compliant with the Well-Architected Framework with its 750+ best practice checks. Learn more by reading the other articles in the series: 1) Overview of all 5 pillars, 2) Operational excellence, 3) Performance efficiency, 4) Reliability, 5) Cost optimization, 6) Sustainability.