Well-Architected Framework: Sustainability
Designing and implementing your business workloads on a public cloud platform such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) can be a challenge. Though the marketing pitch makes it sound straightforward to deploy and migrate workloads to the cloud, doing so requires a thorough understanding of several factors: how the cloud environment is configured, the platform’s architecture, which services are available, and how you, as a cloud consumer, can work with them.
Over the years, cloud vendors have added support, documentation, and reference materials to make cloud migration simpler. One example of these reference materials is the AWS Well-Architected Framework (WAF), which helps cloud solution architects, CTOs, developers, and operations teams understand best practices for architecting scalable AWS applications. Following the WAF helps ensure your AWS applications can handle business-critical workloads.
From a high level, the WAF touches on six pillars:
- Operational excellence
- Security
- Reliability
- Performance efficiency
- Cost optimization
- Sustainability
Here, we focus on the sustainability pillar. Explore the rest of the Well-Architected Framework series if you’re interested in learning more about the other WAF pillars.
AWS sustainability pillar
This pillar focuses on the environmental impact of cloud applications, including their overall effect on the economy and their potential impact on society. Customers are encouraged to choose more sustainable applications that have fewer negative consequences for future generations.
In keeping with the flow and structure of the existing Trend Micro Guide to the Well-Architected Framework, the sections below highlight the six design principles of cloud sustainability.
- Measure impact: You can’t predict the future effect of your cloud workloads without measuring where they are today. To get an accurate picture, you need to examine your different types of cloud infrastructure and decide what can be decommissioned over time to minimize environmental impact.
For example, even servers that are energy efficient today have a short shelf life and may be considered inefficient within three to four years. Consider creating a plan to migrate to more energy-economical hardware as it becomes available.
Measuring your application’s resource consumption also helps you spot areas to improve; the code sketch below shows one way to gather such a baseline. For example, you may have used a lift-and-shift strategy to move an enterprise application to the cloud before realizing that the average CPU utilization is only 20%. This low utilization rate might make the app a prime candidate for migration to a mix of managed services and serverless functions, so your application only runs when and where you need it.
Additionally, consider data storage and transfer. Collecting information you don’t need to keep hurts sustainability due to the additional power required to store this superfluous data. You should also avoid sending more data between apps and APIs than necessary. The impact of millions of applications sending a few additional bytes across the internet billions of times daily adds up.
After gathering metrics and evaluating possible outcomes, you should set achievable key performance indicators (KPIs) to help you continuously optimize productivity while reducing environmental impact.
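As a rough illustration of this measurement step, the sketch below uses boto3, the AWS SDK for Python, to pull each running EC2 instance’s average CPU utilization from CloudWatch over the past two weeks and flag anything below a chosen threshold as a right-sizing candidate. The 14-day window and the 20% cutoff are illustrative assumptions, not prescriptions from the framework.

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

LOOKBACK = timedelta(days=14)    # assumed measurement window
UTILIZATION_THRESHOLD = 20.0     # assumed "underutilized" cutoff, in percent

end = datetime.now(timezone.utc)
start = end - LOOKBACK

# Walk every running instance and fetch its average CPU utilization.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,          # one-hour datapoints
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        average = sum(p["Average"] for p in datapoints) / len(datapoints)
        flag = " <- candidate for right-sizing" if average < UTILIZATION_THRESHOLD else ""
        print(f"{instance_id}: {average:.1f}% average CPU{flag}")
```

Baselines gathered this way can feed directly into the KPIs described above.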
- Establish long-term goals: Once you’ve started measuring your data, it becomes easier to set long-term goals. Sustainability is not a short-term fix, as it requires contributions from organizations across the globe.
Ten years ago, a comparable data center required a more robust cooling system, more electricity, and more physical racks than it does today. Compute efficiency is now much higher, allowing you to fit more computing power into the same amount of space or downsize to a smaller data center that requires less cooling and electricity.
This enables you to set long-term goals for lower resource consumption, even as your computing needs increase. You can start by estimating your mid- and long-term compute needs and predicting how they will impact sustainability. According to the journal Nature, the best strategy for organizations is to move to hyperscale data centers run by large cloud providers, which are more efficient and sustainable.
- Maximize utilization: After establishing long-term goals for your data center footprint, look for quick wins to maximize utilization. Cloud monitoring and automation can identify resources that are still running but unused and needlessly consuming energy. Analyze your resource use, downsize virtual machines (VMs) where you can, and shut down any machines you aren’t using.
Technologies like hypervisors and containers can extend this benefit further by helping you maximize the utilization rate of your VMs and server hardware. Migrating from heavy VMs to more efficient and optimized containerized workloads and microservices often results in 85 to 90% compute efficiency for the underlying server, storage, and networking components. This migration enables you to maximize your resource utilization and lighten your footprint.
As a concrete example, a service like Amazon CloudWatch can help you visualize and analyze your cloud workloads, giving you a thorough understanding of how fully you are using your current infrastructure while helping you identify areas for improvement.
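One small, hedged example of turning that analysis into action: the sketch below stops any running EC2 instance that carries an assumed auto-stop=true tag, the kind of job you might schedule to run outside business hours for development and test environments. The tag name and scheduling approach are illustrative conventions, not AWS defaults.

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances explicitly tagged as safe to stop when idle.
# The "auto-stop" tag is an assumed convention, not an AWS-defined tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "instance-state-name", "Values": ["running"]},
        {"Name": "tag:auto-stop", "Values": ["true"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stop (not terminate) so the instances can be restarted when needed.
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} idle instance(s): {', '.join(instance_ids)}")
else:
    print("No tagged idle instances found.")
```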
- Continuously look for efficiency optimization: Technology is changing rapidly, and this trend will only continue. From a design and architectural perspective, you should avoid designing an architecture that will remain static for years.
Instead, follow the core concepts of the WAF, which can allow you to build more flexible solutions. Considering each of its pillars when designing your solution will enable you to build more reliable, efficient, and resilient cloud applications.
For example, when you make changes to address the security pillar, you will likely find an increase in efficiency. This is because you’ll be forced to re-evaluate older, less-efficient parts of your application architecture continually.
Look for emerging trends and how they might help you build more efficient applications. While many development teams resisted the move to containers, those who embraced containerization have found they can run more workloads on fewer servers, leading to more sustainable applications and reduced costs.
Look for service providers who embrace sustainable innovation. This includes wind-powered and ocean-cooled data centers, some of which even use waste heat from servers to warm local homes.
- Use shared managed services: Moving your workloads to large-scale managed cloud services can result in major steps toward sustainability. Cloud providers typically achieve economies of scale that most organizations can’t match.
Shared cloud services like serverless functions help cloud providers accomplish more computing on fewer machines, which means fewer servers are sitting idle and consuming electricity.
Further, your customers and partners are likely running their services in the cloud as well. When they connect to your cloud resources from within the same data centers, you’ll notice faster, more stable connectivity to your workloads, consuming fewer network resources and reducing the need to duplicate data.
Finally, consider the benefits realized by using other shared services, like serverless databases, that scale down to zero when not used. Unless your company is the size of a cloud provider, it’s unlikely you can build a serverless database in a cost-efficient manner. But in the cloud, you can sign up for a shared, managed database service like Amazon Aurora Serverless or Microsoft Azure SQL Database and realize significant efficiency gains.
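To make the shared-services idea concrete, here is a minimal sketch of a Lambda-style handler that queries an Aurora Serverless cluster through the RDS Data API, so there are no long-lived connections or always-on database instances to keep warm. The environment variable names, the customers table, and the query itself are placeholders for illustration.

```python
import os

import boto3

rds_data = boto3.client("rds-data")

# Placeholder identifiers -- supply your own cluster ARN, Secrets Manager
# secret ARN, and database name via environment variables.
CLUSTER_ARN = os.environ["AURORA_CLUSTER_ARN"]
SECRET_ARN = os.environ["AURORA_SECRET_ARN"]
DATABASE = os.environ.get("AURORA_DATABASE", "appdb")


def handler(event, context):
    """Look up a single customer; the function runs only when invoked,
    against a serverless database that can scale down when idle."""
    result = rds_data.execute_statement(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database=DATABASE,
        sql="SELECT id, name FROM customers WHERE id = :customer_id",
        parameters=[
            {"name": "customer_id", "value": {"longValue": int(event["customer_id"])}}
        ],
    )
    # Each record comes back as a list of typed fields.
    return [
        {"id": row[0]["longValue"], "name": row[1]["stringValue"]}
        for row in result["records"]
    ]
```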
- Work towards energy reduction: While maximizing server utilization and eliminating unused resources is an excellent start, it’s not necessarily enough. Consider taking a holistic approach to reducing energy consumption and environmental impact.
Consider what happens when a popular mobile app updates to a new version that is incompatible with older devices. Many users feel compelled to upgrade to a new device. Now, imagine the energy impact of all those new devices, from mining, transporting, and refining the raw materials to manufacturing the finished hardware.
Upgrades that add new features but degrade performance should be considered through a holistic lens. Your applications can have a sustainability impact beyond the electricity that powers your servers. Use engineering solutions like device farms to run tests and better understand the expected or actual impact of changes you plan to make, as sketched below.
While natural energy sources like wind, water, and solar can lower technology’s ecological and energy impact, the most effective method for saving energy is to avoid expending it.
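As a loose illustration of the device-farm idea, the sketch below lists the devices available in AWS Device Farm and highlights older OS versions, the hardware an update might otherwise push users to replace. The version cutoffs are arbitrary assumptions, and actually scheduling test runs would additionally require a Device Farm project, an uploaded build, and a device pool of your own.

```python
import boto3

# Device Farm's API lives in us-west-2.
devicefarm = boto3.client("devicefarm", region_name="us-west-2")

# Assumed, arbitrary cutoffs for what counts as "older hardware" worth
# keeping in your test matrix before shipping a heavier update.
OLD_ANDROID_MAJOR = 10
OLD_IOS_MAJOR = 14

# First page only; paginate with nextToken for the full device list.
devices = devicefarm.list_devices()["devices"]
for device in devices:
    platform = device.get("platform")      # "ANDROID" or "IOS"
    os_version = device.get("os", "")
    try:
        major = int(os_version.split(".")[0])
    except ValueError:
        continue
    cutoff = OLD_ANDROID_MAJOR if platform == "ANDROID" else OLD_IOS_MAJOR
    if major < cutoff:
        print(f"{device.get('name')} ({platform} {os_version}): older hardware worth testing")
```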
Conclusion
AWS published the WAF guidelines and principles to help customers architect cloud solutions with reliability, security, performance, cost, and scalability in mind. The framework was recently extended with the sustainability pillar, recognizing the importance of limiting energy consumption. This new pillar emphasizes energy efficiency and the environmentally conscious management of AWS data centers, and it asks customers to consider their cloud designs’ mid- and longer-term energy efficiency.
In adding the sustainability pillar, the WAF acknowledges that software does not exist in isolation but instead impacts the world around it. By migrating and deploying workloads to the cloud thoughtfully, organizations can help make the world more sustainable: more energy-efficient, more environmentally friendly, and a viable home for generations to come.