
AWS Disaster Recovery Cost Control


Question

A startup IT company just migrated its most important web application from on-premises to AWS.

There is a strong need to design a disaster recovery system in AWS as soon as possible; otherwise, any outage could seriously damage the company's reputation and cash flow.

However, the company is running short of budget and has to control its operating costs.

Which AWS tools can help the company control costs while designing a disaster recovery system? (Select TWO.)

A. Create more Spot Instances in the hot standby system.
B. Use a suitable Auto Scaling group to control the number of running instances.
C. Use Trusted Advisor to find EC2 instances with a low utilization rate in the standby system and terminate them to save costs.
D. Use an S3 lifecycle policy to move data from S3 to Glacier.
E. Use Amazon Data Lifecycle Manager to delete old EBS snapshots.

Answers

Explanations

Correct Answer - B, E.

While choosing services to lower cost, it is also necessary to make sure that the system can still recover normally and meet the required RPO and RTO.

Option A is incorrect: Spot Instances can be reclaimed by AWS at short notice, so they are unsuitable for a production environment, whether in the live system or the standby system.

Option B is CORRECT: An Auto Scaling group adjusts the number of running instances to match demand, which saves cost.

Option C is incorrect: The EC2 instances in the standby system should not be terminated even if their utilization rate is low. In a hot standby design, the standby instances must keep running so they are ready for failover.

Option D is incorrect: Moving all data to Glacier is unsafe because retrievals from Glacier can take hours, which causes problems if the backup is needed under a short RTO.

Option E is CORRECT: Amazon Data Lifecycle Manager manages EBS snapshots efficiently. Old snapshots can be deleted automatically to save storage costs.

Refer to the section "Automating the Amazon EBS snapshot lifecycle" on page 1055 of https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-ug.pdf#AmazonEBS.

When designing a disaster recovery system in AWS, it is important to balance the cost of the system against the level of protection it provides. Here are the two AWS tools that help control costs while designing a disaster recovery system:

  1. Use a suitable Auto Scaling group to control the number of running instances: Auto Scaling helps minimize costs while keeping the application highly available. By creating an Auto Scaling group (ASG) with appropriate scaling policies, the company can automatically adjust the number of instances running in the disaster recovery environment to match actual demand, paying only for the resources it needs during normal operation while retaining the ability to scale out quickly in the event of a disaster (see the first sketch after this list). An ASG can also mix in Spot Instances, which cost much less than On-Demand Instances, but, as noted for Option A, Spot capacity can be reclaimed at any time, so it should not carry the critical standby workload.

  2. Use Amazon Data Lifecycle Manager to delete old EBS snapshots: Elastic Block Store (EBS) snapshots are a common way to back up data in AWS, but retaining snapshots for extended periods incurs ongoing storage costs. Amazon Data Lifecycle Manager (DLM) can automate the deletion of old snapshots: by setting up a DLM policy, the company removes snapshots that are no longer required and thereby reduces storage costs. The policy can be based on criteria such as the age or number of snapshots, or the tags attached to them (see the second sketch after this list).
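To make the first point concrete, here is a minimal boto3 sketch of an ASG with a target-tracking scaling policy. It assumes AWS credentials are already configured; the region, group name, launch template, and subnet IDs are hypothetical placeholders, not values from the question.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

# Create an Auto Scaling group for the DR environment. A small MinSize keeps
# the warm-standby footprint (and bill) low; MaxSize leaves headroom to
# scale out after a failover.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="dr-standby-asg",                # hypothetical name
    LaunchTemplate={
        "LaunchTemplateName": "dr-web-app-template",      # hypothetical template
        "Version": "$Latest",
    },
    MinSize=1,
    MaxSize=10,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
)

# Target-tracking policy: hold average CPU near 50%, so extra instances are
# launched only when real traffic (e.g. after a failover) drives load up,
# and are terminated again when demand falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="dr-standby-asg",
    PolicyName="dr-cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```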
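And here is a minimal boto3 sketch of the DLM policy from the second point, creating daily snapshots and automatically deleting all but the newest seven. The account ID and tag key/value are hypothetical; the execution role must allow DLM to create and delete snapshots (AWSDataLifecycleManagerDefaultRole is the default role DLM can create for this purpose).

```python
import boto3

dlm = boto3.client("dlm", region_name="us-east-1")  # assumed region

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, keep only the last 7",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        # Only volumes carrying this (hypothetical) tag are snapshotted.
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "daily-0300-utc",
                "CreateRule": {
                    "Interval": 24,
                    "IntervalUnit": "HOURS",
                    "Times": ["03:00"],
                },
                # RetainRule is the cost control: snapshots beyond the
                # newest 7 are deleted automatically.
                "RetainRule": {"Count": 7},
                "CopyTags": True,
            }
        ],
    },
)
```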

Option A, creating more Spot Instances in the hot standby system, may look like a way to save cost, but Spot capacity can be interrupted by AWS at any time, which compromises the availability and durability of the disaster recovery system.

Option C, using Trusted Advisor to monitor EC2 instances with a low utilization rate in the standby system and terminate them to save costs, would hurt the availability of the disaster recovery system: a hot standby needs all of its instances running so that it is ready for failover.

Option D, using an S3 lifecycle policy to move data from S3 to Glacier, can save costs on storage, but it may break the system's RTO (Recovery Time Objective), since retrieving data from Glacier takes far longer than reading it from S3.
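For reference, an Option D rule would look roughly like the boto3 sketch below; the bucket name and prefix are hypothetical. The storage saving is real, but objects in GLACIER must be restored (a process that typically takes hours with standard retrieval) before they can be read, which is exactly the RTO risk described above.

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: 30 days after creation, objects under the (hypothetical)
# "backups/" prefix move from S3 Standard to the GLACIER storage class.
s3.put_bucket_lifecycle_configuration(
    Bucket="dr-backup-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```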

In summary, using an Auto Scaling group and Amazon Data Lifecycle Manager can help the company control costs while designing a disaster recovery system.