Your organization plans to migrate a high-performance data analytics application, running on Linux servers purchased from a third-party vendor, to AWS.
Currently, the application sits behind an on-premises load balancer, and all data is stored in a very large shared file system for low latency and high throughput.
Management wants minimal disruption to the existing service and a stepwise migration that allows easy rollback.
Select three valid options from the choices below.
Correct Answer: C, D, E.
Options C, D, and E are correct because extending the network via VPN or Direct Connect allows the on-premises instances to use AWS resources such as EFS.
EFS is an elastic file system that can be mounted on EC2 instances and, over the network link, on on-premises servers.
It is inherently durable and scalable.
By default, EFS stores data redundantly across multiple Availability Zones.
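To make the mounting step concrete, here is a small sketch that builds the NFSv4.1 mount command typically used to attach an EFS file system on a Linux host. The file system ID, region, and mount point are hypothetical placeholders, not values from the question.

```python
# Build the mount command for attaching an EFS file system over NFSv4.1.
# All identifiers below are illustrative placeholders.
def efs_mount_command(fs_id: str, region: str, mount_point: str = "/mnt/efs") -> str:
    """Return a typical EFS mount command for a Linux instance."""
    # EFS file systems are reachable via a regional DNS name.
    dns_name = f"{fs_id}.efs.{region}.amazonaws.com"
    # Commonly recommended NFS options for EFS: large read/write sizes,
    # hard mounts, and conservative timeout/retry settings.
    options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    return f"sudo mount -t nfs4 -o {options} {dns_name}:/ {mount_point}"

print(efs_mount_command("fs-0123456789abcdef0", "us-east-1"))
```

The same command works from on-premises servers once VPN or Direct Connect connectivity is in place, which is what makes the shared-storage part of this migration possible.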
With a Route 53 weighted routing policy, requests can be distributed between on-premises and AWS resources in a controlled manner.
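As a sketch of how the weighted split might look, the snippet below builds the change batch that would be passed to Route 53 (for example via boto3's `route53.change_resource_record_sets()`). The record name, load balancer DNS names, and the 90/10 weights are placeholder assumptions for illustration.

```python
# Sketch: a Route 53 change batch with two weighted records, sending 90%
# of traffic to the on-premises load balancer and 10% to the AWS one.
# Names, targets, and weights are hypothetical.
def weighted_record(name: str, set_id: str, target: str, weight: int) -> dict:
    """Build one weighted CNAME record set."""
    return {
        "Name": name,
        "Type": "CNAME",
        "SetIdentifier": set_id,   # distinguishes records sharing the same name
        "Weight": weight,          # relative share of traffic (0-255)
        "TTL": 60,                 # low TTL so weight changes take effect quickly
        "ResourceRecords": [{"Value": target}],
    }

change_batch = {
    "Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": weighted_record(
             "app.example.com", "on-prem", "lb.on-prem.example.com", 90)},
        {"Action": "UPSERT",
         "ResourceRecordSet": weighted_record(
             "app.example.com", "aws", "my-alb-123.us-east-1.elb.amazonaws.com", 10)},
    ]
}
```

Shifting weight gradually toward the AWS record (and back, if needed) is what gives the stepwise migration and easy rollback the question asks for.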
Option A is INCORRECT. Although S3 would work as shared, durable storage, it is not a suitable choice for low-latency, high-throughput file processing.
Because the application cannot be easily modified, presenting S3 as a local file system would be an additional task requiring AWS Storage Gateway (File Gateway).
Option B is INCORRECT because the goal is a shared file system solution (EFS).
RAID 1 on EBS is unnecessary, as the application needs its data from the shared file system rather than from local storage.
Option A: Storing all the data on S3 as shared storage and using an Application Load Balancer with EC2 instances to share the processing load is technically possible. However, it is not the best fit for this scenario: S3 is not a file system, and accessing data directly from S3 can result in high latency. It may also not be cost-effective to store and repeatedly access large amounts of data this way.
Option B: Creating RAID 1 storage using EBS and running the application on EC2 behind an Application Load Balancer is technically possible. EBS provides durable block-level storage, RAID 1 offers data redundancy, and an ELB/ALB can distribute traffic across EC2 instances. The drawback is that EBS volumes are attached to individual instances and cannot serve as the shared file system the application relies on today.
Option C: Using VPN or Direct Connect to create a link between the company premises and an AWS Region is a valid option. It establishes a secure, private connection between the on-premises data center and the AWS infrastructure, enabling the organization to migrate data and applications to the cloud while minimizing disruption to existing services.
Option D: Creating an EFS file system with provisioned throughput and sharing it between on-premises instances and EC2 instances is a valid option. EFS provides a managed, elastic file system that is accessible from multiple EC2 instances and scales automatically as storage needs grow. By using provisioned throughput, the organization can ensure consistent low latency and high throughput for the application. Note that EFS exposes an NFS interface; if the application required SMB instead, Amazon FSx for Windows File Server would be the appropriate service.
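For illustration, the dictionary below sketches the parameters for creating such a file system (as would be passed to boto3's `efs.create_file_system()`). The 512 MiB/s throughput figure is an assumed placeholder; the right value depends on the workload.

```python
# Sketch: parameters for an EFS file system with provisioned throughput.
# The throughput value is a hypothetical example, not from the question.
params = {
    "PerformanceMode": "generalPurpose",       # lowest per-operation latency
    "ThroughputMode": "provisioned",           # fixed throughput, independent of stored data
    "ProvisionedThroughputInMibps": 512,       # placeholder; size to the workload
    "Encrypted": True,                         # encrypt data at rest
}
```

Provisioned mode decouples throughput from the amount of data stored, which matters for a high-throughput analytics workload that may not yet have much data in EFS early in the migration.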
Option E: Setting up a Route 53 record with the weighted routing policy to distribute load between the on-premises and AWS load balancers is a valid option. Route 53 is a highly available and scalable DNS service that can route traffic to multiple resources based on different routing policies. The weighted routing policy splits traffic between the on-premises load balancer and the AWS load balancer according to configured percentages. Because the weights can be adjusted gradually and reverted quickly, this directly supports the requirement for a stepwise migration with easy rollback.
In conclusion, the most suitable options for the given scenario are C, D, and E, as together they provide secure connectivity, low-latency and high-throughput shared storage, and controlled traffic shifting while minimizing disruption to existing services.