Your organization is planning to migrate a high-performance data analytics application, running on Linux servers purchased from a third-party vendor, to AWS.
Currently, the application sits behind an on-premises load balancer, and all data is stored on a very large shared file system for low latency and high throughput.
Management wants minimal disruption to the existing service and a stepwise migration that allows easy rollback.
Select 3 valid options from below.
A. Save all the data on S3 and use it as shared storage. Use an application load balancer with EC2 instances to share the processing load.
B. Create a RAID 1 storage using EBS and run the application on EC2 with application-level load balancers to share the processing load.
C. Use VPN or Direct Connect to create a link between your company premises and the AWS regional data center.
D. Create an EFS with provisioned throughput and share the storage between your on-premises instances and EC2 instances.
E. Set up a Route 53 record to distribute the load between the on-premises and AWS load balancers with the weighted routing policy.

Correct Answer: C, D, E.
Options C, D, and E are correct because a network extension via VPN or Direct Connect allows the on-premises instances to use AWS resources such as EFS.
EFS is an elastic, shared file system that can be mounted on EC2 instances as well as on-premises servers.
It is inherently durable and scalable: by default, EFS stores data redundantly across multiple Availability Zones.
With a Route 53 weighted routing policy, requests can be distributed between the on-premises and AWS resources in a controlled manner, which supports a stepwise migration and easy rollback.
Option A is INCORRECT. Although S3 provides shared, durable storage, it is not a suitable choice for low-latency, high-throughput load processing.
Because the application cannot be easily modified, presenting S3 as a local file system would be an additional task, requiring something like an AWS Storage Gateway (File Gateway).
Option B is INCORRECT because the requirement is a shared file system solution (EFS).
RAID 1 on EBS is unnecessary: the application needs its data from the shared file system rather than from local, per-instance storage.
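As a sketch of how the shared EFS file system would be attached, the commands below show an NFSv4.1 mount with the options AWS recommends for EFS. The file-system DNS name and mount point are placeholders; on-premises hosts must use a mount target's private IP over the VPN or Direct Connect link, since EFS DNS names do not resolve outside the VPC by default. The privileged commands are echoed rather than executed here, because they require root and network reachability to the mount target.

```shell
# Placeholders for this sketch: a real file-system DNS name (or, for
# on-premises hosts, a mount target's private IP) goes in EFS_TARGET.
EFS_TARGET="fs-12345678.efs.us-east-1.amazonaws.com"
MOUNT_POINT="/mnt/efs"
# NFS options recommended for EFS: NFSv4.1, 1 MiB read/write sizes,
# hard mount with retries so transient network blips do not corrupt I/O.
NFS_OPTS="nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"

# Commands an administrator would run on each instance:
echo "sudo mkdir -p ${MOUNT_POINT}"
echo "sudo mount -t nfs4 -o ${NFS_OPTS} ${EFS_TARGET}:/ ${MOUNT_POINT}"
```

Because every EC2 instance and on-premises server mounts the same file system, they all see one consistent copy of the data during the migration.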
The organization wants to migrate a high-performance data analytics application from an on-premises environment to AWS while minimizing disruption and ensuring easy rollback. Here is a closer look at each option:
Option A: Save all the data on S3 and use it as shared storage. Use an application load balancer with EC2 instances to share the processing load.
This option stores all the data in Amazon S3, accessed by EC2 instances behind an Application Load Balancer. While S3 offers durability and availability, it is object storage, not a POSIX file system: the existing application expects a mounted shared file system with low-latency, high-throughput access. Presenting S3 as a local file system would require additional components such as a File Gateway, which conflicts with the goal of a minimal-disruption migration, so this option is not valid.
Option B: Create a RAID 1 storage using EBS and run the application on EC2 with application-level load balancers to share the processing load.
This option creates RAID 1 storage from Elastic Block Store (EBS) volumes attached to EC2 instances, load balanced at the application level. RAID 1 mirrors data across volumes attached to a single instance, which improves durability for that instance but does not create shared storage: each EC2 instance would hold its own copy of the data, with no common file system across the fleet or back to the on-premises servers. Since the scenario requires a large shared file system, this option is not valid.
Option C: Use VPN or Direct Connect to create a link between your company premises and the AWS regional data center.
This option uses a VPN or Direct Connect to create a secure, high-speed connection between the organization's on-premises environment and the AWS Region. It allows data and application components to be migrated in stages while maintaining network and data security, provides low-latency access to data stored in AWS, and is the prerequisite for sharing EFS storage and splitting traffic between the two sites. This option is CORRECT.
Options D and E are also valid for this scenario:
Option D: Create an EFS with provisioned throughput and share the storage between your on-premise instances and EC2 instances.
This option uses Amazon Elastic File System (EFS) to create a shared file system that both EC2 instances and, via the VPN or Direct Connect link, on-premises instances can mount simultaneously. Provisioned Throughput mode lets the organization set the throughput the analytics workload needs independently of the amount of data stored, so the shared file system stays suitable for high-throughput processing during and after the migration. This option is CORRECT.
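As an illustrative sketch, the parameters for creating such a file system with boto3 might look like the following. The throughput figure and tag value are assumptions for illustration, not values from the scenario, and the actual API call is shown commented out because it requires AWS credentials:

```python
# Sketch of creating an EFS file system in Provisioned Throughput mode.
# The throughput value and tag below are hypothetical; size them to the workload.
params = {
    "PerformanceMode": "generalPurpose",
    "ThroughputMode": "provisioned",
    "ProvisionedThroughputInMibps": 512.0,  # assumed figure for a heavy analytics workload
    "Encrypted": True,
    "Tags": [{"Key": "Name", "Value": "analytics-shared-fs"}],  # hypothetical tag
}

# With credentials configured, the call would be:
# import boto3
# efs = boto3.client("efs")
# response = efs.create_file_system(**params)

print(params["ThroughputMode"])
```

Decoupling throughput from stored capacity is the point of Provisioned Throughput mode here: the analytics workload gets predictable bandwidth regardless of how much data has been migrated so far.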
Option E: Set up a Route 53 record to distribute the load between the on-premises and AWS load balancers with the weighted routing policy.
This option uses Route 53 to split traffic between the on-premises load balancer and the AWS load balancer according to assigned weights. The weights can be shifted gradually (for example 90/10, then 50/50, then 0/100), which delivers exactly the stepwise migration management asked for; rolling back is as simple as shifting the weights back toward on-premises. This option is CORRECT.
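A minimal sketch of the corresponding weighted record sets follows, assuming a hypothetical domain, hosted zone ID, and endpoint hostnames (none of these appear in the scenario). The boto3 call is commented out because it needs real credentials and a real hosted zone:

```python
# Sketch of a Route 53 change batch with two weighted records sharing one
# name: 90% of traffic to the on-premises load balancer, 10% to AWS.
# The domain, zone ID, and endpoint hostnames are hypothetical.
def weighted_record(identifier: str, target: str, weight: int) -> dict:
    """Build one weighted CNAME record set for the shared name."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "analytics.example.com",
            "Type": "CNAME",
            "SetIdentifier": identifier,  # distinguishes records sharing one name
            "Weight": weight,             # relative share of traffic
            "TTL": 60,                    # short TTL so weight changes take effect quickly
            "ResourceRecords": [{"Value": target}],
        },
    }

change_batch = {
    "Changes": [
        weighted_record("on-prem", "lb.corp.example.com", 90),
        weighted_record("aws", "my-alb-123.us-east-1.elb.amazonaws.com", 10),
    ]
}

# With credentials configured, the call would be:
# import boto3
# r53 = boto53 = boto3.client("route53")
# r53.change_resource_record_sets(
#     HostedZoneId="Z0HYPOTHETICAL", ChangeBatch=change_batch
# )

print(sum(c["ResourceRecordSet"]["Weight"] for c in change_batch["Changes"]))
```

Shifting traffic is then just re-running the same UPSERT with new weights, which is what makes this approach both stepwise and easily reversible.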