AWS Certified SysOps Administrator - Associate (SOA-C02) Exam Question: Distributing Web Application Traffic across EC2 Instances

Question

Your company is planning to host a web application on a set of EC2 instances.

Based on the initial response, it has been decided to add a service that distributes traffic across the set of EC2 instances hosting the application.

The main requirement is that the service must be able to scale to a million requests per second.

Which of the following would you implement for this requirement?

Answers

Explanations



Answer - B.

The AWS Documentation mentions the following.

Using a Network Load Balancer instead of a Classic Load Balancer has the following benefits:

· Ability to handle volatile workloads and scale to millions of requests per second.

· Support for static IP addresses for the load balancer. You can also assign one Elastic IP address per subnet enabled for the load balancer.

· Support for registering targets by IP address, including targets outside the VPC for the load balancer.
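As a sketch, the benefits listed above (one static Elastic IP per subnet, and registration of targets by IP address) map directly onto AWS CLI flags. The subnet, VPC, allocation, IP, and name values below are hypothetical placeholders, not values from the question.

```shell
# Allocate one Elastic IP per subnet the NLB will use (IDs are placeholders).
aws ec2 allocate-address --domain vpc

# Create the Network Load Balancer, pinning a static Elastic IP to each subnet.
aws elbv2 create-load-balancer \
  --name web-nlb \
  --type network \
  --subnet-mappings SubnetId=subnet-0aaa,AllocationId=eipalloc-0abc \
                    SubnetId=subnet-0bbb,AllocationId=eipalloc-0def

# Target group that registers targets by IP address; with target-type ip,
# targets can even sit outside the load balancer's VPC.
aws elbv2 create-target-group \
  --name web-targets \
  --protocol TCP --port 80 \
  --vpc-id vpc-0ccc \
  --target-type ip

# Register instance private IPs (placeholders) as targets.
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=10.0.1.15 Id=10.0.2.20
```

These commands are a provisioning fragment and require valid AWS credentials and real resource IDs to run.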

Option A is incorrect since the question makes no mention of Docker containers.

Option C is incorrect since you need a Network Load Balancer for higher request throughput.

Option D is incorrect since there is no mention of the need to distribute messages between components of the application.

For more information on the Network Load Balancer, please refer to the URL below:

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html

The best option to fulfill the requirement of distributing traffic across a set of EC2 instances while scaling to a million requests per second is a Network Load Balancer (NLB).

A Network Load Balancer (NLB) is an AWS service that can be used to distribute incoming traffic across multiple targets such as Amazon EC2 instances, containers, and IP addresses, in one or more Availability Zones. It provides high throughput, ultra-low latency, and supports millions of requests per second.

Compared to the Classic Load Balancer (CLB), NLB offers higher throughput, lower latency, and the ability to handle volatile traffic patterns, making it a better choice for high-performance applications such as gaming, media streaming, and machine learning. Unlike CLB, NLB operates at the network layer (Layer 4), so it handles TCP and UDP traffic, and it can either pass encrypted traffic straight through to the targets or terminate TLS connections at the listener. NLB supports both Internet-facing and internal applications.
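The Layer-4 behavior described above can be illustrated with two listener configurations; this is a hedged sketch, and the ARNs and certificate below are hypothetical placeholders.

```shell
# Layer-4 TCP listener: traffic on port 443 is forwarded unmodified, so the
# EC2 targets perform the TLS handshake themselves (pass-through).
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol TCP --port 443 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# Alternatively, terminate TLS on the NLB itself with a TLS listener and an
# ACM certificate (ARNs are placeholders).
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol TLS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```

Pass-through keeps encryption end-to-end to the instances, while listener-side termination offloads TLS work from the targets; either way the NLB stays at Layer 4 and never inspects HTTP.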

Amazon Elastic Container Service (ECS) is a container management service that makes it easy to run, stop, and manage Docker containers on a cluster. While ECS can be used to manage and scale containerized applications, it is not a service for distributing traffic across EC2 instances.

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables decoupling and scaling of microservices, distributed systems, and serverless applications. While SQS can be used to store and manage messages between distributed application components, it is not designed to distribute traffic across EC2 instances.

In summary, for the given requirement of distributing traffic across a set of EC2 instances hosting a web application and scaling to a million requests per second, the best option is to use a Network Load Balancer.