Provisioning Models for AWS Batch Jobs with AWS Fargate


Question

You are an AWS Solutions Architect.

A company runs a large number of batch processing workloads in an on-premises data center and plans to migrate these jobs to AWS Batch.

The company needs your help to decide which provisioning model to choose for its AWS Batch jobs.

In which of the following situations would you use AWS Fargate in AWS Batch? (Select TWO)

Answers

A. When a batch job needs access to particular instance configurations, such as specific processors or GPUs.

B. When a batch job needs to start quickly at the initial scale-out of work.

C. When a batch job needs a compute environment based on a custom AMI and the A1 instance type.

D. When users do not want to provision or scale clusters of virtual machines to run containers for AWS Batch jobs.

E. When a batch job needs the instance type to be C5 with multiple vCPUs.

Explanations

Correct Answers: B and D.

Option A is incorrect. An EC2 compute environment is more suitable here because it allows users to choose specific instance configurations, such as particular processors or GPUs, which Fargate does not support.

Option B is CORRECT. With Fargate, jobs can start faster at the initial scale-out of work because users do not need to wait for EC2 instances to launch.

Option C is incorrect. An EC2 compute environment is more suitable because users need to customize the AMI and the instance type, neither of which is possible with Fargate.

Option D is CORRECT. With Fargate, AWS manages the compute used for AWS Batch jobs, so users do not need to provision, scale, or maintain servers or clusters of EC2 instances.

Option E is incorrect. With Fargate, users cannot select instance types or set a minimum number of vCPUs; each job simply declares the vCPU and memory it needs (see the job definition sketch below).
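To make the last point concrete, here is a minimal boto3 sketch of a Fargate job definition and job submission. It only illustrates that Fargate jobs are sized by vCPU and memory rather than by instance type; the job definition name, container image, execution role ARN, and the Fargate-backed job queue name are placeholders, not values from the question.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Fargate job definitions size each job by vCPU and memory (MiB);
# there is no field for an instance type or a minimum vCPU count.
batch.register_job_definition(
    jobDefinitionName="fargate-batch-job",          # placeholder name
    type="container",
    platformCapabilities=["FARGATE"],
    containerProperties={
        "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
        "command": ["echo", "hello from Fargate"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "0.5"},       # per-job vCPU, not an instance choice
            {"type": "MEMORY", "value": "1024"},    # per-job memory in MiB
        ],
        # Fargate tasks need an execution role to pull images and write logs.
        "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
        "networkConfiguration": {"assignPublicIp": "ENABLED"},
    },
)

# Submitting the job: AWS Batch launches the container on Fargate capacity.
batch.submit_job(
    jobName="example-fargate-job",                  # placeholder name
    jobQueue="fargate-queue",                       # placeholder Fargate-backed queue
    jobDefinition="fargate-batch-job",
)
```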

References:

https://docs.aws.amazon.com/batch/latest/userguide/fargate.html

https://aws.amazon.com/batch/faqs/

AWS Batch is a fully managed service that enables you to run batch computing workloads on the AWS Cloud. AWS Batch provisions and optimizes the infrastructure required to execute batch computing workloads. It also enables you to use compute resources efficiently and automatically scale resources as your workload demands change.
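As an illustration of how little infrastructure a Fargate compute environment exposes, the boto3 sketch below creates a managed Fargate compute environment and a job queue that routes jobs to it. The names, subnet ID, and security group ID are placeholders, and the sketch assumes the AWS Batch service-linked role already exists in the account.

```python
import time

import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Managed compute environment backed by Fargate: no instance types,
# AMIs, or Auto Scaling groups to configure -- only a vCPU ceiling
# and the VPC networking the containers should use.
batch.create_compute_environment(
    computeEnvironmentName="fargate-ce",               # placeholder name
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "FARGATE",
        "maxvCpus": 64,                                # cap across all running jobs
        "subnets": ["subnet-0123456789abcdef0"],       # placeholder subnet ID
        "securityGroupIds": ["sg-0123456789abcdef0"],  # placeholder security group
    },
)

# Wait until the compute environment is VALID before attaching a queue.
while True:
    ce = batch.describe_compute_environments(
        computeEnvironments=["fargate-ce"]
    )["computeEnvironments"][0]
    if ce["status"] == "VALID":
        break
    time.sleep(5)

# Job queue that routes submitted jobs to the Fargate compute environment.
batch.create_job_queue(
    jobQueueName="fargate-queue",                      # placeholder name
    state="ENABLED",
    priority=1,
    computeEnvironmentOrder=[
        {"order": 1, "computeEnvironment": "fargate-ce"},
    ],
)
```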

When deciding which provisioning model to choose for AWS Batch jobs, you need to consider various factors such as job requirements, workload size, and resource utilization. The following are the two situations where you would use AWS Fargate in AWS Batch:

B. When a batch job needs to start quickly at the initial scale-out of work: With Fargate, AWS Batch runs each job in its own right-sized container, so there is no wait for EC2 instances to boot and register with the compute environment. The first jobs of a scale-out therefore typically start sooner than they would on a freshly provisioned EC2 compute environment.

D. When users do not want to provision or scale clusters of virtual machines to run containers for AWS Batch jobs: Fargate is a serverless compute engine, so AWS Batch launches containers on AWS-managed capacity. Users declare only the vCPU and memory each job needs; there are no instances, Auto Scaling groups, or container clusters to size, patch, or scale.

The other options describe requirements that Fargate cannot meet, so an EC2 compute environment should be used instead (a sketch of such an environment follows this list):

A. When a batch job needs access to particular instance configurations, including processors and GPUs: Fargate does not support GPUs or the selection of specific processor types in AWS Batch. Jobs that need specialized hardware must run on an EC2 or EC2 Spot compute environment, where you can choose the instance families that provide it.

C. When a batch job needs a compute environment based on a custom AMI and the A1 instance type: Fargate supports neither custom AMIs nor instance type selection, so both requirements call for an EC2 compute environment.

E. When a batch job needs the instance type to be C5 with multiple vCPUs: Fargate does not expose instance types at all; you request vCPU and memory per job. Only an EC2 compute environment lets you pin the instance type to C5.
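For contrast, here is a hedged sketch of what options A, C, and E actually require: an EC2-based compute environment where you can pin instance types and supply a custom AMI. The AMI ID, role ARNs, subnet, security group, and names below are placeholders for illustration only.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# EC2 compute environments expose the knobs Fargate hides: instance
# types, minimum vCPUs, and a custom AMI for the underlying instances.
batch.create_compute_environment(
    computeEnvironmentName="ec2-ce",                   # placeholder name
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 256,
        "instanceTypes": ["c5.4xlarge", "p3.2xlarge"],  # specific CPU/GPU instance families
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",  # placeholder
        "subnets": ["subnet-0123456789abcdef0"],        # placeholder subnet ID
        "securityGroupIds": ["sg-0123456789abcdef0"],   # placeholder security group
        # Custom AMI for the instances (not possible with Fargate).
        "ec2Configuration": [
            {"imageType": "ECS_AL2", "imageIdOverride": "ami-0123456789abcdef0"}  # placeholder AMI
        ],
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",  # placeholder
)
```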