Designing and Implementing a Data Science Solution on Azure: Selecting Compute Targets for Azure ML Services and Tools

Compute Targets for Azure ML Services and Tools

Question

When working with Azure ML services and tools, you have several options for selecting the execution environment (compute target) for your experiments.

Some compute targets are appropriate for development and testing at low cost, while others, being high-performance and scalable engines, can easily generate unexpected bills if not used properly.

Which two of the following recommendations should you consider when selecting compute targets?

Answers

Explanations



Answers: B and D.

Option A is incorrect because ACI is suitable only for small models (<1 GB in size) and a limited number of them, and per the recommendation, the RAM requirement should stay under 48 GB. For high-scale, real-time inferencing, AKS should be used instead.

Option B is CORRECT because ACI is suitable for small models (<1 GB in size) and a limited number of them, with a RAM requirement under 48 GB, which matches low-scale testing scenarios.

Option C is incorrect because local compute is recommended only for low-cost debugging tasks; it is not recommended for training scenarios, which typically require high compute capacity with autoscaling.

Option D is CORRECT because AKS is the compute target specifically designed for heavy production workloads.

With its sophisticated containerized runtime infrastructure, AKS provides fast response times and scalability for the deployed services.

Option E is incorrect because AKS is the compute target specifically designed for heavy, high-scale production workloads.

Due to its very high capacity and cost, it is recommended for PROD targets only.

Reference:

When selecting compute targets in Azure ML services and tools, it is essential to consider the cost, scalability, and performance requirements of the project. Here are two recommendations to keep in mind:

B. Use Azure Container Instances for low-scale, testing scenarios requiring <48 GB RAM: Azure Container Instances (ACI) is a good option for low-scale testing scenarios that require less than 48 GB of RAM. ACI is a serverless solution that allows you to run containers without managing the underlying infrastructure. ACI is ideal for testing scenarios as it is cost-effective and you pay only for the seconds of usage. Also, ACI allows you to quickly deploy and test containerized applications, making it a good choice for iterative development and testing.

D. Use Azure Kubernetes Service for high-scale production workloads: Azure Kubernetes Service (AKS) is the compute target designed for heavy, high-scale production inferencing. Its containerized runtime infrastructure provides fast response times and autoscaling for deployed services. Because of its high capacity and cost, AKS is recommended for production targets only; for low-cost debugging or iterative development, local compute or ACI is the better fit.

Therefore, options A, C, and E are not the best recommendations. Options A and E misapply high-performance, scalable engines that can quickly generate unexpected bills when used outside their intended scenarios, while option C suggests local compute for training tasks, even though it is recommended only for low-cost debugging. Azure Kubernetes Service (AKS) is suitable for high-scale, real-time inferencing in production environments, and Azure Container Instances (ACI) for low-scale testing scenarios. It is essential to carefully evaluate the requirements of the project and select the compute target that best meets those requirements.
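As an illustration, the two recommended targets might be configured with the v1 `azureml-core` Python SDK roughly as follows. This is a hedged sketch, not a complete deployment: the workspace, model, and inference configuration are omitted, the sizing values are arbitrary, and the v2 SDK/CLI uses a different API.

```python
from azureml.core.webservice import AciWebservice, AksWebservice

# Low-scale dev/test: a small ACI container (model < 1 GB, RAM well under 48 GB).
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=4)

# High-scale production inferencing: AKS with autoscaling enabled.
aks_config = AksWebservice.deploy_configuration(
    autoscale_enabled=True,
    autoscale_min_replicas=1,
    autoscale_max_replicas=10,
)

# An actual deployment additionally needs a workspace, a registered model,
# and an inference configuration, e.g. (names here are placeholders):
# service = Model.deploy(ws, "my-service", [model], inference_config, aks_config)
```

The key contrast is that the ACI configuration only sizes a single container, while the AKS configuration sets replica bounds for autoscaling, which is what makes it suitable for heavy production workloads.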