You work for a manufacturing firm that is attempting to build a rechargeable battery with a capacity several times greater than that of the rechargeable batteries currently on the market.
As the machine learning specialist on the team responsible for building a model that predicts the chemical component interactions that maximize battery capacity, you have decided that none of the built-in algorithms available in SageMaker fit your problem as well as you would like.
So you and your team have decided to create your own SageMaker algorithm resource.
You'll use this custom algorithm to train your model and run inference with it. Which of the following steps do you NOT need to complete to create your custom algorithm for use in SageMaker?
A. Create Docker containers for your training and inference code.
B. Specify the hyperparameters that your algorithm supports.
C. Specify the metrics that your algorithm sends to CloudWatch when training.
D. Specify the instance types your algorithm supports for training and inference.
E. Specify whether your algorithm supports distributed inference across multiple instances.

Answer: E
Option A is incorrect.
SageMaker uses Docker containers both to train your custom algorithm and to host it for inference, so you do need to create them.
Option B is incorrect.
When you create a custom algorithm resource in SageMaker, you need to specify the hyperparameters your algorithm will support.
Option C is incorrect.
When you create a custom algorithm resource in SageMaker, you need to specify the metrics that your algorithm will send to CloudWatch when running your training jobs.
Option D is incorrect.
When you create a custom algorithm resource in SageMaker, you need to specify the EC2 instance types your algorithm supports for training and inference.
Option E is correct.
When you create a custom algorithm resource in SageMaker, you specify whether your algorithm supports distributed training across multiple instances; there is no equivalent setting for distributed inference, so this is the step you do not need to complete.
Reference:
See the Amazon SageMaker Developer Guide topics Use Your Own Algorithms or Models with Amazon SageMaker and Create an Algorithm Resource.
To create a custom algorithm for use in SageMaker, you and your team need to complete several steps, but one of the options listed is not among them.
Here are the options and their explanations:
A. Create Docker containers for your training and inference code.
Creating Docker containers for your training and inference code is necessary when building a custom algorithm for use in SageMaker. Docker containers allow for consistent and repeatable deployment of the training and inference code across different environments.
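As a rough illustration of what goes inside the training container, the sketch below follows SageMaker's documented container contract: the image is invoked with the train argument, hyperparameters and data channels are mounted under /opt/ml, and anything written to /opt/ml/model becomes the model artifact. The battery-chemistry training logic itself is a placeholder, and the "training" channel name is an assumption.

```python
#!/usr/bin/env python3
"""Minimal sketch of a custom training entrypoint for a SageMaker container.

SageMaker runs the training image with the `train` argument and mounts inputs
and outputs under /opt/ml. The model logic here is a placeholder.
"""
import json
import pathlib
import sys

PREFIX = pathlib.Path("/opt/ml")
HYPERPARAMS = PREFIX / "input/config/hyperparameters.json"
TRAIN_CHANNEL = PREFIX / "input/data/training"   # channel name declared in the algorithm resource
MODEL_DIR = PREFIX / "model"
FAILURE_FILE = PREFIX / "output/failure"


def train() -> None:
    # Hyperparameters arrive as a JSON map of strings, per the container contract.
    params = json.loads(HYPERPARAMS.read_text()) if HYPERPARAMS.exists() else {}
    learning_rate = float(params.get("learning_rate", "0.01"))

    # Placeholder "training": a real container would load the chemistry dataset
    # from the training channel and fit the capacity-prediction model here.
    data_files = sorted(TRAIN_CHANNEL.glob("*")) if TRAIN_CHANNEL.exists() else []
    model = {"learning_rate": learning_rate, "n_input_files": len(data_files)}

    # Anything written to /opt/ml/model is packaged as the model artifact.
    MODEL_DIR.mkdir(parents=True, exist_ok=True)
    (MODEL_DIR / "model.json").write_text(json.dumps(model))


if __name__ == "__main__":
    try:
        train()
    except Exception as exc:
        # Writing to /opt/ml/output/failure surfaces the error in the training job status.
        FAILURE_FILE.parent.mkdir(parents=True, exist_ok=True)
        FAILURE_FILE.write_text(str(exc))
        sys.exit(1)
```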
B. Specify the hyperparameters that your algorithm supports.
Specifying the hyperparameters that your algorithm supports is necessary when building a custom algorithm for use in SageMaker. Hyperparameters are the parameters that are set prior to the start of the training process and can affect the performance of the resulting model.
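For illustration, each supported hyperparameter is declared in the algorithm resource's training specification with a name, type, range, and optional default. The snippet below is a hedged sketch; the hyperparameter names, ranges, and defaults are invented for this battery-capacity example.

```python
# Hypothetical SupportedHyperParameters entries for the TrainingSpecification of
# the algorithm resource; names, ranges, and defaults are illustrative only.
supported_hyperparameters = [
    {
        "Name": "learning_rate",
        "Description": "Step size used by the optimizer.",
        "Type": "Continuous",
        "Range": {
            "ContinuousParameterRangeSpecification": {
                "MinValue": "0.0001",
                "MaxValue": "0.5",
            }
        },
        "IsTunable": True,
        "IsRequired": False,
        "DefaultValue": "0.01",
    },
    {
        "Name": "epochs",
        "Description": "Number of passes over the training data.",
        "Type": "Integer",
        "Range": {
            "IntegerParameterRangeSpecification": {
                "MinValue": "1",
                "MaxValue": "500",
            }
        },
        "IsTunable": True,
        "IsRequired": True,
    },
]
```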
C. Specify the metrics that your algorithm sends to CloudWatch when training.
Specifying the metrics that your algorithm sends to CloudWatch when training is necessary when building a custom algorithm for use in SageMaker. The training specification includes metric definitions, each pairing a metric name with a regular expression that SageMaker applies to your training logs so the values can be published to CloudWatch, where you can monitor and analyze your training jobs.
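Concretely, each metric definition pairs a name with a regular expression that SageMaker applies to the training job's log output; matching values are published to CloudWatch. A small sketch, with metric names and log format assumed for illustration:

```python
# Hypothetical MetricDefinitions for the TrainingSpecification. SageMaker scans
# the training logs with each Regex and publishes the captured values to CloudWatch.
metric_definitions = [
    {"Name": "train:loss", "Regex": r"train_loss=([0-9\.]+)"},
    {"Name": "validation:mae", "Regex": r"validation_mae=([0-9\.]+)"},
]

# The training code only needs to print lines the regexes can match, for example:
#   print(f"train_loss={loss:.6f} validation_mae={mae:.6f}")
```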
D. Specify the instance types your algorithm supports for training and inference.
Specifying the instance types your algorithm supports for training and inference is necessary when building a custom algorithm for use in SageMaker. Different instance types have different computing capabilities, and the choice of instance type can affect the performance and cost of the training and inference process.
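For example, training instance types are listed in the training specification and inference instance types in the inference specification; the specific types below are assumptions, not recommendations.

```python
# Hypothetical instance-type lists for the algorithm resource.
supported_training_instance_types = ["ml.m5.xlarge", "ml.p3.2xlarge"]           # TrainingSpecification
supported_realtime_inference_instance_types = ["ml.m5.large", "ml.c5.xlarge"]   # InferenceSpecification
supported_transform_instance_types = ["ml.m5.xlarge"]                           # batch transform
```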
E. Specify whether your algorithm supports distributed inference across multiple instances.
Specifying whether your algorithm supports distributed inference across multiple instances is not part of creating a custom algorithm resource in SageMaker. The training specification lets you declare whether the algorithm supports distributed training across multiple instances, but there is no corresponding setting for distributed inference.
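Putting the pieces together, here is a hedged sketch of what registering such an algorithm resource with boto3 might look like. The image URIs, names, and values are placeholders, and the hyperparameter, metric, and instance-type fragments condense the sketches above. Note that the training specification carries a SupportsDistributedTraining flag, while the inference specification has no distributed-inference counterpart.

```python
"""Hedged sketch: registering a custom algorithm resource with boto3.

All image URIs, names, and values are placeholders, not real resources.
"""
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_algorithm(
    AlgorithmName="battery-capacity-predictor",
    AlgorithmDescription="Predicts battery capacity from chemical component interactions.",
    TrainingSpecification={
        # A. Docker image containing the training code (placeholder URI).
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/battery-train:latest",
        # B. Hyperparameters the algorithm accepts (condensed).
        "SupportedHyperParameters": [
            {
                "Name": "learning_rate",
                "Type": "Continuous",
                "Range": {
                    "ContinuousParameterRangeSpecification": {
                        "MinValue": "0.0001",
                        "MaxValue": "0.5",
                    }
                },
                "IsTunable": True,
                "IsRequired": False,
                "DefaultValue": "0.01",
            }
        ],
        # C. Metrics parsed from the training logs and sent to CloudWatch.
        "MetricDefinitions": [
            {"Name": "validation:mae", "Regex": r"validation_mae=([0-9\.]+)"}
        ],
        # D. Instance types supported for training.
        "SupportedTrainingInstanceTypes": ["ml.m5.xlarge", "ml.p3.2xlarge"],
        # E. Distributed *training* support is declared here; there is no
        #    distributed-inference field anywhere in the request.
        "SupportsDistributedTraining": False,
        "TrainingChannels": [
            {
                "Name": "training",
                "IsRequired": True,
                "SupportedContentTypes": ["text/csv"],
                "SupportedInputModes": ["File"],
            }
        ],
    },
    InferenceSpecification={
        # A. Docker image containing the inference code (placeholder URI).
        "Containers": [
            {"Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/battery-serve:latest"}
        ],
        # D. Instance types supported for real-time hosting and batch transform.
        "SupportedRealtimeInferenceInstanceTypes": ["ml.m5.large", "ml.c5.xlarge"],
        "SupportedTransformInstanceTypes": ["ml.m5.xlarge"],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
```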
Therefore, the answer to the question is E. You do not need to specify whether your algorithm supports distributed inference across multiple instances; all of the other steps are part of creating a custom algorithm resource in SageMaker.