You work for a company that manufactures cell phone peripherals such as Bluetooth headphones and Bluetooth selfie sticks.
Your company has designed its products to act as IoT devices that send usage and diagnostic MQTT messages to your AWS IoT Core service.
Your machine learning team wants to use this IoT message data to run inferences through your machine learning inference endpoint.
However, the IoT data is unstructured.
So you need to preprocess the data by performing feature engineering on the observations before they are fed into your inference endpoint. You have decided to use a SageMaker Inference Pipeline to construct this machine learning solution.
As you define the containers for your pipeline, one for feature engineering pre-processing and one for inference predictions, which SageMaker CLI command, and which parameter on that command, do you need to use to build your inference pipeline?
Answer: D.
Option A is incorrect.
The SageMaker CLI CreateModel command is the correct command, but EndpointArn is a response element of the UpdateEndpoint command, not a request parameter of CreateModel.
Option B is incorrect.
The SageMaker CLI UpdateEndpoint command is used to deploy a new endpoint configuration to an existing endpoint.
You would not use this command to create a new inference pipeline container.
Option C is incorrect.
The SageMaker CLI CreateModel command is the correct command.
But you use the PrimaryContainer request parameter when you want to create a model with a single container, not when you want to create an inference pipeline.
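For contrast, a single-container model would be created with the PrimaryContainer parameter, as in this sketch (the model name, image URI, S3 path, and role ARN are hypothetical placeholders):

```shell
# Single-container model: uses --primary-container, not --containers.
# All names, URIs, and ARNs below are hypothetical placeholders.
aws sagemaker create-model \
    --model-name my-single-container-model \
    --primary-container '{
      "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
      "ModelDataUrl": "s3://my-model-bucket/model.tar.gz"
    }' \
    --execution-role-arn arn:aws:iam::123456789012:role/MySageMakerRole
```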
Option D is correct.
The SageMaker CLI CreateModel command is the correct command, and the Containers parameter is the correct parameter.
The Containers request parameter is used to set the containers that make up your pipeline.
Option E is incorrect.
The SageMaker CLI UpdateEndpoint command is used to deploy a new endpoint configuration to an existing endpoint.
You would not use this command to create a new inference pipeline container.
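For reference, switching an existing endpoint to a new endpoint configuration looks like the following sketch (the endpoint and configuration names are hypothetical placeholders):

```shell
# UpdateEndpoint points an existing endpoint at a new endpoint
# configuration; it does not define pipeline containers.
# Names below are hypothetical placeholders.
aws sagemaker update-endpoint \
    --endpoint-name iot-inference-endpoint \
    --endpoint-config-name iot-inference-config-v2
```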
Reference:
Please see the Amazon SageMaker developer guide titled Deploy an Inference Pipeline, the Amazon SageMaker developer guide titled CreateModel, the Amazon SageMaker developer guide titled UpdateEndpoint, the AWS CLI Command Reference titled create-model, and the AWS IoT Core overview page.
The correct answer is D: the CreateModel command with the Containers request parameter.
To build a SageMaker Inference Pipeline, we need to define the containers for each step in the pipeline. The pipeline can consist of one or more containers, each representing a processing step. In this case, we need to define two containers, one for feature engineering pre-processing, and one for inference predictions.
To create a model with a pipeline, we can use the SageMaker CLI and run the CreateModel command with the Containers request parameter. The Containers request parameter specifies an array of container definitions that describe the processing steps in the pipeline.
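As a sketch, the two-container pipeline described in this scenario could be created with a command along these lines (the model name, image URIs, S3 paths, and role ARN are all hypothetical placeholders):

```shell
# Create a two-container inference pipeline: the first container
# performs feature-engineering pre-processing, the second runs
# inference predictions. All names, URIs, and ARNs are hypothetical.
aws sagemaker create-model \
    --model-name iot-inference-pipeline \
    --containers '[
      {
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/feature-engineering:latest",
        "ModelDataUrl": "s3://my-model-bucket/preprocessing/model.tar.gz"
      },
      {
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost-inference:latest",
        "ModelDataUrl": "s3://my-model-bucket/inference/model.tar.gz"
      }
    ]' \
    --execution-role-arn arn:aws:iam::123456789012:role/MySageMakerRole
```

SageMaker invokes the containers in the order they appear in the array, so the pre-processing container must be listed first.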
The container definition is a JSON object that includes properties such as Image (the Amazon ECR path where the container image is stored), ModelDataUrl (the S3 location of the model artifacts), Environment (environment variables to pass to the container), and ContainerHostname (the DNS hostname for the container).
In our case, we need to define two container definitions, one for feature engineering and one for inference prediction. The container definition for feature engineering should include the necessary libraries and code to preprocess the data. The container definition for inference prediction should include the necessary libraries and code to load the model and make predictions.
Once we have defined the containers, we can create the SageMaker Inference Pipeline by passing the container definitions to the Containers request parameter of the CreateModel command.
In summary, to build a SageMaker Inference Pipeline with containers for feature engineering and inference predictions, we need to use the SageMaker CLI and run the CreateModel command with the Containers request parameter.