You work as a machine learning specialist for a gaming software company.
You have trained and tested a machine learning model to predict gaming users' likelihood of buying in-app purchases based on player characteristics such as playing time, levels achieved, etc.
You are now ready to deploy your trained model onto the Amazon SageMaker Hosting service. What are the three steps for deploying a model using Amazon SageMaker Hosting Services? (Select THREE)
Answers: A, D, E.
Option A is correct.
From the Amazon SageMaker developer guide titled Deploy a Model on Amazon SageMaker Hosting Services: "By creating a model, you tell Amazon SageMaker where it can find the model components. This includes the S3 path where the model artifacts are stored and the Docker registry path for the image that contains the inference code."
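As a rough sketch, creating the model corresponds to a single call to the low-level boto3 SageMaker API; the model name, S3 path, ECR image URI, and IAM role below are placeholders, not values from this scenario:

import boto3

sm = boto3.client("sagemaker")

# Register the model: tell SageMaker where the trained artifacts live in S3
# and which Docker (ECR) image contains the inference code.
sm.create_model(
    ModelName="in-app-purchase-model",  # hypothetical name
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/inference:latest",  # placeholder image URI
        "ModelDataUrl": "s3://example-bucket/model/model.tar.gz",  # placeholder artifact path
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
)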
Option B is incorrect.
The Amazon SageMaker Hosting Service expects to find the inference code packaged in a Docker image, not deployed through Kubernetes.
(See the Amazon SageMaker developer guide titled Deploy a Model on Amazon SageMaker Hosting Services)
Option C is incorrect.
The Amazon SageMaker Hosting Service uses an HTTPS endpoint (not a REST endpoint) to provide inferences from the model.
(See the Amazon SageMaker developer guide titled Deploy a Model on Amazon SageMaker Hosting Services)
Option D is correct.
The Amazon SageMaker Hosting Service uses an HTTPS endpoint to provide inferences from the model.
The endpoint configuration identifies the model (or models) to deploy and the ML compute instances on which to run them.
(See the Amazon SageMaker developer guide titled Deploy a Model on Amazon SageMaker Hosting Services)
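A minimal sketch of this step with boto3, reusing the hypothetical model name from the previous snippet and an illustrative instance type:

import boto3

sm = boto3.client("sagemaker")

# The endpoint configuration names the model(s) to deploy and the ML
# compute instances that will serve them.
sm.create_endpoint_config(
    EndpointConfigName="in-app-purchase-config",  # hypothetical name
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "in-app-purchase-model",  # model created in the previous step
            "InstanceType": "ml.m5.large",         # example instance type
            "InitialInstanceCount": 1,
        }
    ],
)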
Option E is correct. The Amazon SageMaker Hosting Service uses an HTTPS endpoint to provide inferences from the model.
Client applications send requests to the SageMaker runtime HTTPS endpoint to get inferences; in this case, the probability that a gamer will buy in-app purchases.
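For illustration, a client could call the runtime endpoint like this; the endpoint name and the CSV feature layout (playing time, levels achieved, and so on) are assumptions, not part of the question:

import boto3

runtime = boto3.client("sagemaker-runtime")

# Send one player's features to the runtime HTTPS endpoint.
response = runtime.invoke_endpoint(
    EndpointName="in-app-purchase-endpoint",  # hypothetical name
    ContentType="text/csv",
    Body="12.5,34,7",  # e.g. playing time, levels achieved, sessions per week
)
print(response["Body"].read())  # predicted purchase probability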
Option F is incorrect. The Amazon SageMaker Hosting Service uses an HTTPS endpoint (not a REST endpoint) to provide inferences from the model.
Reference:
Please see the Amazon SageMaker developer guide titled Deploy a Model on Amazon SageMaker Hosting Services for an overview of deploying a SageMaker model.
In more detail, the steps for deploying a machine learning model using Amazon SageMaker Hosting Services are as follows.
Step 1: Create a model in Amazon SageMaker. This involves specifying the S3 path where the model artifacts are stored and the Amazon ECR registry path for the Docker image that contains the inference code. This can be done using the Amazon SageMaker console or the AWS SDKs.
Step 2: Create an endpoint configuration. The next step is to create an endpoint configuration for the HTTPS endpoint you want to deploy your model to. This configuration identifies the model (or models) to deploy along with settings such as the instance type and the number of instances to use for hosting.
Step 3: Create an endpoint. The final step is to create the actual endpoint using the endpoint configuration. This spins up the necessary infrastructure to host the model and makes it available for inference requests. Once the endpoint is created, you can start sending requests to it to get predictions from your trained machine learning model, as sketched below.
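A minimal sketch of this final step, reusing the hypothetical names from the earlier snippets and waiting for the endpoint to come in service:

import boto3

sm = boto3.client("sagemaker")

# Create the endpoint from the configuration, then wait until it is
# InService before sending inference requests.
sm.create_endpoint(
    EndpointName="in-app-purchase-endpoint",      # hypothetical name
    EndpointConfigName="in-app-purchase-config",  # configuration from Step 2
)
sm.get_waiter("endpoint_in_service").wait(EndpointName="in-app-purchase-endpoint")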
To summarize, the three steps for deploying a machine learning model using Amazon SageMaker Hosting Services are: create a model, create an endpoint configuration, and create an endpoint.
Note that the answer options provided in the question include some incorrect options. Option B is incorrect because Amazon SageMaker expects the inference image in a Docker registry (Amazon ECR), not a Kubernetes registry. Options C and F are incorrect because Amazon SageMaker provides inferences through an HTTPS endpoint, not a REST endpoint.