You work as a machine learning specialist for a major oil refinery company.
Your company needs to perform complex analysis on its crude oil chemical compound structures.
You have selected an algorithm for your machine learning model that is not one of the SageMaker built-in algorithms.
You have created your model using CreateModel, and you have created your HTTPS endpoint.
Your Docker container running the model is now ready to receive real-time inference requests.
When SageMaker returns the inference result from a client's request, which of the following are true? (Select THREE)
A. To receive inference requests, your inference container must have a web server running on port 8080.
B. Your inference container must accept GET requests to the /invocations endpoint.
C. Your inference container must accept PUT requests to the /inferences endpoint.
D. Amazon SageMaker strips all POST headers except those supported by InvokeEndpoint. Amazon SageMaker might add additional headers. Your inference container must be able to ignore these additional headers safely.
E. Your inference container must accept POST requests to the /inferences endpoint.
F. Your inference container must accept POST requests to the /invocations endpoint.

Answers: A, D, F.
Option A is correct.
From the Amazon SageMaker developer guide titled Use Your Own Inference Code with Hosting Services: "To receive inference requests, the container must have a web server listening on port 8080".
Option B is incorrect.
The inference container must accept POST requests, not GET requests, to the /invocations endpoint.
(See the Amazon SageMaker developer guide titled Use Your Own Inference Code with Hosting Services)
Option C is incorrect.
The inference container must accept POST requests to the /invocations endpoint; there is no /inferences endpoint, and PUT is not the expected method.
(See the Amazon SageMaker developer guide titled Use Your Own Inference Code with Hosting Services)
Option D is correct.
From the Amazon SageMaker developer guide titled Use Your Own Inference Code with Hosting Services: "Amazon SageMaker strips all POST headers except those supported by InvokeEndpoint.
Amazon SageMaker might add additional headers.
Inference containers must be able to ignore these additional headers safely.”
Option E is incorrect. The inference container must accept POST requests to the /invocations endpoint, not to a /inferences endpoint.
(See the Amazon SageMaker developer guide titled Use Your Own Inference Code with Hosting Services)
Option F is correct. The inference container must accept POST requests to the /invocations endpoint.
(See the Amazon SageMaker developer guide titled Use Your Own Inference Code with Hosting Services)
Reference:
Please see the Amazon SageMaker developer guide titled Deploy a Model on Amazon SageMaker Hosting Services.
When SageMaker returns the inference result from a client's request, there are several requirements that your inference container must meet. Let's go through each answer choice one by one to determine which are true:
A. To receive inference requests, your inference container must have a web server running on port 8080.
This statement is true. The developer guide states that, to receive inference requests, the container must have a web server listening on port 8080. The port is part of the SageMaker hosting contract, so your container's server must bind to it.
B. Your inference container must accept GET requests to the /invocations endpoint.
This statement is false. Your inference container must accept POST requests, not GET requests, to the /invocations endpoint. GET requests are used for retrieving data, while POST requests are used for submitting data to a server.
C. Your inference container must accept PUT requests to the /inferences endpoint.
This statement is false. There is no /inferences endpoint in SageMaker. The correct endpoint for sending inference requests is /invocations.
D. Amazon SageMaker strips all POST headers except those supported by InvokeEndpoint. Amazon SageMaker might add additional headers. Your inference container must be able to ignore these additional headers safely.
This statement is true. When you send a POST request to the /invocations endpoint, SageMaker will strip out any headers that are not supported by the InvokeEndpoint API. Additionally, SageMaker might add some additional headers to the request, which your container must be able to ignore safely.
E. Your inference container must accept POST requests to the /inferences endpoint.
This statement is false. As mentioned before, there is no /inferences endpoint in SageMaker. The correct endpoint for sending inference requests is /invocations.
F. Your inference container must accept POST requests to the /invocations endpoint.
This statement is true. To receive inference requests, your inference container must be able to handle POST requests to the /invocations endpoint. When SageMaker forwards a client's request as a POST to this path, your container responds with the inference result in whatever format it advertises (JSON is a common choice).
In summary, the correct answers are A, D, and F, while answers B, C, and E are false.