Model-Driven Clustering for IoT Data Analytics | AWS Certified Machine Learning - Specialty Exam | Amazon


Question

You work for an Internet of Things (IoT) component manufacturer that builds servos, engines, sensors, and similar devices.

The IoT devices transmit usage and environment information back to AWS IoT Core via the MQTT protocol.

You want to use a machine learning model to show how/where the use of your products is clustered in various regions around the world.

This information will help your data scientists build KPI dashboards to improve your component engineering quality and performance.

You have created and trained a model based on the XGBoost algorithm and deployed it to Amazon SageMaker Hosting Services.

Your model is set up to receive inference requests from a Lambda function that is triggered by the receipt of an IoT Core MQTT message via your Kinesis Data Streams instance. What transform steps need to be performed for each inference request, and which steps are handled by your code versus by the inference algorithm? (Select TWO)

Answers

Explanations


A. Inference request serialization (handled by the algorithm)

B. Inference request serialization (handled by your code)

C. Inference request deserialization (handled by your code)

D. Inference request deserialization (handled by the algorithm)

E. Inference request post-serialization (handled by your code)

Answers: B, D.

Option A is incorrect.

The inference request serialization must be completed by your Lambda code, not by the algorithm.

The algorithm needs to receive the inference request already in serialized form.

Option B is correct.

The inference request serialization must be completed by your Lambda code.

Option C is incorrect.

The inference request is deserialized by the algorithm upon receiving the request, not by your code.

Your Lambda code is responsible only for serializing the inference request.

Option D is correct.

The inference request is deserialized by the algorithm upon receiving the request.

Option E is incorrect.

There is no inference request post-serialization step in the SageMaker inference request/response process.

Reference:

Please see the Amazon SageMaker developer guide titled Common Data Formats for Inference, the AWS IoT Core overview page, and the AWS IoT developer guide titled Creating an AWS Lambda Rule.

When sending an inference request to a machine learning model, the request must be serialized in a format that the model can understand. The model then deserializes the request, performs inference on the input data, and returns the result to the caller.
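This division of labor can be sketched in code. The following is a minimal illustration (not official AWS sample code): for a SageMaker XGBoost endpoint, your code serializes the request as CSV and later deserializes the response, while the algorithm container performs the mirror-image steps. The feature values and the endpoint name in `invoke` are hypothetical.

```python
import json


def serialize_request(features):
    """Your code serializes the request: the built-in XGBoost algorithm
    accepts text/csv, one observation per line, no header."""
    return ",".join(str(f) for f in features)


def deserialize_response(body):
    """Your code deserializes the response body (a CSV string of scores)."""
    return [float(v) for v in body.strip().split(",")]


def invoke(endpoint_name, features):
    """Sketch of the round trip: the algorithm container deserializes the
    CSV payload, runs inference, and serializes the scores it returns."""
    import boto3  # requires AWS credentials and a live endpoint
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,  # hypothetical endpoint name
        ContentType="text/csv",
        Body=serialize_request(features),
    )
    return deserialize_response(resp["Body"].read().decode("utf-8"))


# Serialization happens before the request leaves your code:
payload = serialize_request([22.5, 0.81, 3])  # -> "22.5,0.81,3"
```

The key point the question tests is visible here: `serialize_request` runs in your code before `invoke_endpoint` is called, whereas request deserialization happens inside the algorithm container, out of your code's sight.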

In this scenario, the inference requests are triggered by an IoT Core MQTT message via a Kinesis Data Streams instance, and the requests are sent to an Amazon SageMaker hosting service that uses an XGBoost algorithm to perform inference.

The following are the transform steps that need to be done for each inference request:

  1. Inference request serialization: The input data needs to be converted into a format that the XGBoost algorithm can process, such as CSV. This step is handled by your Lambda function code; the algorithm must receive the request already serialized.

  2. Inference request deserialization: Upon receiving the request, the algorithm deserializes the payload back into data it can run inference on. This step is handled by the algorithm, not by your code.

Based on the given options, the correct answers are:

B. Inference request serialization (handled by your code)

D. Inference request deserialization (handled by the algorithm)

In this scenario, the Lambda function sends the inference requests, so it is responsible for serializing the request data. The XGBoost algorithm deserializes the incoming request, processes the data, and generates the inference output.

Therefore, your code handles serializing the inference request input into a format the XGBoost algorithm can process. The algorithm deserializes that request, runs inference, and serializes the output it returns; your Lambda code then deserializes that response into a format that can be sent back to the caller via MQTT.