You have trained a model on a dataset that required computationally expensive preprocessing operations.
You need to execute the same preprocessing at prediction time.
You deployed the model on AI Platform for high-throughput online prediction.
Which architecture should you use?
The correct answer to this question is D: Send incoming prediction requests to a Pub/Sub topic. Set up a Cloud Function that is triggered when messages are published to the Pub/Sub topic. Implement your preprocessing logic in the Cloud Function. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.

Reference: https://cloud.google.com/pubsub/docs/publisher
Option D is the best choice because it provides a scalable, reliable, and cost-effective way to apply preprocessing at prediction time. Here is how this architecture works:
Send incoming prediction requests to a Pub/Sub topic: The first step is to set up a Pub/Sub topic that will receive incoming prediction requests. Pub/Sub is a fully managed, asynchronous messaging service that decouples senders and receivers of messages, making it ideal for this scenario.
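As a sketch of that message flow: Pub/Sub transports raw bytes, and a Pub/Sub-triggered Cloud Function receives the payload base64-encoded in the event's `data` field, so the function must decode it before use. The field names and payload below are illustrative, not part of the question:

```python
import base64
import json

# A hypothetical prediction request, as a client might publish it.
request = {"request_id": "abc-123", "features": {"age": 42, "income": 55000}}

# Pub/Sub transports raw bytes; JSON is a common encoding choice.
published_bytes = json.dumps(request).encode("utf-8")

# A Pub/Sub-triggered Cloud Function receives the payload base64-encoded
# in event["data"], so it must decode before preprocessing.
event = {"data": base64.b64encode(published_bytes).decode("utf-8")}

decoded = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
assert decoded == request  # round-trip succeeds
```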
Set up a Cloud Function: Cloud Functions is a serverless execution environment that allows you to run your code in response to events. In this case, you will set up a Cloud Function that will be triggered when messages are published to the Pub/Sub topic.
Implement your preprocessing logic in the Cloud Function: The Cloud Function will contain your preprocessing logic, which will transform the incoming data into a format that can be used by the model. This will be the same preprocessing that was done during training.
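One common way to reproduce training-time preprocessing is to ship the statistics computed during training and reapply the same transform in the function. The feature names, statistics, and scaling transform below are assumptions for the sketch, since the question does not specify the preprocessing:

```python
# Statistics saved from the training pipeline (illustrative values).
TRAINING_STATS = {
    "age":    {"mean": 38.5, "std": 12.0},
    "income": {"mean": 52000.0, "std": 18000.0},
}

def preprocess(features: dict) -> list:
    """Apply the same standard-scaling transform used at training time,
    returning the feature vector in the column order the model expects."""
    vector = []
    for name in ("age", "income"):
        stats = TRAINING_STATS[name]
        vector.append((features[name] - stats["mean"]) / stats["std"])
    return vector

instance = preprocess({"age": 42, "income": 55000})
```

Keeping the statistics in one artifact shared by training and serving is what prevents training/serving skew.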
Submit a prediction request to AI Platform using the transformed data: Once the data has been preprocessed, the Cloud Function will submit a prediction request to AI Platform, which is a fully managed machine learning platform that allows you to build, train, and deploy machine learning models at scale.
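AI Platform online prediction accepts a JSON body of the form `{"instances": [...]}`, where each element is one input instance. The sketch below builds that body; the actual API call is shown only in comments, since it requires the Google API client and live credentials:

```python
import json

# Output of the preprocessing step (illustrative values).
preprocessed = [0.2917, 0.1667]

# AI Platform's online prediction request body: a list of instances.
body = {"instances": [preprocessed]}
payload = json.dumps(body)

# In the Cloud Function, this body would be sent to the model's predict
# endpoint, e.g. via the googleapiclient discovery client:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   name = "projects/PROJECT/models/MODEL"  # placeholder resource name
#   response = ml.projects().predict(name=name, body=body).execute()
```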
Write the predictions to an outbound Pub/Sub queue: Finally, the Cloud Function publishes the predictions to an outbound Pub/Sub topic, from which other applications can consume them.
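Putting the steps together, the Pub/Sub-triggered handler might look like the sketch below. The `predict` and `publish` calls are injected as plain callables so the flow can be shown without the Google Cloud client libraries; in a real Cloud Function they would wrap the AI Platform and Pub/Sub clients. All names and the scaling constants are illustrative:

```python
import base64
import json

def handle_request(event, predict, publish):
    """Pub/Sub-triggered entry point: decode, preprocess, predict, publish.

    `predict` and `publish` stand in for the AI Platform predict call and
    the outbound Pub/Sub publish call.
    """
    request = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    # Same (illustrative) scaling transform as at training time.
    features = request["features"]
    instance = [(features["age"] - 38.5) / 12.0,
                (features["income"] - 52000.0) / 18000.0]

    prediction = predict({"instances": [instance]})

    # Forward the result to the outbound topic for consumers.
    publish(json.dumps({"request_id": request["request_id"],
                        "prediction": prediction}))

# Exercising the flow with stand-in callables:
outbox = []
event = {"data": base64.b64encode(
    json.dumps({"request_id": "abc-123",
                "features": {"age": 42, "income": 55000}}).encode()).decode()}
handle_request(event,
               predict=lambda body: {"score": 0.9},  # stub model response
               publish=outbox.append)
```

Injecting the two external calls also makes the handler easy to unit-test before deployment.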
This architecture has several advantages. First, it allows for the same preprocessing logic to be applied to the incoming data at prediction time, ensuring that the model receives data in the correct format. Second, it is scalable, as Cloud Functions can handle a large number of incoming requests. Third, it is cost-effective, as Cloud Functions are charged based on the number of invocations and the duration of execution. Finally, it is reliable, as Cloud Functions are automatically managed by Google Cloud, which ensures high availability and fault tolerance.