As the lead ML Engineer for your company, you are responsible for building ML models to digitize scanned customer forms.
You have developed a TensorFlow model that converts the scanned images into text and stores them in Cloud Storage.
You need to use your ML model on the aggregated data collected at the end of each day with minimal manual intervention.
What should you do?
A. Use the batch prediction functionality of AI Platform.
B. Create a serving pipeline in Compute Engine for prediction.
C. Use Cloud Functions for prediction each time a new data point is ingested.
D. Deploy the model on AI Platform and create a version of it for online inference.
Correct answer: D
The best option for serving the TensorFlow model each day with minimal manual intervention is D: deploy the model on AI Platform and create a version of it for online inference.
Explanation:
A. Use the batch prediction functionality of AI Platform: Batch prediction is well suited to generating predictions over a large, fixed set of input data, such as the files aggregated at the end of each day. However, every batch prediction run is a separate job that must be submitted (or externally scheduled) for each day's data, and job spin-up adds latency before results are available, so on its own it does not minimize manual intervention.
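For reference, a batch prediction run on AI Platform is submitted as a job against the day's input files. The sketch below (project, bucket, model, and job names are all hypothetical) only assembles the request body that the `projects.jobs.create` REST method expects; it does not submit anything:

```python
from typing import Dict, List


def build_batch_prediction_job(
    job_id: str,
    model_name: str,         # e.g. "projects/my-project/models/form_ocr" (hypothetical)
    input_paths: List[str],  # Cloud Storage URIs of the day's scanned-form data
    output_path: str,        # Cloud Storage prefix for the prediction results
    region: str = "us-central1",
) -> Dict:
    """Assemble the body for an AI Platform batch prediction job."""
    return {
        "jobId": job_id,
        "predictionInput": {
            "modelName": model_name,
            "dataFormat": "JSON",
            "inputPaths": input_paths,
            "outputPath": output_path,
            "region": region,
        },
    }


# Hypothetical job for one day's aggregated forms:
job = build_batch_prediction_job(
    "forms_20240115",
    "projects/my-project/models/form_ocr",
    ["gs://my-bucket/forms/2024-01-15/*"],
    "gs://my-bucket/predictions/2024-01-15/",
)
```

Submitting this body still needs something (e.g. a scheduler or a daily script) to run it for each day's batch, which is the recurring-intervention overhead the explanation above points at.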
B. Create a serving pipeline in Compute Engine for prediction: A self-managed serving pipeline on Compute Engine would require more manual intervention and maintenance than necessary: you would have to provision instances, write custom serving code, and manage the infrastructure yourself, which is time-consuming and costly.
C. Use Cloud Functions for prediction each time a new data point is ingested: Cloud Functions are triggered by specific events, such as a file upload, a new Pub/Sub message, or a database change. While this is useful for real-time processing, it does not match this use case, where the data is aggregated and processed once a day rather than per data point. Additionally, packaging a TensorFlow model inside a Cloud Function brings memory, cold-start, and dependency-management constraints, and would require more code and infrastructure management than necessary.
D. Deploy the model on AI Platform and create a version of it for online inference: This is the best option for this use case. AI Platform provides a managed environment for deploying, scaling, and monitoring machine learning models, and supports both batch and online inference with minimal manual intervention. Here, the model is deployed for online inference to handle the data aggregated at the end of each day. The model can be deployed and versioned on AI Platform, and predictions are retrieved through a REST API endpoint.
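As a sketch of how those REST predictions are retrieved: online prediction is addressed by a versioned resource name and takes a JSON body with an `instances` list. Project, model, and version names below are hypothetical, and the network call itself (which needs `google-api-python-client` and credentials) is shown commented out:

```python
from typing import Dict, List, Optional


def model_version_name(project: str, model: str, version: Optional[str] = None) -> str:
    """Build the fully qualified resource name AI Platform expects.
    Omitting the version targets the model's default version."""
    name = f"projects/{project}/models/{model}"
    if version:
        name += f"/versions/{version}"
    return name


def build_predict_body(instances: List[Dict]) -> Dict:
    """Online prediction takes a JSON body with an 'instances' list."""
    return {"instances": instances}


# Hypothetical deployment: project "my-project", model "form_ocr", version "v1".
name = model_version_name("my-project", "form_ocr", "v1")
body = build_predict_body([{"image_bytes": {"b64": "<base64-encoded scan>"}}])

# The call itself (requires google-api-python-client and application
# default credentials; not executed here):
# from googleapiclient import discovery
# service = discovery.build("ml", "v1")
# response = service.projects().predict(name=name, body=body).execute()
```

Creating a new model version and updating the default lets the endpoint keep serving while the model is improved, which is the versioning benefit the explanation mentions.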
In summary, deploying the TensorFlow model on AI Platform and creating a version of it for online inference is the most suitable option for this use case, as it provides a scalable, managed environment with minimal manual intervention.