Your team has been tasked with creating an ML solution in Google Cloud to classify support requests for one of your platforms.
You analyzed the requirements and decided to use TensorFlow to build the classifier so that you have full control of the model's code, serving, and deployment.
You will use Kubeflow pipelines for the ML platform.
To save time, you want to build on existing resources and use managed services instead of building a completely new model.
How should you build the classifier?
When building an ML solution in Google Cloud to classify support requests for a platform, there are multiple options to consider. Based on the requirements, the team has decided to use TensorFlow to build the classifier so that they have full control of the model's code, serving, and deployment. Additionally, Kubeflow pipelines will be used for the ML platform.
Given the requirement to save time and build on existing resources, there are four possible approaches to building the classifier:
A. Use the Natural Language API to classify support requests.
B. Use AutoML Natural Language to build the support requests classifier.
C. Use an established text classification model on AI Platform to perform transfer learning.
D. Use an established text classification model on AI Platform as-is to classify support requests.
Let's explore each of these options in detail and assess their suitability for the given scenario.
A. Use the Natural Language API to classify support requests:
The Natural Language API is a fully managed Google Cloud service that offers text analysis features, including sentiment analysis, entity analysis, and content classification. Its classification feature assigns text to a predefined taxonomy of categories; it cannot be trained on custom categories such as a platform's specific support-request types. The API is a good option when a team wants a quick, code-free way to classify text without worrying about the underlying model. However, it provides no control over the model's code, serving, or deployment, and it does not integrate with custom TensorFlow models or Kubeflow pipelines, so it does not fit the team's requirements.
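For reference, a content-classification call with the Natural Language API's Python client library looks roughly like this (a minimal sketch; authentication, project setup, and the handling of the returned categories are assumed from standard client usage):

```python
from google.cloud import language_v1


def classify_request(text: str):
    """Classify a support request against the API's predefined taxonomy."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.classify_text(request={"document": document})
    # Categories come from Google's fixed taxonomy, not from custom labels.
    return [(category.name, category.confidence) for category in response.categories]
```

Note that the returned categories are generic topics (for example "/Computers & Electronics"), not the support-request classes the team actually needs.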
B. Use AutoML Natural Language to build the support requests classifier:
AutoML Natural Language is a Google Cloud managed service that lets users build custom text classification models without extensive ML expertise. You upload labeled examples, and the service automatically selects the model architecture and hyperparameters; training can be driven from the Cloud Console or through the AutoML API. This option is attractive if the team wants a custom model quickly with little or no model code. However, AutoML does not expose the model's code, and serving is handled entirely by the managed service, so it does not give the team the control over the model's code, serving, and deployment that led them to choose TensorFlow.
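Once an AutoML Natural Language model is trained, prediction is a client-library call rather than custom TensorFlow code. A hedged sketch of that call (the project ID, region, and model ID are placeholders, and the exact request shape may vary between client-library versions):

```python
from google.cloud import automl


def classify_with_automl(project_id: str, model_id: str, text: str):
    """Send a support request to a trained AutoML Natural Language model."""
    client = automl.PredictionServiceClient()
    model_name = client.model_path(project_id, "us-central1", model_id)
    payload = automl.ExamplePayload(
        text_snippet=automl.TextSnippet(content=text, mime_type="text/plain")
    )
    response = client.predict(name=model_name, payload=payload)
    # Each payload entry carries a custom label and its confidence score.
    return [(p.display_name, p.classification.score) for p in response.payload]
```

This illustrates the trade-off: the labels are custom, but the model behind them is a black box that the team cannot modify or serve themselves.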
C. Use an established text classification model on AI Platform to perform transfer learning:
AI Platform is Google Cloud's managed service for training and serving custom ML models, and it works natively with TensorFlow. With this approach, the team takes an established TensorFlow text classification model (for example, a published checkpoint or a pre-trained text embedding) and fine-tunes it on their own support-request data via transfer learning. This satisfies all of the stated requirements: the model code is TensorFlow, so the team keeps full control of its implementation, serving, and deployment; training and serving run on a managed service; and reusing an existing model saves the time it would take to design and train a classifier from scratch. The fine-tuning and deployment steps also fit naturally into a Kubeflow pipeline.
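A minimal sketch of what the transfer-learning step might look like, assuming a TF Hub text embedding as the established model; the hub module, number of classes, dataset, and export path are illustrative assumptions, not part of the question:

```python
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5  # assumed number of support-request categories

# Reuse an established text embedding as the base; trainable=True fine-tunes it.
embedding = hub.KerasLayer(
    "https://tfhub.dev/google/nnlm-en-dim50/2",
    input_shape=[], dtype=tf.string, trainable=True,
)

model = tf.keras.Sequential([
    embedding,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# train_ds would be a tf.data.Dataset of (request_text, label) pairs.
# model.fit(train_ds, epochs=5)

# Export a SavedModel that AI Platform (or any serving stack) can host;
# the bucket path is a placeholder.
model.save("gs://your-bucket/support-classifier/1")
```

Because the team writes this code themselves, they decide how the model is structured, exported, and served, which is exactly the control the requirements call for.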
D. Use an established text classification model on AI Platform as-is to classify support requests:
This option deploys an established text classification model on AI Platform without any retraining. It saves the most time up front, but a generic model used as-is has never seen the platform's specific support-request categories, so it cannot produce the labels the team actually needs. It also gives the team no meaningful control over the model's code, which defeats the purpose of choosing TensorFlow, and leaves little for Kubeflow pipelines to orchestrate beyond serving. For these reasons, this option does not meet the requirements.
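For completeness, calling a model that is already deployed on AI Platform for online prediction looks roughly like this (the project, model name, and instance format are assumptions that depend on how the model was exported):

```python
import googleapiclient.discovery


def predict_support_category(project, model, texts):
    """Call AI Platform online prediction for a deployed text classifier."""
    service = googleapiclient.discovery.build("ml", "v1")
    name = f"projects/{project}/models/{model}"
    response = (
        service.projects()
        .predict(name=name, body={"instances": texts})
        .execute()
    )
    if "error" in response:
        raise RuntimeError(response["error"])
    return response["predictions"]
```

The serving mechanics are identical whether the model was fine-tuned (Option C) or used as-is (Option D); the difference is that only the fine-tuned model knows the team's categories.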
Overall, based on the requirements, Option C is the best choice for the team. It lets them use TensorFlow, perform transfer learning on a pre-trained text classification model to save time, run training and serving on managed services, and orchestrate the whole workflow with Kubeflow pipelines while keeping full control of the model's code, serving, and deployment.
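Since the team plans to orchestrate this on Kubeflow pipelines, the fine-tuning and deployment steps can be wrapped as pipeline components. A rough sketch using the KFP v1 SDK, where the container images, arguments, and bucket paths are placeholders:

```python
import kfp
from kfp import dsl


@dsl.pipeline(
    name="support-request-classifier",
    description="Fine-tune an established text model and deploy it.",
)
def support_classifier_pipeline(
    train_data: str = "gs://your-bucket/support_requests/train.csv",
    model_dir: str = "gs://your-bucket/support-classifier",
):
    # Step 1: fine-tune the established TensorFlow model on support-request data.
    train_op = dsl.ContainerOp(
        name="finetune",
        image="gcr.io/your-project/finetune-classifier:latest",
        arguments=["--train-data", train_data, "--model-dir", model_dir],
    )
    # Step 2: deploy the exported SavedModel to AI Platform for serving.
    dsl.ContainerOp(
        name="deploy",
        image="gcr.io/your-project/deploy-to-ai-platform:latest",
        arguments=["--model-dir", model_dir],
    ).after(train_op)


if __name__ == "__main__":
    kfp.compiler.Compiler().compile(
        support_classifier_pipeline, "support_classifier_pipeline.yaml"
    )
```

This keeps the whole workflow (fine-tuning, export, deployment) under the team's control while still building on existing models and managed services.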