AWS Certified Machine Learning - Specialty: Evaluation Metrics and Optimization Direction for Hyperparameter Tuning

Question

You work as a machine learning specialist for a social media software company. Your company produces social media game apps. Your machine learning team has been asked to produce a machine learning model to predict user purchases of apps similar to apps they have already purchased. You have created a model based on the SageMaker built-in XGBoost algorithm, and you are now using hyperparameter tuning to get the best-performing model for your problem.

Which evaluation metrics and corresponding optimization directions should you choose for your automatic model tuning (a.k.a. hyperparameter tuning)? (Select TWO)

Answers

A. validation:f1, minimize
B. validation:map, minimize
C. validation:ndcg, maximize
D. validation:rmse, maximize
E. validation:mae, minimize

Correct Answers: C and E.

Explanations

Option A is incorrect. XGBoost uses the f1 metric for model validation; however, you will want to maximize this metric.

Option B is incorrect. XGBoost uses the map (mean average precision) metric for model validation; however, you will want to maximize this metric.

Option C is correct. XGBoost uses the ndcg (Normalized Discounted Cumulative Gain) metric for model validation, and you will want to maximize this metric.

Option D is incorrect. XGBoost uses the rmse (Root Mean Square Error) metric for model validation, and you will want to minimize this metric.

Option E is correct. XGBoost uses the mae (Mean Absolute Error) metric for model validation, and you will want to minimize this metric.
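
Putting the five options side by side, the metric-and-direction pairings discussed above can be summarized as follows (written as a Python dictionary purely for reference; the validation: prefix follows SageMaker's metric naming for the built-in XGBoost algorithm):

# Summary of the tuning directions discussed in the explanations above.
XGBOOST_TUNING_DIRECTIONS = {
    "validation:f1": "Maximize",    # higher F1 score is better
    "validation:map": "Maximize",   # higher mean average precision is better
    "validation:ndcg": "Maximize",  # higher NDCG is better (correct choice C)
    "validation:rmse": "Minimize",  # lower root mean square error is better
    "validation:mae": "Minimize",   # lower mean absolute error is better (correct choice E)
}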

References:

Amazon SageMaker Developer Guide, Define Metrics: https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-metrics.html

Amazon SageMaker Developer Guide, Tune an XGBoost Model: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost-tuning.html

The choice of evaluation metrics and corresponding optimization directions depends on the specific problem and the business objective. In this case, the problem is to predict user purchases of apps similar to apps they have already purchased.

The SageMaker built-in XGBoost algorithm is a gradient boosting algorithm that supports regression, classification, and ranking tasks. It has several hyperparameters that can be tuned to improve the performance of the model.
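
To make the idea of tunable hyperparameters concrete, here is a small sketch of how a few commonly tuned XGBoost hyperparameters can be expressed as search ranges for SageMaker automatic model tuning (the specific ranges are arbitrary examples, not recommendations):

from sagemaker.tuner import ContinuousParameter, IntegerParameter

# Arbitrary example search ranges for a few commonly tuned XGBoost hyperparameters.
hyperparameter_ranges = {
    "eta": ContinuousParameter(0.01, 0.3),           # learning rate (step size shrinkage)
    "max_depth": IntegerParameter(3, 10),            # maximum depth of each tree
    "min_child_weight": ContinuousParameter(1, 10),  # minimum sum of instance weight in a child
    "subsample": ContinuousParameter(0.5, 1.0),      # fraction of rows sampled per tree
}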

For this specific problem, we want to optimize the model to predict user purchases of apps based on similar apps they have already purchased. The evaluation metrics and corresponding optimization directions that can be chosen for hyperparameter tuning are:

  1. Normalized Discounted Cumulative Gain (NDCG): This is a ranking evaluation metric that measures the quality of the recommended items. In this case, the recommended items are the apps that users are most likely to purchase based on the apps they have already purchased. NDCG ranges from 0 to 1, with higher values indicating better performance. The optimization direction for NDCG is to maximize it, as we want to recommend the apps that users are most likely to purchase.

  2. Mean Absolute Error (MAE): This is a regression evaluation metric that measures the average difference between the predicted and actual values. In this case, the predicted values are the predicted probabilities of user purchases, and the actual values are the binary labels indicating whether the users actually made the purchase. The optimization direction for MAE is to minimize it, as we want to minimize the difference between the predicted and actual values (see the sketch after this list for a hand-computed example of both metrics).
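
For intuition, the following self-contained sketch (plain NumPy, independent of SageMaker) computes both metrics on made-up illustration data; it shows why NDCG is driven toward 1 (maximize) while MAE is driven toward 0 (minimize):

import numpy as np

def dcg(relevances):
    # Discounted cumulative gain for items already sorted by predicted rank.
    positions = np.arange(1, len(relevances) + 1)
    return np.sum((2.0 ** relevances - 1.0) / np.log2(positions + 1))

def ndcg(relevances, scores):
    # NDCG = DCG of the predicted ordering divided by DCG of the ideal ordering.
    predicted_order = relevances[np.argsort(scores)[::-1]]
    ideal_order = np.sort(relevances)[::-1]
    return dcg(predicted_order) / dcg(ideal_order)

# Hypothetical apps: relevance 1 = the user purchased the app, 0 = they did not.
true_relevance = np.array([1, 0, 1, 0, 0])
predicted_scores = np.array([0.9, 0.7, 0.4, 0.2, 0.1])

print("NDCG:", ndcg(true_relevance, predicted_scores))              # closer to 1 is better -> maximize
print("MAE :", np.mean(np.abs(true_relevance - predicted_scores)))  # closer to 0 is better -> minimize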

Therefore, the correct options for the evaluation metrics and corresponding optimization direction to choose for hyperparameter tuning in this case are:

C. NDCG, maximize
E. MAE, minimize
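
In SageMaker terms, this choice maps directly onto the tuning job configuration through objective_metric_name and objective_type. Below is a minimal sketch using the SageMaker Python SDK for the built-in XGBoost algorithm; the IAM role, S3 paths, container version, instance type, and search ranges are placeholders rather than values taken from the question.

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder role

# Built-in XGBoost container image for the current region (version is an example).
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/xgb-output/",  # placeholder bucket
    sagemaker_session=session,
)
# rank:ndcg is one ranking objective XGBoost supports; ranking objectives
# require group information in the training data, so the exact objective and
# input format depend on how the recommendation problem is framed.
estimator.set_hyperparameters(objective="rank:ndcg", eval_metric="ndcg", num_round=200)

# The tuner pairs the evaluation metric with its optimization direction:
# the algorithm emits validation:ndcg, and the tuner is told to maximize it.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:ndcg",
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,
    max_parallel_jobs=2,
)

tuner.fit({
    "train": TrainingInput("s3://example-bucket/train/", content_type="text/libsvm"),
    "validation": TrainingInput("s3://example-bucket/validation/", content_type="text/libsvm"),
})

Choosing MAE instead would only change three values: eval_metric="mae", objective_metric_name="validation:mae", and objective_type="Minimize".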