You work for a mining company where you are responsible for the data science behind identifying the origin of mineral samples.
The possible origins in your data are Canada, Mexico, and the US.
Your training data set is imbalanced as follows:

Canada | Mexico | US
1,210  | 120    | 68

You run a Random Forest classifier on the training data and get the following results for your test data set (your test data set is balanced):

Confusion matrix:

Observed | Predicted Canada | Predicted Mexico | Predicted US | Accuracy
Canada   | 45               | 3                | 0            | 94%
Mexico   | 5                | 38               | 5            | 79%
US       | 19               | 8                | 21           | 44%

In order to address the imbalance in your training data, you will need to use a preprocessing step before you create your SageMaker training job.
Which technique should you use to address the imbalance?
A. Run your training data through a preprocessing script that uses the SMOTE (Synthetic Minority Over-sampling Technique) approach.
B. Run the data through a Spark pipeline in AWS Glue to one-hot encode the features.
C. Run the training data through a preprocessing script that uses the feature-split technique.
D. Run the training data through a preprocessing script that uses the min-max normalization technique.

Answer: A.
Option A is correct.
The SMOTE sampling technique uses the k-nearest neighbors algorithm to create synthetic observations to balance a training data set.
(See the article SMOTE Explained for Noobs)
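As an illustration (not part of the original question), here is a minimal sketch of such a preprocessing step using the imbalanced-learn library; the feature matrix below is synthetic stand-in data, and only the class counts mirror the question:

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(42)

# Stand-in features: 1,210 + 120 + 68 samples with 4 measurements each.
X = rng.normal(size=(1210 + 120 + 68, 4))
y = np.array(["Canada"] * 1210 + ["Mexico"] * 120 + ["US"] * 68)

print(Counter(y))  # Counter({'Canada': 1210, 'Mexico': 120, 'US': 68})

# SMOTE builds synthetic samples from each minority point's k nearest
# neighbors; k_neighbors must be smaller than the smallest class (68).
smote = SMOTE(k_neighbors=5, random_state=42)
X_resampled, y_resampled = smote.fit_resample(X, y)

print(Counter(y_resampled))  # every class now matches the majority: 1,210
```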
Option B is incorrect because the Spark pipeline creates one-hot encoded columns in your data.
One-hot encoding is a process for converting categorical data points into numeric form.
This won't do anything to address the imbalance in your training data.
(See this explanation of one-hot encoding)
Option C is incorrect because the feature-split technique splits a single feature in your observations into multiple features per observation (for example, splitting a timestamp into separate date and time features).
This also will have no impact on your imbalanced training data.
(See the article Fundamental Techniques of Feature Engineering for Machine Learning)
Option D is incorrect because the min-max normalization technique rescales data points into a fixed range, typically 0 to 1.
(See the Wikipedia article Feature Scaling)
Reference:
Please see the article How to Handle Imbalanced Classification Problems in Machine Learning.
The correct answer is A. Run your training data through a preprocessing script that uses the SMOTE (Synthetic Minority Over-sampling Technique) approach.
Imbalanced data is a common issue in machine learning where one class of data dominates the others. In this case, the Canada class is the majority class while Mexico and the US are minority classes. An imbalanced dataset can lead to poor model performance, especially for the minority classes.
SMOTE is a technique used to address the issue of imbalanced data. SMOTE generates synthetic samples for the minority class by interpolating between an existing minority sample and one of its k nearest neighbors. A new sample is created by taking the difference between the two feature vectors, multiplying it by a random number between 0 and 1, and adding the result to the original sample. The synthetic samples are then added to the training dataset, which helps to balance the number of samples in each class.
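A tiny numeric sketch of that interpolation step (the values here are arbitrary, chosen only to illustrate the formula x_new = x_i + lambda * (x_neighbor - x_i)):

```python
import numpy as np

rng = np.random.default_rng(0)

x_i = np.array([2.0, 5.0])         # an existing minority-class sample
x_neighbor = np.array([4.0, 9.0])  # one of its k nearest neighbors

lam = rng.uniform(0.0, 1.0)        # random number between 0 and 1
x_new = x_i + lam * (x_neighbor - x_i)

# x_new lies on the line segment between x_i and x_neighbor, so the
# synthetic sample stays inside the minority-class region.
print(lam, x_new)
```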
In this scenario, using SMOTE on the training data will help to balance the number of samples in each class. This will improve the performance of the Random Forest classifier by reducing the bias towards the majority class.
Option B, running the data through a Spark pipeline in AWS Glue to one-hot encode the features, does not address the issue of imbalanced data. One-hot encoding is a technique used to convert categorical data into a format that can be used by machine learning algorithms.
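For comparison, a minimal one-hot encoding sketch (shown with pandas rather than Spark for brevity; the column name is illustrative) makes clear that it only reshapes features and leaves the class counts untouched:

```python
import pandas as pd

df = pd.DataFrame({"origin": ["Canada", "Mexico", "US", "Canada"]})

# Each category becomes its own 0/1 column; the number of rows per
# class is unchanged, so the imbalance remains.
encoded = pd.get_dummies(df, columns=["origin"], dtype=int)
print(encoded)
#    origin_Canada  origin_Mexico  origin_US
# 0              1              0          0
# 1              0              1          0
# 2              0              0          1
# 3              1              0          0
```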
Option C, running the training data through a preprocessing script that uses the feature-split technique, is not a technique for addressing imbalanced data. Feature splitting is a feature-engineering step that breaks a single feature into multiple features; it does not change the class distribution of the training set.
Option D, running the training data through a preprocessing script that uses the min-max normalization technique, is also incorrect. Min-max normalization scales each feature to a range between 0 and 1, but it does not address the issue of imbalanced data.
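A short sketch of min-max scaling, x' = (x - min) / (max - min), using scikit-learn as one common implementation; like one-hot encoding, it only reshapes feature values:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# One feature with min 10 and max 30.
X = np.array([[10.0], [15.0], [20.0], [30.0]])

scaler = MinMaxScaler(feature_range=(0, 1))
X_scaled = scaler.fit_transform(X)

print(X_scaled.ravel())  # [0.   0.25 0.5  1.  ]
```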
In summary, to address the issue of imbalanced data in this scenario, we should use SMOTE, which generates synthetic samples for the minority class to balance the number of samples in each class.