You are going to train an ML model based on a neural network (NN) algorithm, using AutoML.
Based on your experience from previous experiments, the model's performance is expected to degrade after a number of iterations, which can easily result in unnecessarily long execution times with no practical benefit.
In order to save time, you decide to instruct AutoML to stop the iterations when the current run significantly underperforms the best of the previous runs.
How do you set this instruction in the Python code?
A. …
B. …
C. …
D. …
Answer: D.
Option A is incorrect because the Median stopping policy terminates a run when its primary metric is worse than the median of the running averages across all runs, not when it underperforms the best run.
Option B is incorrect because setting the termination policy to 'None' won't provide the expected result: the iterations will keep running even if the model's performance degrades after a certain number of iterations.
Option C is incorrect because delay_evaluation is a parameter of the termination policies.
It delays the first evaluation of the selected termination rule until the specified number of intervals has elapsed.
Therefore, it is not a termination policy on its own.
Option D is CORRECT because the Bandit policy is the early termination setting to use when you want to terminate the iterations as soon as the primary performance metric significantly underperforms the best of the previous runs.
All the other policies use different termination conditions.
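The distinction between options C and D can also be seen directly in code. The minimal sketch below assumes the azureml.train.hyperdrive module of the Azure ML SDK; it shows delay_evaluation used only as a parameter of the BanditPolicy, not as a policy in its own right.

```python
from azureml.train.hyperdrive import BanditPolicy

# delay_evaluation is an argument of the policy, not a policy by itself:
# the Bandit rule is applied every interval, but only after the first
# 5 evaluation intervals have elapsed.
early_termination_policy = BanditPolicy(slack_factor=0.1,
                                        evaluation_interval=1,
                                        delay_evaluation=5)
```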
The correct answer is D. early_termination_policy = BanditPolicy(...).
Explanation:
AutoML is a feature provided by Azure Machine Learning service that enables developers and data scientists to automatically train machine learning models on their data. AutoML provides a variety of algorithms to choose from, including neural networks (NN). However, the training process of these algorithms can be time-consuming, and in some cases, may lead to suboptimal results.
To address this issue, AutoML provides an early termination policy feature, which allows you to define criteria to stop the training process when the performance of the model stops improving or begins to degrade. This feature helps to save time and computing resources while still ensuring that the model is of acceptable quality.
In Python code, you set the early termination policy by creating an instance of a policy class and passing it to the configuration of the training run. The Azure ML SDK provides several built-in policy classes (each instantiated in the sketch after this list), including the following:
MedianStoppingPolicy: Stops a run whose best primary metric is worse than the median of the running averages of the primary metric across all runs.
BanditPolicy: Stops a run whose primary metric falls outside an allowed slack (a factor or an absolute amount) relative to the best-performing run so far.
TruncationSelectionPolicy: Cancels the specified percentage of lowest-performing runs at each evaluation interval.
NoTerminationPolicy: Does not terminate the training process at all, and runs until the maximum number of iterations or time limit is reached.
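As a quick reference, the following sketch (assuming the azureml.train.hyperdrive module of the Azure ML SDK) shows how each of these policies is instantiated; the constructor arguments are illustrative values only.

```python
from azureml.train.hyperdrive import (
    BanditPolicy,
    MedianStoppingPolicy,
    NoTerminationPolicy,
    TruncationSelectionPolicy,
)

# Cancel runs whose best metric falls below the median of running averages.
median_policy = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)

# Cancel runs more than 10% worse than the best run observed so far.
bandit_policy = BanditPolicy(slack_factor=0.1, evaluation_interval=1)

# Cancel the lowest-performing 20% of runs at each evaluation interval.
truncation_policy = TruncationSelectionPolicy(truncation_percentage=20,
                                              evaluation_interval=1)

# Never cancel a run early.
no_termination_policy = NoTerminationPolicy()
```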
To stop the training process when the current run significantly underperforms the best of the previous runs, you should use the BanditPolicy. Its two key arguments are evaluation_interval and slack_factor: evaluation_interval specifies how often the policy is applied, while slack_factor specifies how much slack (as a ratio) is allowed in the primary metric relative to the best-performing run. Here is an example of how to set the BanditPolicy; note that in the Azure ML SDK the policy classes come from the azureml.train.hyperdrive module and the policy is attached to the run configuration:
```python
from azureml.train.hyperdrive import BanditPolicy, HyperDriveConfig

# Cancel any run whose primary metric is more than 10% (slack_factor=0.1)
# worse than the best run so far; the policy is checked every 2 intervals.
early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)

hyperdrive_config = HyperDriveConfig(
    # other parameters (run config, sampling, primary metric, run limits)...
    policy=early_termination_policy,
)
```
In this example, the policy is evaluated every 2 intervals, and any run whose primary metric is more than 10% worse than the best-performing run so far is terminated early.
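To make the slack concrete, here is a small illustration with hypothetical numbers, assuming a maximized metric such as accuracy and the best-over-(1 + slack_factor) cutoff that the Bandit policy applies.

```python
# Hypothetical values: best accuracy seen so far and a 10% slack factor.
best_metric = 0.90
slack_factor = 0.1

# A run is cancelled once its accuracy drops below this cutoff.
cutoff = best_metric / (1 + slack_factor)
print(round(cutoff, 3))  # 0.818
```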
Therefore, the correct answer is D. early_termination_policy = BanditPolicy(...).