AutoML Hyperparameter Tuning: Configuring Early Termination for Bayesian Method | Microsoft DP-100 Exam Guide

Configuring Early Termination for Bayesian Method

Question

You are running Azure AutoML experiments to find the best-performing regression model for your dataset.

You want to use AML's hyperparameter tuning functionality.

You select the Bayesian method for sampling the hyperparameters, and you also want to limit the duration of the run.

How do you configure early termination?

Answers

A. TruncationSelectionPolicy

B. MedianStoppingPolicy

C. None

D. BanditPolicy

Explanations

Answer: C.

Option A is incorrect because TruncationSelectionPolicy cannot be used together with Bayesian sampling.

Option B is incorrect because MedianStoppingPolicy cannot be used together with Bayesian sampling.

Option C is CORRECT because when you select Bayesian sampling, an early termination policy cannot be used, i.e. it has to be set to “None”.

Option D is incorrect because BanditPolicy is not applicable to Bayesian sampling.

Reference:

When conducting hyperparameter tuning with Azure Machine Learning, an early termination policy can automatically stop poorly performing runs, which saves time and compute resources. However, not every sampling method supports early termination, so the policy choice depends on the sampling method in use.

In this scenario, the user wants to limit the duration of the run while using the Bayesian method for sampling the hyperparameters. Bayesian sampling does not support early termination policies, because the algorithm relies on completed runs to choose the next hyperparameter values. The early termination policy therefore has to be set to None (option C), and the duration of the experiment is limited instead through run limits such as max_total_runs and max_duration_minutes.
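As an illustration, here is a minimal sketch using the HyperDrive classes from the Azure ML Python SDK v1; the training script, search space, metric name, and limit values are assumptions for the example:

    from azureml.core import ScriptRunConfig
    from azureml.train.hyperdrive import (
        BayesianParameterSampling,
        HyperDriveConfig,
        PrimaryMetricGoal,
        choice,
        uniform,
    )

    # Hypothetical training setup; source directory and script are placeholders.
    script_config = ScriptRunConfig(source_directory=".", script="train.py")

    # Bayesian sampling over an assumed search space (Bayesian sampling
    # supports only choice, uniform, and quniform expressions).
    param_sampling = BayesianParameterSampling({
        "--learning_rate": uniform(0.001, 0.1),
        "--n_estimators": choice(50, 100, 200),
    })

    hyperdrive_config = HyperDriveConfig(
        run_config=script_config,
        hyperparameter_sampling=param_sampling,
        policy=None,                      # required with Bayesian sampling
        primary_metric_name="r2_score",   # assumed regression metric
        primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
        max_total_runs=40,                # bound the experiment by run count...
        max_duration_minutes=120,         # ...and by wall-clock time
    )

Because Bayesian sampling cannot prune runs early, the max_total_runs and max_duration_minutes limits are what keep the experiment bounded.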

The BanditPolicy, by contrast, terminates any run whose best primary metric falls outside a specified slack factor (or slack amount) of the best-performing run so far, checked at each evaluation interval. By cutting off runs that are clearly lagging, it concentrates compute on the runs most likely to produce a good result, but it can only be combined with random or grid sampling, not with Bayesian sampling.
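For contrast, a BanditPolicy would be instantiated roughly as follows for a sweep that uses random or grid sampling (the parameter values are illustrative):

    from azureml.train.hyperdrive import BanditPolicy

    # Terminate a run when its best metric is more than 10% worse than the
    # best run so far; evaluate every interval after a 5-interval grace period.
    bandit_policy = BanditPolicy(
        slack_factor=0.1,
        evaluation_interval=1,
        delay_evaluation=5,
    )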

Option A, TruncationSelectionPolicy, is another early termination policy: at each evaluation interval it cancels a given percentage of the lowest-performing runs. It is an aggressive way to focus resources on promising runs, but, like the other policies, it cannot be combined with Bayesian sampling.
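A hedged sketch of that policy, again usable only with random or grid sampling:

    from azureml.train.hyperdrive import TruncationSelectionPolicy

    # Cancel the lowest-performing 20% of runs at each evaluation interval.
    truncation_policy = TruncationSelectionPolicy(
        truncation_percentage=20,
        evaluation_interval=1,
        delay_evaluation=5,
    )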

Option B, MedianStoppingPolicy, stops a run when its best primary metric is worse than the median of the running averages of the primary metric reported by all runs so far. It is a conservative policy that typically yields compute savings without sacrificing result quality, but it too is unavailable with Bayesian sampling.
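Instantiated, it would look something like this (values illustrative):

    from azureml.train.hyperdrive import MedianStoppingPolicy

    # Stop a run whose best metric falls below the median of the running
    # averages reported by all runs so far.
    median_policy = MedianStoppingPolicy(
        evaluation_interval=1,
        delay_evaluation=5,
    )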

Option C, None, means that no early termination policy is applied. With Bayesian sampling this is the only valid setting, since Azure ML does not support combining Bayesian sampling with any early termination policy. The experiment is prevented from running indefinitely by run limits such as max_total_runs, max_concurrent_runs, and max_duration_minutes rather than by a termination policy.