Deploying Machine Learning Model on Azure Kubernetes Service (AKS) | Exam DP-100 Solution

Deploying Machine Learning Model on AKS

Question

After successfully training your ML model and after selecting the best run, you are about to deploy it as a web service to the production environment.

Because you anticipate a massive amount of requests to be handled by the service, you choose AKS as a compute target.

You want to use the following script to deploy your model:

# deploy model
from azureml.core.model import Model

service = Model.deploy(workspace=ws,
                       name='my-inference-service',
                       models=[classification_model],
                       inference_config=my_inference_config,
                       deployment_config=my_deploy_config,
                       deployment_target=my_production_cluster)
service.wait_for_deployment(show_output=True)
After running the deployment script, you receive an error.

After a short investigation, you find that an important setting is missing from the inference_config definition:

# inference config
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(runtime="python",
                                   source_directory='my_files',
                                   <insert code here>,
                                   conda_file="environment.yml")
You decide to insert cluster_name = 'aks-cluster'. Does this solve the problem?

Answers

Explanations


A. Yes

B. No

Answer: B.

Option A is incorrect because the InferenceConfig object is used to combine two important things: the entry script and the environment definition.

The entry_script parameter defines the path to the file that contains the scoring code to execute. It is the missing setting and must be provided, for example: entry_script="my_scoring.py".
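With the missing setting filled in, the corrected inference configuration would look like the following sketch. The file names my_scoring.py, my_files, and environment.yml are taken from the question; the scoring script is assumed to define the init() and run() functions that Azure ML web services expect.

```python
# inference config -- corrected with the previously missing entry_script setting
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(runtime="python",
                                   source_directory='my_files',
                                   entry_script="my_scoring.py",
                                   conda_file="environment.yml")
```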

Option B is CORRECT because the cluster_name parameter, while important for the deployment, belongs to the ComputeTarget configuration (in your case, my_production_cluster), i.e. it is set elsewhere in your code.

Reference:

Adding the parameter cluster_name = 'aks-cluster' to the InferenceConfig definition in the deployment script does not solve the problem, as this parameter is not valid for the InferenceConfig class.

The InferenceConfig class is used to define the configuration for running inference on the model, such as the runtime environment and entry script. It does not include any information about the compute target or deployment configuration.

To specify the compute target for deploying the model as a web service, you need to include the deployment_target parameter in the Model.deploy() method, as shown in the original deployment script.

Therefore, the correct way to deploy the model to an AKS cluster would be to update the deployment_target parameter in the Model.deploy() method to specify the AKS compute target, like this:

# deploy model
from azureml.core.model import Model

service = Model.deploy(workspace=ws,
                       name='my-inference-service',
                       models=[classification_model],
                       inference_config=my_inference_config,
                       deployment_config=my_deploy_config,
                       deployment_target=my_production_cluster)
service.wait_for_deployment(show_output=True)

Where my_production_cluster should be a reference to the AKS compute target object that you have previously created in your Azure Machine Learning workspace.
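For context, the cluster name belongs on the compute-target side of the code, not in the InferenceConfig. A minimal sketch of provisioning the AKS cluster referenced as my_production_cluster might look like this; the name 'aks-cluster' is taken from the question, and ComputeTarget.create assumes the cluster does not already exist in the workspace.

```python
# compute target -- this is where the cluster name is actually used
from azureml.core.compute import AksCompute, ComputeTarget

cluster_name = 'aks-cluster'

# Provision a new AKS cluster with default settings
prov_config = AksCompute.provisioning_configuration()
my_production_cluster = ComputeTarget.create(workspace=ws,
                                             name=cluster_name,
                                             provisioning_configuration=prov_config)
my_production_cluster.wait_for_completion(show_output=True)
```

If the cluster already exists, you would instead retrieve it with ComputeTarget(workspace=ws, name=cluster_name) and pass that object as deployment_target.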

In summary, adding cluster_name = 'aks-cluster' to the InferenceConfig definition in the deployment script does not solve the problem, and the correct way to specify the AKS compute target is by using the deployment_target parameter in the Model.deploy() method.