Deploying ML Model to Azure Kubernetes Service with Azure CLI (DP-100 Exam Preparation)


Question

You have just finished training your ML model.

The model is now ready to be deployed to its production environment as a real-time inferencing service.

Because of the very high anticipated load, for performance and scalability reasons you want to deploy the model as fraud-service-01 to Azure Kubernetes Service.

After attaching your aks-cluster-01 cluster to your workspace, you want to proceed with the deployment using the Azure CLI (with the ML extension).

You have the following CLI command as template:

az ml model deploy <insert code here> --model mymodel:1 <insert code here> --inference-config inferenceconfig.json --deployment-config deploymentconfig.json
Which two of the following parameters need to be added to the command?

Answers

Explanations


A. B. C. D. E.

Answers: B and D.

Option A is incorrect: the --name parameter is used to set the name of the service to be deployed, which in this case should be fraud-service-01.

Option B is CORRECT because the name of the compute target, i.e. the AKS cluster (in this sample: aks-cluster-01), must be specified.

Omitting the --compute-target parameter will result in deploying the service to Azure Container Instances (ACI) instead.

Option C is incorrect because this parameter doesn't exist.

The parameter used to set the inference compute is --compute-target.

Option D is CORRECT because the name of the service to be deployed is a required parameter.

The correct command looks like this: az ml model deploy --compute-target aks-cluster-01 --model mymodel:1 --name fraud-service-01 --inference-config inferenceconfig.json --deployment-config deploymentconfig.json

Option E is incorrect because this parameter doesn't exist.

The parameter used to set the inference compute is --compute-target.


The correct parameters to be added to the CLI command are:

B. --compute-target aks-cluster-01
D. --name fraud-service-01

Explanation:

  • The parameter --compute-target specifies the name of the compute target to which the deployment will be made. In this case, the target is the AKS cluster named aks-cluster-01. Therefore, the parameter should be added as follows: --compute-target aks-cluster-01

  • The parameter --name specifies the name of the deployed model or service. In this case, the service name should be fraud-service-01, which is the name given in the question. Therefore, the parameter should be added as follows: --name fraud-service-01

The parameters --inference-target and --deployment-target are not valid options for this command. The correct option for specifying the inference configuration file is --inference-config, followed by the file name. Similarly, the correct option for specifying the deployment configuration file is --deployment-config, followed by the file name.
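For context, the two JSON files referenced by the command might look roughly like the following. These are minimal sketches based on the v1 ML CLI schema; the exact field names and values (entry script, runtime, conda file, resource sizes) are assumptions for illustration and should be checked against the documentation for your extension version.

inferenceconfig.json:

```
{
  "entryScript": "score.py",
  "runtime": "python",
  "condaFile": "env.yml"
}
```

deploymentconfig.json (computeType distinguishes an AKS deployment from an ACI one):

```
{
  "computeType": "aks",
  "containerResourceRequirements": {
    "cpu": 0.5,
    "memoryInGB": 1.0
  }
}
```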

Therefore, the full command should look like this:

az ml model deploy --compute-target aks-cluster-01 --model mymodel:1 --inference-config inferenceconfig.json --deployment-config deploymentconfig.json --name fraud-service-01
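Since omitting --compute-target silently redirects the deployment to ACI, a quick pre-flight check of the command string can catch the mistake before running it. The sketch below is plain Python with no Azure dependencies; the required-flag list is taken from this question, not from an official validation API.

```python
import shlex

# Flags this question treats as required for an AKS real-time deployment.
REQUIRED_FLAGS = ["--compute-target", "--model", "--name",
                  "--inference-config", "--deployment-config"]

def missing_flags(command: str) -> list:
    """Return the required flags absent from an `az ml model deploy` command."""
    tokens = shlex.split(command)
    return [flag for flag in REQUIRED_FLAGS if flag not in tokens]

cmd = ("az ml model deploy --compute-target aks-cluster-01 "
       "--model mymodel:1 --inference-config inferenceconfig.json "
       "--deployment-config deploymentconfig.json --name fraud-service-01")

print(missing_flags(cmd))  # → []

# Dropping --compute-target would send the service to ACI instead of AKS:
print(missing_flags(cmd.replace("--compute-target aks-cluster-01", "")))
# → ['--compute-target']
```

Running the check on the full command from this answer reports nothing missing, while removing --compute-target flags exactly the omission that would change the deployment target.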