Deploying LUIS Model in Docker Containers: Troubleshooting Guide

Question

You are tasked with deploying the latest trained or published LUIS model in your own environment for compliance reasons.

You use Docker containers to serve query predictions for user utterances.

To use the container, you export the package from the LUIS portal: you first select the latest trained or published version.

Then you select the Export for containers (GZIP) option.

Lastly, you move the file to the output mount directory of the container host and run the container to query the container's prediction endpoint.

However, you realize that you are not getting the query results. Review the steps in the scenario above and select the step that was not performed correctly:

Answers

A. Selecting the Export for containers (GZIP) option.

B. Moving the file to the output mount directory of the container host.

C. Selecting the latest trained or published version.

D. Running the container to query the container's prediction endpoint.

Correct Answer: B.

Explanations

Option A is incorrect because selecting the Export for containers (GZIP) option is the correct choice; it is required for exporting the versioned application model for use in a container.

Option B is correct because the exported package file needs to be moved to the input mount directory of the host computer, not the output mount directory.

The output mount directory is where the container writes query logs, which can later be imported into the LUIS portal.
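
To make the distinction concrete, the following Python sketch (using the Docker SDK for Python, docker-py) shows one way the input and output directories might be mounted when starting the LUIS container. The image name, billing endpoint, API key, and host paths are placeholders and assumptions rather than values from the scenario; check the current LUIS container documentation for the exact image and settings.

    # Hypothetical sketch: start a LUIS container with separate input and output mounts.
    # Requires the Docker SDK for Python (pip install docker); all values below are placeholders.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        "mcr.microsoft.com/azure-cognitive-services/luis",  # assumed image name; verify in the docs
        # Billing settings passed as command arguments, one documented way to configure the container.
        command="Eula=accept Billing=<YOUR_ENDPOINT_URI> ApiKey=<YOUR_API_KEY>",
        ports={"5000/tcp": 5000},
        volumes={
            # The exported .gz package must sit in the host directory mounted to /input.
            "/host/luis/input": {"bind": "/input", "mode": "ro"},
            # The container writes query logs to /output; this is not where the package goes.
            "/host/luis/output": {"bind": "/output", "mode": "rw"},
        },
        detach=True,
    )
    print(container.name)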

Option C is incorrect because the requirement is to choose the latest trained or published version from the LUIS portal to run in a container on the local host.

Option D is incorrect because running the container and querying its prediction endpoint is the correct way to obtain predictions once the model has been loaded.

The scenario describes a process for deploying a Language Understanding Intelligent Service (LUIS) model in a Docker container. The goal is to serve query predictions for user utterances while meeting compliance requirements by running the container in a controlled environment. The steps are as follows:

  1. Select the latest trained or published version of the LUIS model. This step is important because it ensures that the latest changes to the model are included in the exported package.

  2. Export the LUIS model package for containers (GZIP) from the LUIS portal. This option prepares the model for deployment in a Docker container.

  3. Move the exported package to a mount directory of the container host so that it is available to the container at runtime. In the scenario, the file was moved to the output mount directory.

  4. Run the container to query the prediction endpoint. This step involves executing the container and submitting queries to the prediction endpoint to get predictions for user utterances.
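
For illustration, here is a minimal Python sketch of querying a locally running LUIS container once the model has been loaded. The port, app ID, slot name, and URL path follow the LUIS v3 prediction API format, but they are assumptions here; confirm the exact route against the container's own API documentation.

    # Hypothetical sketch: query the prediction endpoint of a local LUIS container.
    # The app ID, slot name, and URL path are placeholders/assumptions.
    import requests

    APP_ID = "<YOUR_APP_ID>"
    BASE_URL = f"http://localhost:5000/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"

    response = requests.get(BASE_URL, params={"query": "Turn off the living room lights"})
    response.raise_for_status()

    prediction = response.json()["prediction"]
    print(prediction["topIntent"])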

Based on the above steps, the cause of the missing query results is step 3. Steps 1, 2, and 4 were performed correctly: the latest trained or published version was selected, the Export for containers (GZIP) option was used, and the container was run and its prediction endpoint queried. Step 3, however, placed the exported package in the wrong directory.

The exported package must be moved to the input mount directory of the container host so that the container can load the model at runtime. The output mount directory is only used by the container to write query logs, which can later be imported back into the LUIS portal. Because the package was placed in the output mount directory instead of the input mount directory, the container cannot find the model, and no query results are returned.

Therefore, the correct answer is B: moving the file to the output mount directory of the container host is the step that was not performed correctly; the file should have been moved to the input mount directory instead.
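
As an optional sanity check (not part of the exam scenario), one could inspect the container's startup logs to see whether it found and loaded a model package from the /input mount. A hypothetical sketch using the Docker SDK for Python, with the container name as a placeholder:

    # Hypothetical sketch: print recent container logs to confirm the model package was loaded.
    import docker

    client = docker.from_env()
    container = client.containers.get("<LUIS_CONTAINER_NAME>")  # placeholder name
    print(container.logs(tail=50).decode("utf-8"))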