You create a new LUIS application in the LUIS portal by providing the values for name, language, description and prediction resource.
You populate the domain with intents, entities and utterances.
Next, you train your application, create a prediction resource and publish the application to an endpoint URL.
While querying the endpoint URL with various utterances, you find that the top intent and next intent scores are very close to each other.
You also find a few utterances that are not predicted as their labeled intent.
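As a rough illustration of what "close" scores look like, here is a minimal sketch that queries the published endpoint and prints the two highest-scoring intents. It assumes the LUIS v3 prediction REST endpoint; the resource name, app ID, key, and sample utterance are placeholders, not values from the scenario.

```python
import requests

# Placeholder values; replace with your own prediction resource, app ID, and key.
PREDICTION_ENDPOINT = "https://<your-prediction-resource>.cognitiveservices.azure.com"
APP_ID = "<your-luis-app-id>"
PREDICTION_KEY = "<your-prediction-key>"

def top_two_intents(utterance):
    """Query the published LUIS app and return the two highest-scoring intents."""
    url = f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
    params = {
        "query": utterance,
        "show-all-intents": "true",        # include every intent with its score
        "subscription-key": PREDICTION_KEY,
    }
    prediction = requests.get(url, params=params).json()["prediction"]
    ranked = sorted(prediction["intents"].items(),
                    key=lambda kv: kv[1]["score"], reverse=True)
    return ranked[:2]

best, runner_up = top_two_intents("book a flight to Seattle tomorrow")
print(best, runner_up)  # near-identical scores here are the symptom described above
```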
Given the scenario above, which three options would you use to improve the prediction accuracy?
A. Review the dashboard analysis and find intents with incorrect or unclear predictions.
B. Enable active learning, capture endpoint queries and relabel entities.
C. Set the endpoint query parameter log=false, capture endpoint queries and relabel entities.
D. Add example utterances as patterns, then train and publish the application again.
E. Add more example utterances to improve the prediction score.

Correct Answers: A, B and D.
Option A is correct.
You can review the dashboard analysis to find unclear predictions, marked in orange, and incorrect predictions, marked in red.
Option B is correct.
By enabling active learning you can log user queries, review the utterances with low prediction scores, and relabel or adjust their intents and entities to improve the model's accuracy.
Option C is incorrect because the endpoint query parameter log must be kept as true for user queries to be logged and made available for review.
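To make that concrete, the snippet below reuses the placeholder names from the earlier sketch and simply adds the log parameter to the prediction request; the parameter name is the documented one, but the surrounding values are assumptions.

```python
# Same prediction call as in the earlier sketch, with logging enabled so the
# utterance is stored and shows up for active-learning review in the portal.
params = {
    "query": "book me a flight to Seattle",
    "show-all-intents": "true",
    "log": "true",  # must stay "true"; log=false keeps the query out of the review list
    "subscription-key": PREDICTION_KEY,  # placeholder defined in the earlier sketch
}
```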
Option D is correct.
In the LUIS portal, you can add patterns to improve prediction accuracy for utterances that share a common word order, then retrain and republish the application.
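For illustration, a pattern is a template utterance in which {EntityName} marks where an entity appears and square brackets mark optional text. The intent and entity names below are hypothetical.

```python
# Hypothetical pattern templates for a "BookFlight" intent; these would be added
# on the portal's Patterns page (or via the authoring API), after which the app
# is retrained and republished.
BOOK_FLIGHT_PATTERNS = [
    "book [me] a flight to {Destination}",          # [me] is optional text
    "is there a flight to {Destination} [please]",  # {Destination} is an entity placeholder
]
```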
Option E is incorrect because adding more example utterances would not fix the utterances with incorrect or unclear predictions.
In the given scenario, where the top intent and next intent scores are very close and some utterances are not predicted as their labeled intent, there are several options to improve the prediction accuracy:
A. Review dashboard analysis and find intents with incorrect/unclear predictions: This is a useful first step because the dashboard highlights exactly where predictions are unclear (orange) or incorrect (red). It tells you which intents need attention, so you know where to relabel utterances or add patterns, which is why it is part of the correct answer.
B. Enable active learning, capture endpoint queries and relabel entities: This is a good option for improving prediction accuracy. With active learning enabled, LUIS captures the queries sent to the endpoint, particularly those with low-confidence predictions, and surfaces them for review. When you confirm or correct the intent, relabel the entities for these utterances, and retrain, the model improves on the kinds of queries users actually send.
C. Set endpoint query parameter log=false, capture endpoint queries and relabel entities: This option is not correct because log=false means the endpoint queries will not be captured, so there is nothing to review or relabel and accuracy cannot be improved this way.
D. Add example utterances as patterns, train and publish application again: Converting representative utterances into patterns is effective when the confused intents follow recognizable word orders. A pattern teaches the model the structure of an utterance rather than just individual examples, which helps separate intents whose scores are nearly tied. After adding patterns, the application must be retrained and republished for the improvement to take effect.
E. Add more example utterances to improve prediction score: Adding more example utterances generally helps a model generalize, but it does not directly address the problem described here, where utterances are already mispredicted and the top two intent scores are nearly tied. Simply adding volume will not separate the confused intents or fix the mislabeled predictions, which is why this option is not among the correct answers.
In conclusion, the best options to improve prediction accuracy in this scenario are to review the dashboard analysis to find intents with incorrect or unclear predictions, enable active learning so endpoint queries are captured and can be relabeled, and add example utterances as patterns before training and publishing the application again.