Microsoft Azure AI: Ensuring Transparency in Your AI System

Meeting the Microsoft Transparency Principle for Responsible AI

Question

You are building an AI system.

Which task should you include to ensure that the service meets the Microsoft transparency principle for responsible AI?

Answers

A. Ensure that all visuals have an associated text that can be read by a screen reader.
B. Enable autoscaling to ensure that a service scales based on demand.
C. Provide documentation to help developers debug code.
D. Ensure that a training dataset is representative of the population.

Explanations

https://docs.microsoft.com/en-us/learn/modules/responsible-ai-principles/4-guiding-principles

The correct answer is D. Ensure that a training dataset is representative of the population.

Microsoft has established a set of guiding principles for responsible AI, one of which is transparency. Transparency means that developers should make their AI systems explainable, so that users can understand how the system arrived at its recommendations or decisions. One way to support this is to make sure that the model's training data is representative of the population it will be used on.
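To make "explainable" more concrete, the hedged sketch below reports which input features most influence a model's predictions using scikit-learn's permutation importance. The dataset, model, and feature names are illustrative placeholders only; they are not part of the exam question or of any specific Azure service.

```python
# Sketch: surfacing feature importance as one form of model transparency.
# Dataset and model are placeholders for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much the score drops when each feature
# is shuffled, giving users a rough view of what the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

Reporting this kind of summary alongside a model's output is one simple way to help users understand what drives its decisions.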

The training data used to build an AI model should be diverse, balanced, and representative of the population it will be used on. If the training data is biased or incomplete, the AI model may produce biased or incomplete results. For example, if a model is trained on data drawn from only one demographic group, it may not work well for other groups.

Therefore, it is essential to ensure that the training dataset is representative of the population, to avoid any potential biases in the AI model. This can be done by selecting a diverse dataset, ensuring a balanced representation of different demographics, and periodically reviewing and updating the dataset to ensure its continued accuracy and relevance.
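As a rough illustration of such a review, the hedged sketch below compares the demographic mix of a training set against assumed population shares and flags under-represented groups. The column name, group labels, and target percentages are made-up examples, not figures from the question or from Microsoft.

```python
# Sketch: checking whether a training set roughly matches the target population.
# All values below are illustrative assumptions.
import pandas as pd

training_data = pd.DataFrame({
    "age_group": ["18-34"] * 60 + ["35-54"] * 30 + ["55+"] * 10
})

# Assumed population shares; in practice these would come from census or
# customer data relevant to where the AI system will actually be deployed.
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

training_share = training_data["age_group"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = training_share.get(group, 0.0)
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: training {observed:.0%} vs population {expected:.0%} [{flag}]")
```

Running a check like this periodically, and rebalancing or collecting more data when groups fall short, is one practical way to keep the dataset representative over time.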

Option A, ensuring that all visuals have an associated text that can be read by a screen reader, is a good practice for making AI systems accessible to people with disabilities. Option B, enabling autoscaling so that a service scales based on demand, is a good practice for high availability and efficient resource usage. Option C, providing documentation to help developers debug code, is a good practice for improving the quality and reliability of an AI system. None of these tasks, however, directly addresses the transparency principle.