Explainability in AI

Biased AI, Explainable AI, and Transparent AI Series – Part 1

It all starts with bias
From the Amazon AI tool that would have automated the company’s internal resume review process, to flaws found in many facial recognition algorithms, to multiple judicial sentencing tools, the potential for bias in AI tools’ results is well documented.

Machine learning (ML) is currently the primary method of developing AI tools. It relies on vast quantities of data to “train” statistical models to reflect patterns found in images, documents, songs, and other digital content. In the examples above, the ML models were trained on historic operational and decision-making data, and the result, unsurprisingly, was tools that exhibited the same bias those operations had.
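To make that mechanism concrete, here is a minimal sketch, using scikit-learn, of how a model trained on historically skewed hiring decisions learns to reproduce the skew. The features, data, and bias pattern are entirely hypothetical:

```python
# Minimal sketch (hypothetical data): a classifier trained on historically
# biased hiring decisions learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two hypothetical applicant features: a merit score and a demographic
# group indicator (0 or 1).
merit = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historic labels: past reviewers favored group 0 regardless of merit.
hired = (merit + 1.5 * (group == 0) + rng.normal(size=n) > 1.0).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical merit, differing only in group: the model
# assigns them very different hiring probabilities -- the historic bias
# is now encoded in the tool.
same_merit = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_merit)[:, 1])
```

Nothing in this code is malicious; the bias enters entirely through the training labels, which is exactly what happened in the examples above.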

The risk of encoding bias in ML tools would be significant enough if these tools were simply replacing current workers. The actual risk is much greater, however, because ML tools are being readied for deployment at massive scale: not only replacing current staff but, in many cases, providing increased capacity to firms that do not plan to hire human staff at all. This is the business model of many new AI companies, and it will drive the use of ML models across markets and industries. If the AI bias problem is not solved, these new companies will either fail, or the future of work will be one of hard-coded, algorithmic bias.

Two proposed solutions: Explainability and Transparency
The current industry conversation about solutions to the AI bias problem focuses on two ideas: explainability and transparency.

An Explainable AI system is one that not only provides the “intelligence” it is designed for, but also explains, in terms and constructs that its user community can understand, how it arrived at its output.
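What this looks like in practice varies widely. As one illustrative sketch (the feature names and weights below are hypothetical, and a linear model is only the simplest of many explanation techniques), a system can report the per-feature contributions behind each prediction in terms a reviewer can read:

```python
# Minimal sketch of "an explanation alongside the answer": a linear model's
# per-feature contributions, reported in terms a reviewer can understand.
# Feature names and weights are hypothetical; production systems use richer
# techniques (e.g., SHAP values or counterfactual explanations).
import numpy as np

FEATURES = ["years_experience", "relevant_skills", "education_level"]
weights = np.array([0.8, 1.2, 0.4])  # learned weights (hypothetical)
intercept = -2.0

def predict_with_explanation(x):
    contributions = weights * x
    score = contributions.sum() + intercept
    prob = 1.0 / (1.0 + np.exp(-score))
    # Rank features by how strongly they pushed the score up or down.
    explanation = sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1]))
    return prob, explanation

prob, explanation = predict_with_explanation(np.array([3.0, 1.0, 2.0]))
print(f"recommend-interview probability: {prob:.2f}")
for name, contribution in explanation:
    print(f"  {name}: {contribution:+.2f}")
```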

A Transparent AI system is one whose developer (or organizational owner) provides full documentation of the initial design of the ML models, the training data sets used, the final configuration of the ML models, and the operational results of the system.
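As an illustration, that documentation can be captured as a structured record covering each of the four elements in the definition. The field names and values below are hypothetical, loosely modeled on the “model cards” idea:

```python
# Minimal sketch of the documentation a transparent AI vendor would publish.
# Field names and values are hypothetical, loosely modeled on "model cards".
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    system_name: str
    model_design: str           # initial design of the ML models
    training_datasets: list     # training data sets used, with provenance
    final_configuration: dict   # final configuration of the ML models
    operational_results: dict   # results, ideally broken out by group

report = TransparencyReport(
    system_name="resume-screener-v2",
    model_design="gradient-boosted trees over parsed resume features",
    training_datasets=["internal hiring decisions, 2015-2019"],
    final_configuration={"n_estimators": 400, "max_depth": 6},
    operational_results={"interview_rate_group_a": 0.31,
                         "interview_rate_group_b": 0.12},
)
print(report)
```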

The figure below illustrates the relationships among AI bias, explainability, and transparency. It is intended to provide clear definitions of Bias, Transparency, and Explainability, and to illustrate three concepts:
1. Transparency, Explainability, and Bias are three separate characteristics that AI systems may exhibit
2. Transparency, Explainability, and Bias are the result of both system design and AI tool vendors’ business practices
3. An AI system that is both transparent and explainable may still exhibit bias

[Figure: the relationships among AI bias, explainability, and transparency]
In future newsletters, we will explore the following topics in depth:

  • What technical approaches are available for achieving explainability and transparency, and for eliminating or significantly reducing bias
  • How the FAR affects each of these elements and may present constraints on solutions
  • What steps we recommend for commercial AI tool vendors to take
  • What steps federal agencies and their leadership might take

 
