Artificial intelligence and data science must instill trust: good decision-making depends on it, and trust, in turn, drives better outcomes, reputation, and ultimately adoption.
🤖 A core message of my book is that if we are to replace or extend software systems with A.I. systems, we have to guarantee improvements in trustworthiness. And producing trustworthy insights and models is a constant struggle in data science.
⚖️ Interpretable Machine Learning (a.k.a. Explainable AI) provides tools to address trust and ethical concerns, organized in three levels: 𝗙𝗮𝗶𝗿𝗻𝗲𝘀𝘀, 𝗔𝗰𝗰𝗼𝘂𝗻𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆, 𝗮𝗻𝗱 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 — collectively known as F.A.T. I like to picture these as a pyramid, because each level depends on the one beneath it. And there are interpretability tools both to diagnose problems at each level and to fix them.
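To make the idea concrete, here is a minimal sketch (my own illustration, not an excerpt from the book) of one transparency-level diagnostic: permutation feature importance, which reveals which inputs a model actually relies on. The dataset and model are stand-ins chosen for the example.

```python
# Minimal sketch of a transparency diagnostic: permutation feature importance.
# Dataset and model are illustrative stand-ins, not from the book.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# large drops flag the features the model depends on most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean),
              key=lambda t: -t[1])[:5]
for name, imp in top5:
    print(f"{name}: {imp:.3f}")
```

A black-box model that passes this kind of check for spurious or sensitive features is one small step up the pyramid; fairness and accountability diagnostics build on that transparency.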
📖 It's an extensive area of active research with hundreds of methods. My book is an introduction with several forays into advanced topics.