Why is explainability vital in human-AI teaming?

Fig. 1: Comparison of the strengths of humans and statistical ML machines, illustrating the complementary ways they generalise in human-AI teaming scenarios. Humans excel at compositionality, common sense, abstraction from a few examples, and robustness. Statistical ML excels at large-scale data and inference efficiency, inference correctness, handling data complexity, and the universality of approximation. Overgeneralisation biases remain challenging for both humans and machines. Collaborative and explainable mechanisms are key to achieving alignment in human-AI teaming. See Table 3 for a complete overview of the properties of machine methods, including instance-based and analytical machines.

Explainability is vital in human-AI teaming because it allows humans to assess AI responses and access the rationales behind them. This understanding fosters trust and helps ensure that the AI's decisions align with human values and expectations. As noted, 'effective teaming requires that humans must be able to assess AI responses and access rationales that underpin these responses'[1].

Moreover, the ability to explain AI decisions supports accountability, transparency, and adherence to legal frameworks. This is increasingly important in contexts where AI participates in decision-making processes that affect individuals and society[1].