As humans, we like the feeling of being in control, and this extends even to immaterial matters such as religious beliefs. Generative AI has created problems with its hidden structures and the lack of transparency in how it applies algorithms (and combinations of algorithms) to underlying databases of knowledge and information. xAI, which stands for explainable artificial intelligence, can address some of these concerns about missing transparency and unexplained responses from AI systems. Many users want to know in advance what the consequences are of using specific words or notions in an instruction to an AI. By interpreting each single word, xAI can report on the precision of that interpretation (cheap versus cheapest, for example) or flag whether the phrasing is sensitive to gender-neutral language under its guidelines. Additionally, ex post, xAI could indicate alternative notions for a prompt and sketch briefly how these would affect the results, as in the sketch below.
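To make that ex-post idea concrete, here is a minimal sketch: it swaps one word in a prompt for alternatives and measures how much the response shifts. This is an illustration, not an established xAI method; the generate() function is a hypothetical placeholder for whatever text-generation call is actually used, and the score is a crude string similarity.

```python
# Sketch of an ex-post prompt sensitivity check: swap one word in a prompt
# for alternatives and score how much the model's response changes.

import difflib

def generate(prompt: str) -> str:
    """Hypothetical placeholder; replace with a real model/API call."""
    return f"[model response to: {prompt}]"

def word_sensitivity(prompt: str, word: str, alternatives: list[str]) -> dict[str, float]:
    """Return, per alternative wording, how much the response changes (0.0 to 1.0)."""
    baseline = generate(prompt)
    scores = {}
    for alt in alternatives:
        variant = prompt.replace(word, alt)
        response = generate(variant)
        # 0.0 = identical answer, 1.0 = completely different answer
        similarity = difflib.SequenceMatcher(None, baseline, response).ratio()
        scores[alt] = round(1.0 - similarity, 3)
    return scores

if __name__ == "__main__":
    prompt = "Find me a cheap flight from Vienna to Lisbon."
    print(word_sensitivity(prompt, "cheap", ["cheapest", "affordable", "budget"]))
```

A user could run such a check before committing to a wording, seeing at a glance which word choices the system is most sensitive to.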
Yes, there is a trade-off between the brevity of an answer and the room left for explanations. As in psychology, there is some value in a "thinking aloud" procedure for respondents, because it helps us better understand the (implicit) reasoning behind a reply. xAI takes us a step further in this direction, asking AI to think aloud, or more explicitly, in a way compatible with human logic and broader reasoning.
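As a sketch of what this trade-off could look like in practice, the wrapper below asks the same hypothetical model for a terse answer or a think-aloud answer, changing only the instruction around the question. The generate() placeholder is the same assumption as above.

```python
# Sketch: contrast a brief answer with a "thinking aloud" answer by changing
# only the instruction wrapped around the user's question.

def generate(prompt: str) -> str:
    """Hypothetical placeholder; replace with a real model/API call."""
    return f"[model response to: {prompt}]"

def ask(question: str, think_aloud: bool = False) -> str:
    """Wrap the question so the model either answers tersely or shows its reasoning."""
    if think_aloud:
        instruction = "Think aloud: explain your reasoning step by step, then answer."
    else:
        instruction = "Answer in one sentence."
    return generate(f"{instruction}\n\n{question}")

if __name__ == "__main__":
    q = "Is the cheapest flight always the best choice?"
    print(ask(q))                    # brief, but opaque
    print(ask(q, think_aloud=True))  # longer, but the reasoning is inspectable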
Put AI on the psychotherapist’s couch, and xAI will once again work to the advantage of many more humans. Humans simply do not like black-box systems that lack the necessary, as well as sufficient, transparency. (Image: Patrick Jouin, chaise solide C2, MAD digital humanism.)