As humans, we like the feeling of being in control of things. This applies even to immaterial matters such as religious beliefs. Generative AI has created problems with its hidden structures and the lack of transparency in how its algorithms (and combinations of algorithms) are applied to underlying databases of knowledge and information. The use of xAI, which stands for explainable artificial intelligence, can address some of these concerns about the lack of transparency and explanation in the responses of AI systems. Many users want to know in advance what consequences the use of a specific word or notion in an instruction to an AI will have. The interpretation of each single word by xAI can reveal the precision of interpretation (cheap versus cheapest, for example) or highlight whether its guidelines are sensitive to gender-neutral language. Additionally, ex post, xAI could indicate alternative notions for a prompt and briefly show how they would affect the results.
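One simple way such per-word interpretation can work is perturbation-based attribution: remove each word from the prompt in turn and measure how much the system's score changes. The sketch below illustrates this idea with a hypothetical toy scorer standing in for a real model; the function names and the scoring rule are assumptions for illustration only, not part of any actual xAI product.

```python
# Minimal sketch of per-word attribution via leave-one-out perturbation.
# `score_fn` is a hypothetical stand-in for a real model's confidence score;
# any callable mapping a prompt string to a float would work here.

def word_importance(prompt, score_fn):
    """Return (word, importance) pairs: how much the score drops
    when each word is removed from the prompt."""
    words = prompt.split()
    base = score_fn(prompt)
    importances = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        importances.append((word, base - score_fn(reduced)))
    return importances

# Toy scorer (an assumption, not a real model): rewards the precise
# word "cheapest" over vaguer alternatives.
def toy_score(text):
    return 1.0 if "cheapest" in text.split() else 0.5

print(word_importance("find the cheapest flight", toy_score))
# → [('find', 0.0), ('the', 0.0), ('cheapest', 0.5), ('flight', 0.0)]
```

In this toy setting, only removing "cheapest" changes the score, so the explanation correctly flags it as the word that drives the result, exactly the kind of ex-ante sensitivity information a user might want.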
Yes, there is a trade-off between brevity of answers and room for explanations. As in psychology, there is some value in a “thinking aloud” procedure for respondents, which helps observers better understand the (implicit) reasoning behind a reply. xAI takes us a step further in this direction, asking AI to think aloud, or at least more explicitly, in a logic and broader reasoning compatible with humans.
Put AI on the psychotherapist’s bench, and xAI will work to the advantage of many more humans again. Humans simply do not like black-box systems that lack the necessary as well as sufficient transparency. (Image on the right: Patrick Jouin, chaise solide C2, MAD digital humanism).
(See image.) ChatGPT provides a more careful definition than the “crowd” or networked intelligence of Wikipedia: AI only “refers to the simulation of HI processes by machines”, where HI stands for human intelligence. Examples of such HI processes include solving problems and understanding language. In doing so, AI creates systems and performs tasks that usually, or until now, required HI. There seems to be a technological openness embedded in this definition of AI by AI that is not bound to legal restrictions on its use. The learning-systems approach might or might not respect the restrictions that HI imposes on these systems. Or do such systems also learn how to circumvent the restrictions that HI systems set in order to limit AI systems? For the time being, we test the boundaries of such systems in multiple fields of application, from autonomous driving and video surveillance to marketing tools and public services. Potentials as well as risks will be defined in more detail during this process of technological development. Society has to accompany this process with high priority, since fundamental human rights are at stake. The potential for assisting humans is equally large. The balance will be crucial.