There are serious hallucinations by AI and there are funny hallucinations by AI. Do we want our various AI models, from time to time, to crack a serious or funny joke? Well, that is a bit the spice of life. However, not knowing when the machine is joking and when it is serious is more likely to seriously disturb most of us. This reminds us of our school days, where teachers were not amused by pupils who did not take their efforts to transmit information seriously. Now we know that a good atmosphere is conducive to better learning progress. AI as a teaching and learning assistant could well work best in a “fearless” classroom. Repeating a lesson several times, and at your own learning rhythm, will help independent of the seriousness of your teacher. Self-directed learning with a little help from AI might do the trick for many, letting them advance how and when they feel ready for it.

Hallucination rates are a standard test for AI models. They range from about 1% to 25% of queries. This is not in itself a problem. The tricky cases are the models in the 1-2% range: because their answers are almost always correct, you no longer expect them to give wrong information. These are the 1-2 out of a hundred cases where we are confronted with serious hallucinations, seriously.
(Image: extract from “cum Polaroids” by Eva & Adele, Hamburger Bahnhof, Berlin, 2024-5-22)