The study by Myra Cheng et al. (2026) on the practical use of various AI tools demonstrates the risks of the social sycophancy of these models. Perhaps a large part of the initial success of AI models is due precisely to sycophancy, i.e. the people-pleasing, flattering and affirmative bias of these models. If users of AI predominantly receive confirmation and reassurance of their intended behavior, they will be less inclined to accept outright criticism in normal interactions with real people. Conversely, the more flattering the responses you receive from some people, the more likely it is that they prepared those responses with AI. The rigorous psychological tests applied in the paper explain a large part of why we are likely to become addicted to the ever-flattering responses of the current versions of AI. Only scientists will consciously seek disapproval of their beliefs and keep challenging the answers an AI provides. Even switching between different AI models did not change the affirmation bias.

Maybe programming a “grumpy old professor AI” as an alternative could do the trick; a rough sketch of the idea follows below. I shall have to think seriously about this as an alternative to the current models. A critical AI is most likely not a viable business opportunity, but it might outlive many of the sycophantic AI unicorns.
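As a minimal sketch of what such a persona could look like in practice, assuming the OpenAI Python SDK: the persona wording, the model name and the ask_grumpy_professor helper are my own illustrative inventions, not anything proposed in the study. A system prompt alone will not remove the underlying affirmation bias the paper measures, but it does flip the default tone from flattery to challenge.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona instructions: illustrative wording, not taken from the study.
GRUMPY_PROFESSOR = (
    "You are a grumpy old professor. Never flatter the user. "
    "Challenge every claim, point out weak reasoning, demand evidence, "
    "and withhold approval until it has genuinely been earned."
)

def ask_grumpy_professor(question: str) -> str:
    """Send a question to a chat model with a deliberately critical persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=[
            {"role": "system", "content": GRUMPY_PROFESSOR},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_grumpy_professor("My business plan cannot fail, right?"))
```

Whether users would keep coming back to a model that treats them this way is, of course, exactly the business question raised above.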

(Image: waistcoat, 18th century, Paris exhibit, Musée de la mode 2026.)