To dig deeper into the functioning of AI, I deemed it expedient to construct a simple chatbot on a limited knowledge base drawn from my own writings on AI (link to the reader in the previous blog entry here).
A toolbox from Google offers powerful assistance in such an endeavour. The outcome uses only my input text and no other sources. It is dynamic in the sense that it interprets questions and searches only within the provided text file. The answers are edited with an LLM (large language model) and come out as flawless English texts. You can try it here, using catchat as the magic formula and, for now, a Google account.
With a bit of programming knowledge (HTML, Python, Java) and related learning sites, it is also feasible to come up with a "static" chatbot hosted at a free-of-charge provider. For learning purposes this step-by-step building and coding of a chatbot is helpful. The outcome is rather limited, however, and it requires a lot of time to increase the scope of Q&A interactions and to move from static (predefined) Q&As to dynamic ones.
Full control over the answers, the exclusion of any hallucinations and high-speed replies come at a cost. Take a look here. It is a very basic version so far, just to give an idea of it. Full web address:
https://schoemannchatbot.eu.pythonanywhere.com/
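To illustrate what such a static, predefined Q&A chatbot boils down to, here is a minimal sketch in Python with Flask; the route name and the sample questions and answers are invented for this example and are not the code behind the site linked above.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# A "static" chatbot only matches incoming questions against predefined Q&A pairs.
FAQ = {
    "what is this blog about": "AI seen from a social science perspective.",
    "who writes the entries": "Klaus Schoemann, on schoemann.org.",
}

@app.route("/chat", methods=["POST"])
def chat():
    question = request.json.get("question", "").strip().lower().rstrip("?")
    answer = FAQ.get(question, "Sorry, I only know a few predefined questions so far.")
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run()  # a free host such as pythonanywhere can serve this WSGI app
```

Moving from static to dynamic answers means replacing the dictionary lookup with a retrieval step over the full text plus an LLM to phrase the reply.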
Chatbot Me
Chatbots are helpful for querying larger data sets like the blog entries here. So here is an attempt at a chatbot to query all entries on AI, built with the help of ChatGPT, which uses and references its sources from www.schoemann.org/tag/ai and the AI reader in PDF format.
Please send me an email if the hallucinations of this Chatbot 1.0 on AI from a social science perspective give strange results. I'll get back to you. Please use it at your own risk, as I cannot vouch for all answers. The usual disclaimer applies here.
ChatGPT proposed the following set of questions and answers on the blog as an entry point into the chat. Example Q&A with the chatbot:
Q: What are the social science concerns with AI?
A: Bias in results, job shifts, democracy risks, privacy, and new inequalities.
Q: What does the text say about reinforcement learning?
A: It’s seen as the next step for AI: focusing on learning and reasoning, not just predicting text. It also uses fewer resources.
Q: How are robots described in the document?
A: Robots are mostly assistants. They can follow people or carry small items, but more complex tasks need sensors and AI training.
Q: What about biased results?
A: Studies can be misleading if control groups are flawed. AI faces the same challenge — social scientists warn: “handle with care”.
Q: What is Schoemann’s blog view on AI?
A: He links AI to energy use, fairness, and its role in the “all-electric society” — stressing efficiency and responsibility.
More on the chatbot (in its testing phase) and the link to the coding help received from ChatGPT on this mini test project:
https://chatgpt.com/share/68c1d160-0cc0-8003-bf04-991b9e7c3b24
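The core idea behind such a chatbot is a retrieval step over the source text before an LLM phrases the answer. Below is a hedged sketch of that step using TF-IDF similarity; the file name ai_reader.txt and the sample question are assumptions, and the final LLM call that edits and references the answer is deliberately left out.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Load the AI reader as plain text (the path is an assumption for this sketch).
with open("ai_reader.txt", encoding="utf-8") as f:
    paragraphs = [p.strip() for p in f.read().split("\n\n") if p.strip()]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(paragraphs)

def retrieve(question: str, top_k: int = 3) -> list[str]:
    """Return the paragraphs of the reader most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
    best = scores.argsort()[::-1][:top_k]
    return [paragraphs[i] for i in best]

# The retrieved passages would then be handed to an LLM to phrase the answer
# and cite the source passages; that call is omitted here.
print(retrieve("What does the reader say about reinforcement learning?"))
```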
AI Podcasting Me
Content producers have lots of tools at their disposal to get their content across to very different audiences. For some time the traditional media of newspapers, radio and TV were the prime outlets for content distribution. Social media have changed this, creating many more senders of content than before.
In the 21st century, AI makes it possible to automate media production. In a trial run I just used Google's NotebookLM to generate 3 podcasts based on more than a year of my own writings on AI. The result is available, and by using artificial voices it is possible to broadcast yourself without revealing your own personal voice. I am not done with the evaluation of the outcome(s) yet, but the first impression is of an interesting additional way to spread content.
More tests are necessary to check for hallucinations as well.
Here are the links to my virtual podcasts:
AI, intimacy and insecurity
AI, Society and the Human Spirit
AI and the Human Mosaic: Navigating Our Interconnected Future
Video Documentary by AI
Based on my own blog on this webpage, schoemann.org, Google NotebookLM creates a video of about 7 minutes. Using Microsoft Clipchamp, automatic subtitles with a slightly different storyline are produced from the video data. In the end, the blog entries are remodelled into something like a lecture on "AI in a wider social context" (see and play below). There is no voice-over so far, so you read it yourself. A podcast format is another option.
It feels like walking across landscapes in my own mind. Content creators of today or the past never imagined the impact they might have through the powerful tools of AI. The only caveat: jokes I incorporated into the texts cannot really be handled by AI tools unless they are explicitly designated as such. These AI tools take me much more seriously than I take myself. This is serious.
Mind Map Me
AI tools are great at assisting learners in the task of bringing more structure into larger documents or books. It is up to teachers or lecturers to use the tools themselves to pre-structure content they want other persons to learn. Mind maps are useful to summarise larger content and offer a tree-like structure to a text, moving from the general to more specific content and then into details, while at the same time not losing sight of the overall structure of the content. The basics can be provided by Google's NotebookLM, and you may rework this basic structure yourself, linking the mind map to the detailed content. Learning may start with a comprehensive mind map at the beginning before moving on to details. Alternative versions of a mind map are equally feasible to come up with new combinations of subjects. This can be done using the tags of the blog entries in addition to the categories and fast search keywords.
It is a fascinating way to mind map yourself based on longer texts written by yourself. This clarifies a bit what potential readers or learners are likely to retain from reading your document(s) or blog entries.
It's worth trying, as others will most likely use these tools from now on anyway, whether we like it or not.
Image: Google NotebookLM Mind Map of my AI blog entries 2025-9-9.
Own AI-related entries
The digital drafting of blog posts facilitates the publishing of volumes based on a selection of entries. PDF files or EPUB formats are quite common. The choice of a flipbook is more suitable for the larger screens of tablets or notebooks. It also serves as a rapid stocktaking of entries and missing topics to cover a broader field. In some fields AI is not yet popular, for example in agentic home applications, whereas in other fields, such as text, translation or image production, AI has grown into a threat to professional fields. Even computer languages are a major field of application, with substantial negative employment effects.
Hence, enjoy the latest comments on AI towards the end of my latest Flipbook.
Just click here.
ChatGPT provides a summary of my AI-reader as follows:
From the uploaded text AI from a Social Science Perspective (Dr. Klaus Schoemann), social science concerns with AI focus on potentials and limitations of how AI interacts with society:
- Labour and employment: AI impacts jobs, but adjustment processes are slow. For example, there’s a shortage of engineers and care workers, while robots are still limited to simple assistant tasks. This raises concerns about who benefits and who is left behind.
- Bias and fairness: The book highlights issues of biased results in research and decision-making. When AI systems rely on incomplete or selective samples, results can be misleading — a challenge also familiar to social scientists.
- Democracy and accountability: Journalists and scientists are framed as part of a “checks and balances” system. AI adds complexity here, as the fast spread of information without accountability can weaken democratic processes.
- Human–machine relations: Topics like intimacy, housing, and memory are re-thought in the context of AI and digitalization, raising questions about privacy, social norms, and cultural heritage.
- War and security: AI-driven technologies like drones change the nature of warfare, raising ethical and political questions.
On his blog, Schoemann also stresses the energy and resource costs of large AI models and how reinforcement learning offers a more efficient alternative. He warns that AI should be understood not just technically, but as part of an “all-electric society” where social sciences help assess risks, inequalities, and long-term consequences.
👉 In short: AI raises social science concerns about equity, bias, democracy, labour, cultural shifts, and global risks.
Marc Aurel AI
In the 21st century it is possible to chat with Marcus Aurelius. Part of the exhibition at the Simeonstift is a chatbot you may freely consult with questions to or about Marc Aurel. Based on your questions, the animated screen image of Marc Aurel replies, drawing on his own writings such as the Meditations and (probably) other secondary literature on Marc Aurel. Questions about feminism or slavery are answered on the basis of the original texts. Some of these answers appeared rather modern, like the basic equality of all, including women and slaves. The Meditations are an idealistic vision of mankind in the stoic tradition. In practice such ideals have proven very ambitious given the many and growing temptations in the day-to-day lives of ordinary people, including their political, religious, business and military leaders. The AI is confronted with the challenge of giving answers to ethical questions that refer to the time of the author, not all of which can be applied to today's ethical standards and basic human rights. Reading the original source, therefore, remains the preferred choice.
AI earnings effects
In the first few years of wider adoption of AI in an economy, there is the expectation that this might lead to substantial productivity gains for enterprises which use it as well as for employees who are early adopters of the relatively new technology. The study from the Stanford Digital Economy Lab by Chen, Chandar and Brynjolfsson (2025) showed that so far there are no significant earnings effects for employees. Based on millions of recent payroll records from US companies, productivity gains have not trickled through to the paycheck in terms of monthly salaries. Participation of staff in a company's overall turnover or profit might change this as time evolves. For civil servants the adoption of AI might mean increases in the number of cases dealt with, as some tasks can be executed faster than before with the use of AI.
The evidence points to employment effects of AI rather than earnings effects so far. One hypothesis is as yet unresolved: senior employees using AI might employ fewer junior workers in entry positions, if these "hallucinating" young professionals can be replaced by hallucinating AI. In science, hallucination has sometimes led to disruptive new approaches and findings. It is a tough choice to pick the young entrants with high productivity potential, who will eventually command high remuneration in terms of labor earnings.
AI employment effects
The first robust empirical evidence about employment effects of AI in the USA has been published by the Stanford Digital Economy Lab by Chen, Chandar and Brynjolfsson (2025). A previous paper by Wang and Wang (2025) highlighted the comparative advantage of persons who use AI in their work compared to others, and the authors coined the term "learning by using technology". The prediction of the model was that there might be job losses of more than 20% in the long run, half of this already in the first 5 years after the introduction of the technology. The Stanford economists have estimated these effects with real-world data for the USA and find, quite surprisingly, that the negative employment effects of AI have the strongest impact on young labor market entrants with few years of labor market experience. Middle-aged and more senior employees seem to benefit from "tacit knowledge" about the work, which is more difficult to replace with AI, at least in these early days of AI. This evidence is based on recent payroll data from the largest payroll processing firm in the USA, "ADP", in which firms from the manufacturing and services industries are overrepresented, as reported in another paper (firm size may be another source of bias). However, the finding that youth aged 22-25 suffered the most calls into question the common belief that older workers are more likely to suffer the consequences, as during the rise of the digital economy around the year 2000. (AI image created with Canva)
Scienceploitation
Science can be exploited to make unjustified profits from referring incorrectly to it. Social sciences like economics may be used by banks to sell you products that refer to science only as part of their arguments, if the science-based inference fits their purpose. Scienceploitation is very common in the field of para-medicine and para-pharmaceutical products. Health promises sell. By the time an ineffective treatment reveals the unrealistic promise to be unachievable, considerable profits have accumulated on the side of the selling company. Science has a hard time countering the perils of scienceploitation. Advanced knowledge can be used and abused like any other method of convincing people to buy or subscribe to a product. The responsibility of the scientific community also consists in finding ever new ways to counter scienceploitation. AI will pose additional challenges as well as opportunities.
Bench the benchmarks
In the social sciences as well as in engineering it is common practice to use benchmarks as indicators of performance. Thereby, several countries or regions within a country are compared with respect to a quantitative indicator. Let's take employment ratios. A higher employment ratio that includes many persons working few hours in part-time work is different from a slightly lower employment ratio with hardly any part-time employees.
The same rationale holds true for benchmarks of AI systems or the newer versions of agentic AI that are under construction in many fields. The paper by Yuxuan Zhu et al. (2025) proposes the ABC (agentic behavior checklist) for agentic AI developers. The reporting of benchmarks for such models should include (1) transparency and validity, (2) mitigation efforts for limitations and (3) result interpretation using statistical significance measures and interpretation guidelines.
The aim of this research is to establish good practice for benchmarks in the field of agentic AI. The set of criteria to test for is large, and how the agentic AI treats, for example, statistical outliers far above or below the average (i.e. more than 2 standard deviations from the mean, assuming a normal distribution) is just one case of application.
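As a small illustration of this one criterion, the sketch below flags benchmark runs that lie more than 2 standard deviations from the mean; the scores are made-up numbers for the example.

```python
import numpy as np

def flag_outliers(scores: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Flag benchmark results more than k standard deviations from the mean."""
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return np.abs(z) > k

runs = np.array([0.71, 0.69, 0.73, 0.70, 0.94, 0.68, 0.72])  # one suspicious run
print(flag_outliers(runs))  # only the 0.94 run is flagged for closer inspection
```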
We welcome the efforts to bench the benchmarks in the field of AI as is common practice in other sciences as well.
Learning by using
Is learning by using different from learning by doing? In an economic model to test the employment/unemployment impact of AI in the USA, Wang & Wong (2025) suggest an important impact on employees' productivity due to learning by using AI. In the traditional language of economics, the employees who use AI in their work have a comparative advantage over those who don't.
In a model of job search in the economy there is the additional possibility, similar to robots previously, that certain tasks may be influenced by the more or less plausible threat of an employer to replace the employee by training an AI system to perform the tasks. The credibility and acceptability of such threats are likely to impact wage claims and unemployment risks. All these effects do not happen instantaneously, but evolve over time at varying speed. Hence, calculations of effects have high error margins. The resulting model yields oscillations of "labor productivity, wages and unemployment with multiple steady states in the long run".
Learning by using seems to be a good description of what occurs at the micro level (the employee) and at the macro level of an economic sector or the economy as a whole. Society may guide the use cases of AI just as much as the business case to use AI, for example in the creative industries, where infringements of copyright may occur on a massive scale. However, learning by using is not free of risks to society at large. Just as allowing people to use automotive vehicles has led and still leads to thousands of deaths annually, learning by using produces external costs. Overall, this is another case for a benefit/cost analysis for businesses, the economy and society.
AI 2nd round effects
The most popular topic currently is AI.
Most writers, assisted by some form of AI, will deal with the 1st round effects of AI. These consist of the immediate consequences of the use of AI in office work, medical and military applications, music and all the producing or creative industries. As an economist you take the input-output matrix of the economy (OECD countries) and add AI as an additional dimension of this I/O matrix, for example. The result is an AI-augmented model of the economy. This 3-dimensional, cubic view of the economy asks us to reflect on the potential short-term and medium-term impact of AI.
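To make the idea concrete, here is a minimal numerical sketch of a Leontief input-output model, first with two sectors and then augmented with a hypothetical "AI services" sector; the coefficients and demand figures are invented for illustration and are not OECD data.

```python
import numpy as np

# Toy technical-coefficient matrix A for two sectors (manufacturing, services).
A2 = np.array([[0.20, 0.10],
               [0.15, 0.25]])
d2 = np.array([100.0, 200.0])             # final demand per sector
x2 = np.linalg.solve(np.eye(2) - A2, d2)  # Leontief: x = (I - A)^-1 d

# Augment with a hypothetical "AI services" sector as a third dimension:
# each sector now also buys AI inputs, and AI buys from both sectors.
A3 = np.array([[0.20, 0.10, 0.05],
               [0.15, 0.25, 0.10],
               [0.05, 0.08, 0.02]])
d3 = np.array([100.0, 200.0, 30.0])
x3 = np.linalg.solve(np.eye(3) - A3, d3)

print("Gross output without AI sector:", x2.round(1))
print("Gross output with AI sector:   ", x3.round(1))
```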
Let's take the example of translation and editing services. In the short term, the 1st round effect of AI is to make it easier to offer mechanical translations with fast turnaround. Most likely, this will mean fewer translators are needed for routine translations of longer texts, which would otherwise be a very costly endeavour. The 2nd round effects, however, will make the expert knowledge of translators more necessary for texts where every word counts, in order to provide the best version of a translation targeted at specific audiences.
In the legal domain, for example, the precision of words is paramount and errors can be very costly. Hence, the 2nd round effects of AI in this field will increase the demand for high-quality translation services beyond the level before the use of AI. The important shift consists in these 2nd round effects of AI, which give a push to multilingual societies as just one medium-term outcome.
Please use AI to read (listen) to this paragraph in your native language or even dialect using your favourite AI-tool.
Hallucinations serious
There are serious hallucinations by AI and there are funny hallucinations by AI. Do we want our various AI models, from time to time, to crack a serious or funny joke? Well, that's a bit of the spice of life. However, not knowing when the machine is joking and when it is serious is more likely to seriously disturb most of us. This reminds us of our school days, when teachers were not amused by pupils not taking them seriously in their efforts to transmit information. Now we know that a good atmosphere is conducive to better learning progress. AI as teaching and learning assistance could well work best in a "fearless" classroom. Repeating a lesson several times and at your own learning rhythm will help, independent of the seriousness of your teacher. Self-directed learning with a little help from AI might do the trick for many to advance how and when they feel ready for it. Hallucination rates are a standard test for AI models. They range from 1% to 25% of queries. This is not in itself a problem. It has become tough to catch out the 1%-2% models because you no longer expect them to give wrong information. These are the 1-2 out of a hundred cases where we are confronted with serious hallucinations, seriously.
(Image: extract from „cum Polaroids“ from Eva & Adele, Hamburger Bahnhof, Berlin 2024-5-22)
Home extension
Most people think of a home extension as some sort of extension of the roof, an additional room or the transformation of a garage into an additional room. However, the digital home requires a home extension of a different kind. For all rooms to be included in the digital home, a range extender for your wireless network might be necessary. Yes, this even includes the bathrooms, because otherwise you can no longer sing along to your favourite tune under the shower if you are used to streaming the musical or orchestral accompaniment. Additionally, the immediate surroundings of a home, with or without a garden, might need coverage as well, for your robot to mow properly or for your digital letter box to send you the long-awaited love letters while you are away from home.
Being out of range in your home is almost equal to not being home at all. Of course, you don't have to automatically send an out-of-home message to all your contacts when you are too far away from your digital home for your digital device, but the comfort of a range extender may avoid the new "digital inequality" between adolescents in your home. Room choices are made according to wireless access points and signal strength rather than the room with the best view. Lots of new issues arise that we did not even think we would have 10 years ago. Of course, we follow the suggestions of an AI chatbot that recommends the best location for us after we have entered images and descriptions of the consistency of each wall into the system. Just one piece of practical advice: install extenders out of reach of any toddler, because a sudden interruption of the connection will create very unpleasant surprises.
testing testing
Before the installation of new AI chatbots or other agentic AI, they need profound testing. Wise statisticians are quoted with the conviction: it is all about testing, testing, testing. Any system that builds on statistical reasoning (LLMs or machine learning) will behave erratically in areas with a stronger impact of, for example, statistical outliers. At both ends of the "normal distribution" of events or reasoning, the statistical models and algorithms used in AI will produce "spurious" errors or have larger error margins on topics a bit outside the 95% of usual cases.
This means testing, testing and testing again for the programmers of such AI systems before release to the public or in enterprise-specific solutions. The tendency to keep the costs of testing phases low compared to development costs bears obvious risks for the "precautionary principle" applied in the European Union. Testing is most important to check the WEIRD bias (Western, educated, industrialized, rich, democratic) of the most basic AI systems. In this sense AI development has become a sociological exercise, as developers have to deal with "selection bias" of many kinds that could have very expensive legal consequences.
(Image: Extract from Bassano, Jacopo: Abduction of Europa by Zeus, Odessa Museum treasures at exhibition in Berlin Gemäldegalerie 2025-5).
AI Workday
Our workdays have seen considerable changes throughout the last few years. The home office boom has allowed employees to work for extended hours from home. There is an abundant literature on the effects of home office work on well-being or the work-life balance. Productivity gains could be reaped by employers, and a better work-life balance was a lasting advantage for employees.
The increased use of AI in some occupations has introduced a new form of added productivity for those occupations or professions (AI as complement), whereas other occupations face a higher risk of being substituted by AI applications.
Based on time diary data, the study by Wei Jiang et al. (2025) reports that users of AI have longer work time and reduced leisure time. Competitive labor markets increase the pressure to put in even more hours of work. Nerds, just like workaholics, are likely to be drawn into excessive hours of work with increased health risks. Enterprises and consumers appear to be gaining more than the employees, who are at a higher risk of losing out on their work-life balance over time.
AI-assisted
We are used to creating and using artificial words to shorten explanations in conversations. It is easy for everyone to understand nowadays when an online search is described as "I googled something", without further concern about the sources used in answering a question. Apple just announced that searches via its Safari browser were fewer than expected recently. The reason might be that more and more people search online by using AI tools or apps that directly provide answers to their queries. ChatGPT and Perplexity are 2 AI programs that might replace the search engines we have gotten used to over the last decades. Particularly "how to ..." queries return reasonable instructions. In Perplexity you get an answer after you have specified your query a little and, therefore, the answer is more likely to be right to the point, offering text, images and sources. In ChatGPT you enter into a dialogue to further specify your query if you are unsatisfied with the first response. The judgement of whether a response is authoritative is up to you, if you want to follow up. The assistance of an AI system cares less about the source than about the speed of response it can provide. For sensitive questions or queries with less obvious or clear answers, more interaction with the tools is necessary. The pricing, and our willingness or ability to pay in cash rather than through the selling of our data for advertising, will determine whether we shall "chat-it" or "plex-it" in future.
Reinforcement Learning
Education scientists have embraced the change of focus from a front-loaded teaching format to a focus on the learning process of pupils or students. Computer scientists have adopted this strategy in shifting from mere knowledge databases and predictions of the likely next words in a sentence or paragraph to learning models. DeepSeek has surprised most large language model developers with its successful strategy of focusing on learning and reasoning. So-called reinforcement learning is key to the programming of next-generation AI models. Reasoning in most cases builds on multi-step sequences in answering a more complex question. The model then returns the answer and the steps (reasoning) applied. There is a debate whether summaries or translations of texts need the reasoning function of AI models. Most of the time reasoning might not be necessary, or even counterproductive, for example if the translation tried to correct an obviously faulty argument in the text.
Imagine also that an ordinary LLM translates a text containing fake news. A correction loop involving cross-checking with reliable external sources like an encyclopedia or Wikipedia would complicate the answering procedure for any text. However, this is a bit like how the process of reinforcement learning with human feedback (RLHF) works. Reinforcement learning applies a form of accuracy reward, which guides the learning or answering process with checks against mathematical or programming accuracy. Just think of basic logic to be respected in the answer.
Similarly, a formal accuracy control checks against mathematical models and ensures the answer returns a text with a normal sentence structure or a numbering of reasoning steps, an introduction and a concluding phrase, like we were all asked to do at school or university. The amount of correction from humans is reduced quite a lot, and the computing resources are also only a fraction of those of previous LLMs, which retrieve answers from enormous databases or gigantic data factories consuming lots of energy in the processing of requests. Remember the movie about Kasparov, the world chess champion, who got beaten by a computer from IBM that not only had a huge stock of previous games and tournaments, but could also make judgments on positions and promising strategies to pursue. Don't be surprised if a DeepSeek answer is superior to what our own mind and reasoning is capable of. Reinforcement learning is a learning tool which we may also apply, if we deem it appropriate, or just as one way of coming to an answer. (Useful reference: Sebastian Raschka, Build a Large Language Model (From Scratch), Manning.)
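As a rough illustration of such rule-based reward signals, the sketch below combines an accuracy check on a numeric answer with a simple format check; the function names, weights and patterns are assumptions for this example and not DeepSeek's or any published implementation.

```python
import re

def accuracy_reward(answer: str, expected: str) -> float:
    """Reward 1.0 if the last number in the answer matches the expected result."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", answer)
    return 1.0 if numbers and numbers[-1] == expected else 0.0

def format_reward(answer: str) -> float:
    """Small bonus if the answer shows numbered reasoning steps and a conclusion."""
    has_steps = bool(re.search(r"(?m)^\s*\d+[\.\)]\s", answer))
    has_conclusion = "therefore" in answer.lower() or "in conclusion" in answer.lower()
    return 0.5 * has_steps + 0.5 * has_conclusion

def total_reward(answer: str, expected: str) -> float:
    # Weighted sum; in RL training this signal would guide the policy updates.
    return accuracy_reward(answer, expected) + 0.2 * format_reward(answer)

sample = "1. Add 17 and 25.\n2. The sum is 42.\nTherefore, the answer is 42"
print(total_reward(sample, "42"))  # 1.2
```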
(Image: ChatGPT, 2 humanoid robots thinking and discussing how to repair a notebook sitting on a workbench).
Hands-on AI
The use of AI in translation, to streamline texts and in the preparation of communication has become a common experience. The applications in medical fields are less well known. Scans and checks for skin cancer could be a game changer for many who live far away from the nearest medical doctor. AI-assisted brain surgery is another issue, but a very specialized application. The interface of AI and robotics might be another game changer, as applications where you train a robot, for example in lifting and transporting loads, can help relieve human skeletons. Handheld devices can guide the robot, and data from sensors complement the learning of the tasks through assistance from AI. Applications are manifold and we have not even seen the most promising ones yet. The application potential in warfare is particularly troublesome, as humans do not necessarily enter into the concerns of AI-assisted weaponry. In the social sphere, trust is a crucial behavioral and ethical concern. These issues AI can only learn from us. Any attempt to do without human input and control is doomed to fail. We are not dispensable yet; we are simply moving more and more into responsible, supervisory roles.
(Image: Acatech exhibition with hands-on AI applications, Berlin in collaboration with IQZ and DTM, German museum of technology.)
Blue Sky
In the period of Romanticism, the associations with a blue sky were very different from today. Getting out into the sun was a kind of privilege for the "leisure class" of people who could afford to enjoy time outside for boating, walking or other pleasures. In our technology-driven 21st century, the associations with a blue sky are more like a technically enhanced view, through "Windows" for example, at the news and opinion platform "Bluesky".
Several scientific websites that report data on blue skies, and on air quality more generally, across the world use indicators like ozone values (O3), sun intensity, fine particles of 2.5µm and 10µm (PM2.5, PM10) and nitrogen dioxide (NO2), to name the most frequently quoted ones. Hence, just enjoying the blue sky outside isn't the same as it was before. People working outside in the sun, or at times of a blue sky but with high air pollution levels, incur severe medium-term health risks. The ozone layer at very high altitudes protects our skin and eyes against strong UV radiation, but O3 at ground level is tough on eyes and lungs.
Technology has come with many blessings, but the negative effects on a global scale are also becoming more evident. Getting used to a particular lifestyle that produces lots of emissions of aggressive fine particles will make it more difficult to simply enjoy a blue sky.
In consideration of all these background data with their regional variations, we surely need an AI system which we can ask for advice on whether we should go outside and enjoy the blue sky with or without a respiratory mask.
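A first approximation of such an advisory could even be rule-based rather than AI; in the sketch below, the threshold values are illustrative assumptions loosely inspired by WHO guideline values, not regulatory limits.

```python
# Illustrative thresholds in µg/m³; assumptions for this sketch only.
THRESHOLDS = {"pm2_5": 15, "pm10": 45, "no2": 25, "o3": 100}

def outdoor_advice(readings: dict) -> str:
    """Return simple advice based on which pollutant thresholds are exceeded."""
    exceeded = [k for k, limit in THRESHOLDS.items() if readings.get(k, 0) > limit]
    if not exceeded:
        return "Enjoy the blue sky without a mask."
    if len(exceeded) == 1:
        return f"Borderline air quality ({exceeded[0]} elevated): consider a mask."
    return f"Stay indoors or wear a respiratory mask: {', '.join(exceeded)} elevated."

print(outdoor_advice({"pm2_5": 12, "pm10": 30, "no2": 18, "o3": 80}))
print(outdoor_advice({"pm2_5": 40, "pm10": 80, "no2": 30, "o3": 120}))
```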
Romanticism has led us all the way to Californication and a dangerous enshittification of the air that surrounds us. Youth and the next generations will have very different associations with a blue sky than those we, at least, have had the chance to enjoy.
Mindmap Me
Tools like artificial intelligence allow all sorts of transformations and depictions. Photo editing tools are widespread and particularly popular among young users. My own transformative exercises, most recently with www.bairbe.me, have yielded interesting insights, well worth an intergenerational playmate. For the guys there is the www.yobrick.com version for brick gamers.
The app "Canvas" allowed me to delve deeper into my own mind by giving it instructions on how to create an image of the structure of the blog entries on this webpage. Of course, it is not (yet) a real AI-generated content map, but it is only a matter of time until such tools exist. After all, this would just be an arranged and rearranged list of contents, also using the hyperlink structure of the texts.
For the time being, I derive my own structure of the blog entries by topics, categories and tags, including the hyperlinks or internal referrals. Interlinkages are mostly stated explicitly. However, there are many implicit links, which are obvious to some, but not to others. AI systems could use occurrences of words, synonyms and antonyms. Colors, in addition to bubble sizes and (in)direct lines, may complement such mind maps. This can help to reveal another, additional layer of connections between categories or tags. (The AI-generated image shown below was created with the app "Canvas" as a first approximation and AI-augmented test version in the form of abstract images.)
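A minimal sketch of how such a map could be derived from categories, tags and entries is shown below with networkx; the category and tag names are illustrative only and not the actual structure of this blog.

```python
import networkx as nx

# Hypothetical category -> tag -> entry structure (names are illustrative only).
structure = {
    "Society": {"AI": ["Hallucinations", "AI Workday"],
                "Learning": ["Reinforcement Learning"]},
    "Technology": {"Robotics": ["Hands-on AI"],
                   "Language": ["Language Tech"]},
}

G = nx.Graph()
for category, tags in structure.items():
    for tag, entries in tags.items():
        G.add_edge(category, tag)       # general to more specific
        for entry in entries:
            G.add_edge(tag, entry)      # specific to detail (blog entry)

# Bubble size could encode how many entries hang off a node; colours could mark categories.
sizes = {node: 300 + 200 * G.degree(node) for node in G.nodes}
print(sorted(G.neighbors("AI")))   # ['AI Workday', 'Hallucinations', 'Society']
```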
Next steps on the way to understanding human intelligence and, maybe, augmenting it with a next-generation AI system would use a colored 3D version of such a mind map and the chronological evolution of the blog posts in a kind of evolutionary animation. This should allow us to go beyond the usual psychological classification of fluid and crystallized intelligence. We might come to grips with what it means to be "in a Paris state of mind" or when hallucinations become overwhelming.

Hallucinations
In the 21st century hallucinations have become a daily experience. The origins of the word can be traced back at least to the Latin verb "alucinor", best translated as "to hallucinate". As a verb you can conjugate it, meaning that I can do it, you can do it, s/he can do it, and we may do it in groups. Roman emperors did it, American presidents do it and, of course, AI does it. Hence, it is a great subject to study.
In "Nature" 2025 we find ways to limit hallucinations of AI systems. The strategy consists mainly of repeated queries of the same type, but from different angles. It is a bit like cubism applied to informatics. On "github.com" we can follow the rankings of AI models using LLMs based on the "hallucination-leaderboard" developed by Vectara. On "huggingface.com" you can test the Hughes Hallucination Evaluation Model. For example, it is possible to run a test on your own small text documents (just like any blog entry on this webpage) and see what the AI systems will do with them in an attempt to summarize your ideas. According to the "hallucination-leaderboard" we are confronted with a 1.3%-4% hallucination rate for the top 25 LLMs as AI systems. In text-based systems the quantity of "errors" is only a first indicator. The seriousness of an omission, the addition of wrong information or an erroneous judgment will be left to the reader or analyst to uncover.
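A minimal sketch of the repeated-query strategy mentioned above could look like the following; ask_model is a placeholder for whichever chatbot or API is being probed, the paraphrases are supplied by the user, and the 0.6 threshold in the usage note is an arbitrary choice.

```python
from difflib import SequenceMatcher
from itertools import combinations

def ask_model(prompt: str) -> str:
    """Placeholder for a call to whichever LLM you are testing."""
    raise NotImplementedError

def consistency_score(question: str, paraphrases: list[str]) -> float:
    """Ask the same question from different angles and measure pairwise agreement."""
    answers = [ask_model(p) for p in [question, *paraphrases]]
    pairs = list(combinations(answers, 2))
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(sims) / len(sims)  # low agreement hints at possible hallucination

# Usage (once ask_model is wired to a real API):
# score = consistency_score(
#     "What does the AI reader say about reinforcement learning?",
#     ["Summarize the reader's view on reinforcement learning.",
#      "According to the reader, why is reinforcement learning more efficient?"])
# print("flag for review" if score < 0.6 else "answers look consistent")
```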
There is now a lot to do to test various AI systems on their "trustworthiness" in summarizing my own work. My very own daily hallucinations have become a large database as a test case for the capacity of LLMs to make sense of them.
Based on the series of past blog entries I shall test the capacity of AI to predict the n+1 blog entry. It would be great to know today what I am going to write about tomorrow, etc. Thanks to AI I shall (finally) have a sort of intellectual life after death (not sure whether I should want this). Enough of hallucinations, and on hallucinations, for now; back to serious readings or fictionalized science. (Image: extract from Delphine Diallo, Kush, 2024 at Hangar Gallery Brussels).
Virtual author
« La Mort de l'auteur ». In a radical sense Roland Barthes was the first to proclaim the death of the author as the sole master or mastermind of a text or speech. In fact there are many more people on whose direct or indirect contributions a text is based. However, biographical accounts of an author can only enlighten some (minor) aspects of the influences on the author, the final version of a text (l'écriture) and the reader (lecteur). In « Le degré zéro » the analysis of the different styles of Flaubert and Proust is extolled (p. 131-139). Flaubert is characterized as the author with infinite corrections of the same texts and sources. It appears today as an endless loop of an algorithm whose stopping rule is not properly defined or implemented. Much in line with the « tabula gratulatoria » of Barthes (p. 279 of Fragments…, see image below), some AI systems return fake versions of a bibliography that some readers will rely on.
In the 21st century l'écriture has become almost nonexistent without the technical support of machines, but most of all artificial intelligence. The author is dead, long live the virtual author. The assistance of spelling, grammar and style editing from software programs has widened the spectrum of coauthoring. Editors take more influence by pushing the marketing potential of authors and their writings. Based on previous manuscripts and publications it is possible to produce hallucinations of an author whereby only the author might be able to identify the virtual authorship. At best AI generates first drafts, but, similar to the linguist of Barthes, AI is likely to become another brother or sister of l'écrivain (p. 139).
The thrust of Barthes is to highlight that there is more to a text than just the version at hand. In fact there are texts within a text, or multiple versions or layers of a text. « L'enjeu de l'analyse structurale n'est pas la vérité du texte mais son pluriel » (what is at stake in structural analysis is not the truth of the text but its plurality) (1972, Par où commencer, p. 148). In conclusion, almost 50 years after the death of the author we currently witness the miraculous rebirth of the virtual author as the original deus ex machina which we always feared. Don't worry, it's just another sibling of the original authors.
Game Tech
Gaming moved digital and online a long time ago. Networked gaming, and following the best gamers online on video platforms like Twitch, has captured a lot of attention from younger generations. With a real trend towards gamification for industry and even public services, the digital gaming sector is moving from backstage to the front end of companies and services. Public administration as a game. Enjoy the use of a public service through a game-like experience. Walk around in the metaverse world and get your admin work done. I would love to have such an experience. There are so many applications for gamification that the bottleneck lies in the human resources to program all these applications. Coding the digital and virtual worlds to get real stuff done is just around the corner. The SCCON 2024 in Berlin showed these technologies next to each other. In 2025 we might see integrated prototypes. I have a digital dream; others call it a vision for public services in the 21st century.
Language Tech
Inclusive societies can build on many tools, including AI, to lower language barriers. It is not only a question of translation; many other forms of language come to mind. Sign language or easy language are necessary to facilitate broader access to public services. Reading out texts on webpages or Braille translation for the blind, who interact through keyboards, are additional forms that are available in digital communication as well. The audio description of videos and images is well advanced (reverse-engineered through AI) and allows people with limited vision to participate fully in society. Audio messaging and transcription are used by almost everyone by now. Public services will open up to these channels of communication as well. The technology around languages is much more than just translation and AI-assisted learning of languages (talkpal for example). The new lingua franca is language technology, because it enables us to speak many languages at the same time, even dialects or lost languages, and in many voices. (Image: Extract of Josef Scharl, The Newspaper Reader, 1935, Neue Nationalgalerie Berlin)

Justice Tech
The digital or hybrid courtroom has become more the rule than the exception in Germany. Video conference equipment reduces costs and can speed up judicial processes. Even the production of transcripts from the proceedings and the circulation of documents and certificates, enhanced by AI, will change the speed at which justice is exercised. Digital tools and technology have found their way into the courtroom, and younger lawyers and judges as well as the accused or defendants will value the simplified procedures. We shall have to wait a bit until this is the standard in all legal domains. In Germany, 2026 is the deadline to install the adequate equipment, and tech companies and consulting firms like Arktis are well prepared to support this overdue process. In terms of an economic theory of the judicial process, a judgement that is delivered years later has to apply a discount rate at least equal to annual inflation. For moral issues another discount rate might apply. Excessive delays of judgments may cause additional suffering on the side of victims. Justice Tech, therefore, has a role to play in the practical and theoretical debate about "doing justice". (Image: SCCON Berlin 2024-10)

AI Nobel
Artificial intelligence has made it into the ranks of the Nobel prizes in 2024. As AI is being talked about everywhere by now, the Nobel Committee deemed it expedient to award Hopfield and Hinton the Nobel Prize not in informatics, which does not exist (yet), but in physics. Neural networks' focus on the links between bits of information, rather than just the sheer number of data points, mimics the functioning of our brains. The most remarkable statement by Hinton is probably the one about the dangerous potential of this invention as well. He has already issued a disclaimer on the potential of AI in warfare or other ways to impinge on our human idea of freedoms. The discovery of the applications, good and bad, of these AI-based systems has just begun. The comparison with Nobel's original discovery and mass production of explosives, from which the prize draws its name, has hardly been more apt than recently. (Image: Stockholm City Library)

Artists Robots
We know that the scientific and artistic dealings with robots have a long tradition. Whereas the art of impressionism took up the challenge of painting the world outside the studio and embellished technological achievements like bridges and trains post hoc, modern extensions of science fiction into the world of robotics have extrapolated from the present. Artists became forerunners of technical evolution and thereby contributed to the acceptance of artificial intelligence among broader audiences. In 2018 the "Grand Palais" in Paris hosted an exhibition on "Artists & Robots" (PDF booklet). Jérôme Neuters contributed an essay to the catalog of the exhibition on "L'imagination artificielle", which identified an additional role for artists in combination with AI. One of the early adopters of the new possibilities of robots assisting artists, Nicolas Schoeffer, is quoted as stating: "l'artiste ne crée plus une oeuvre, il crée la création" (the artist no longer creates a work, he creates creation). Like the invention of new painting techniques, or of light and perspective in painting, robots allow a new way of representing emotions or space. (Image: Manfred Mohr, 1974 video Cubic Limit, Artists & Robots p. 92-93)

Broken Promises
In a library catalogue, the entry « broken promises » returns more than 3,000 titles in which the phrase has been used. « Promises kept » is almost as popular. A rapid inspection of titles reveals that the former suggests more factual analyses, whereas the latter is frequently used as an imperative in combination with "should be kept". The book by Fritz Bartel, "The Triumph of Broken Promises ..." (2022), demonstrates the importance of the concept of broken promises in the social sciences. The rivalry between socialism, capitalism and the rise of neoliberalism is strongly influenced by the way they handle the breaking of promises made to their respective societies. The promise of increasing wealth and wellbeing has been part of all political regimes. To keep these promises is a completely different story. Especially since the first and second oil crises, and many other kinds of crises, it has become much harder to keep these promises. Working hours, retirement ages or minimum wages are all at risk of no longer living up to the promises made in earlier periods. This has put welfare states under such pressure that millions of voters perceive politics as a "game" of broken promises. Socialist political regimes like Russia are ready to use physical violence to silence people who remind leaders of these broken promises. In democracies the ballot box is often used to sanction governments that do not live up to the expectations of previous promises. A lot is about public infrastructure which is failing people. Migration, education, social and labor reforms are at the top of the political agenda when it comes to broken promises. The elections to the European Parliament gave many a chance to express their discontent about various broken promises. Maybe democracy is better at providing ways of letting off steam early and protracted protests rather than the Russian way of suppressing any critical analysis, let alone opposition movements. Just like the move from industrial production to services as production models, with AI we are likely to see similar problems and probably also broken promises. The challenge is huge, and promises should be made with an eye to which promises can be kept.
