AI Data Input

If you have ever wondered where the information from AI and AI chatbots comes from, you will not be surprised that this webpage, schoemann.org, is regularly harvested for such purposes. The number of crawlers that do so is quite large. Tracing what exactly they harvest from your website is a tricky issue. At least a basic awareness of how the internet has been transformed in the last few years emerges from comparing unique visits, many via search engines like Google Search or others, with the number of contacts by AI-associated crawlers (see slide from my own webpage below).
During the last month, up to 2026-4-27, there were about 75,000 contacts, compared to 93,000 during the previous month.
At first sight, AI chatbots have largely outnumbered the “personal visits” to my webpage (see evaluate web analytics). On the other hand, I have no information on how many visits are, at least potentially, redirected to my content via hints from AI chatbots.
In terms of “traffic” for a webpage, the question of how AI-driven or AI-assisted search operates with other persons’ contributions will be the challenge of the coming years. If AI chatbots had to pay 10 cents per visit, I would earn a comfortable income every month from this use of my content. The issue of AI paying for access to reliable, high-quality content has to be dealt with sooner rather than later. You may prompt a chatbot on this issue.
Meanwhile: my new book on AI is out now (2026-4-28):
“AI and Social Science: Potentials versus Limitations” by Dr. Klaus Schoemann, online reading and free download (here) before a paywall is implemented later on.

Agentic AI Gardening

The use of AI is probably most popular for professional purposes, as efficiency and economic productivity are major concerns in those fields of application. A whole other set of applications is rapidly developing as well: agentic AI in hobbies like gardening. The use of IT in gardening was previously reserved for landscape designers and perhaps urban and rural planners. Cheap access to AI, on a trial basis or within your browser, has widened access to computer and AI assistance for gardening purposes. Colorful designs and species selections to enrich biodiversity are widely available now. The next step is, of course, agentic use of AI. If we have a sufficient number of sensors installed (and use weather forecast data as well), the data from the garden can easily be analyzed by AI, and the mower or water pump can get going to do the job for us. This is not rocket science but only sensors, data and a couple of “if …, then” commands. The kind of pleasure will change accordingly, shifting from the watering of plants to the satisfaction of successful programming. No value judgement here. The latter option has, however, considerable business potential on an almost industrial or agro-economic scale.
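The “if …, then” logic behind such agentic gardening can be sketched in a few lines. This is a minimal illustration with invented sensor readings and thresholds, not a real sensor or smart-home API:

```python
# Minimal sketch of an agentic garden rule set.
# The sensor values and thresholds are hypothetical examples.

def garden_actions(soil_moisture_pct, rain_forecast_pct, grass_height_cm):
    """Return the list of devices to activate, given current readings."""
    actions = []
    # Water only if the soil is dry AND little rain is expected soon.
    if soil_moisture_pct < 30 and rain_forecast_pct < 40:
        actions.append("water_pump")
    # Mow once the grass exceeds the target height.
    if grass_height_cm > 6:
        actions.append("mower")
    return actions

# Dry soil, no rain forecast, tall grass: both devices get going.
print(garden_actions(soil_moisture_pct=25, rain_forecast_pct=10, grass_height_cm=7))
```

In a real installation the readings would come from the sensors and a weather API, and the returned actions would be sent to the devices; the decision logic itself stays this simple.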

AI Motion Sculpture

At the Festival Noûs in Paris, the collaboration of AI with artists was a major event. Based on the huge collections of the BNF in the form of databases, it is possible to join the three worlds of library conservation, technological innovation like AI, and the imagery of artists. Through the preparation of the exhibits and the parallel documentation of their genesis, the creative potential and process become more evident and understandable to broader audiences. The exhibit by Tobias Gremmler, Anatomy of Motion (2026, see below), captures the motion of a dancing body in a sculpture based on a 3D print of a series of images blended into each other. With a high-speed camera, previously known from sports photography, the dynamics of a motion become a tangible sculpture. The intriguing new form is in fact a motion that has been captured, or has crystallized or materialized, in a permanent fashion. New technologies and materials enter into art as they offer new means of expression as well. The collections of art and documentation centers shall enter into new phases as well. (Image: Tobias Gremmler, Anatomy of Motion (2026) at BNF 2026-4).

AI and Social sycophancy

The study by Myra Chen et al. (2026) on the practical use of various AI tools demonstrates the risks of social sycophancy in these models. Maybe a large part of the initial success of AI models is exactly due to sycophancy, i.e. the people-pleasing, flattering and affirmative bias of these models. If users of AI receive predominantly confirmation and reassurance of their intended behavior, they will be less inclined to accept outright criticism in normal interactions with real people. The more flattering responses you receive from some people, the more likely they have used AI to prepare their response. The rigorous psychological tests applied in the paper can in fact explain a large part of why we are likely to become addicted to the ever-flattering responses of the current versions of AI. Only scientists will consciously seek disapproval of their beliefs and keep challenging the AI-provided answers. Even using different AI models did not change the affirmation bias. Maybe programming a “grumpy old professor AI” as an alternative could do the trick. I shall have to think seriously about this as an alternative to current models. The critical AI is most likely not a viable business opportunity, but it might outlive many sycophantic AI unicorns. (Image: waistcoat, 18th century, Paris exhibit Musée de la mode 2026).

Master AI

In 2025 the exhibition “Cartooning for Peace” at the BNF in Paris already included an exhibit by Stellina Chen from Taiwan, which summarized the evolution and projected the consequences of an all-encompassing AI revolution (image below taken at the 2025 BNF exhibition). Currently we exercise ourselves in using various forms of AI or learn how to program them ourselves. Our aim is to master the new technology so that it becomes a helpful tool. However, there are already many instances where it is no longer us mastering AI; AI has turned the tables and has started to master us. The applications of AI have entered our work tasks and frequently succeed in improving our routines and processes.
In private life a similar revolution is happening, as AI offers advice which is hard not to follow and very convincing most of the time. Since getting involved in a conversation with AI tests your logic and debating competences, we find ourselves more and more in situations where AI is telling us what to do in the most convincing of manners. After centuries of humanity’s struggle for freedom from oppression and the freedom to do what we want ourselves, we seem ready to hand over control to AI. We are just like toddlers in this respect, willing or obliged to follow our master.

Polychrome tree

Maybe it is just a matter of taste whether you prefer a tree polychrome, i.e. in full colors in spring, or more monochrome in winter, almost black against a white background. Others might argue that it is not a single version or moment of the tree’s growth cycle that matters, but the steady change. In any case, the same tree never looks the same before and after rain. The only certainty is “the times they are a-changin’”, and so are our preferences. They change from time to time as well. Few persons keep the same preferences over the life course, and business, marketing and societal changes drive such shifts. Often we hardly notice them. Trees are a perfect point of reference to check your personal preferences. Our smartphones track our preferences just by analyzing the frequencies of photos taken over months or years. They have a very differentiated, polychrome view of us. Taking deliberately ugly pictures might confuse the AI-assisted exploitation of our polychrome or monochrome preferences.

Bob the AI-enhanced builder

Most kids today and GenZ youth have come across the TV series “Bob the Builder”. Baby-boomer parents have worried about the work ethos that might be the hidden agenda of the videos. In 2026 we can now draft a new episode called “Bob the AI-builder”. Many episodes could be rewritten once Bob and his team get access to and training with AI toolboxes. The study published by ActivTrak (2026-3-11) reports that companies use on average 7+ different AI tools, up from 2 in 2023. This hints that complexity at work is increasing, as each tool has to be managed and the boundaries of its use need to be respected. As most search engines offer an AI shortcut to search, it is not surprising that 80% of the workforce now use some form of AI in 2026. Productivity increases in quantitative terms, as more output can be achieved in the same time or in slightly shorter work days. However, workload is shifted even more to weekends now.
The upcoming challenge of AI tools is reduced focus time: AI users’ focus time dropped 9% compared to non-users. For Bob the AI-enhanced builder this means “AI is being used as an additional productivity layer, not a substitute for existing work”. The overall workload is not reduced by AI. The intensity of work increased between 2023 and 2025.
There is still a puzzle in the data. Multitasking (+12%) and collaboration (+34%) both increased, but the duration of an average focused session and focus efficiency dropped. The challenges for employees increase. Handling simultaneous processes and keeping an open mind toward collaboration are key competences for Bob the AI-enhanced builder.
(Image: LEGO-shop in Paris 2026-2)

Retrieval-augmented AI

As scientists, it is in our DNA to cite other scholars’ work with precision. As a university professor, checking the quality, kind and accuracy of citations is a regular part of your job, also as a supervisor of junior scientists. In 2026, up-to-date AI (Asai et al. 2026, OpenScholar AI) makes it possible not only to summarise large bodies of scientific literature, but also to cite references and even quotes from the paper(s). Literature reviews used to take months to compile. AI can speed up the process enormously. The citations can be ordered following one’s own logic or an AI-suggested logic.
It has become much harder to evaluate the degree of innovation of a candidate for a scientific degree. Tools like retrieval-augmented language models enhance the scientific potential of generative AI, since they extract shorter or longer citations directly from the original source, right next to the original, based on a simple query of author and approximate subject (see screenshot below of one of my own previous publications).
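The retrieval step behind such tools can be illustrated in miniature: rank candidate passages by their overlap with a query of author plus approximate subject, and return the best match. This toy sketch uses invented passages and simple word overlap instead of the learned retrievers real systems use:

```python
# Toy illustration of the retrieval step in retrieval-augmented generation:
# rank passages by word overlap with the query. The passages are invented
# examples, not real OpenScholar data, and real systems use learned
# embeddings rather than word overlap.

def score(query, passage):
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q)  # fraction of query words found in the passage

def retrieve(query, passages, top_k=1):
    """Return the top_k passages best matching the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_k]

passages = [
    "Schoemann (2018) on labour market transitions and training",
    "A study of plant growth under artificial light",
    "Schoemann on lifelong learning and employment policy",
]
print(retrieve("Schoemann labour market", passages))
```

A generative model then writes its summary around the retrieved passage, which is why the citation can appear right next to the original wording.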
The good news is: (1) referrals to previous research and citations should become faster with improved tools for verification; (2) you will find papers written by yourself that you no longer have in your own archive.
The bad news is: (1) self-citation by researchers might become easier, although this problem is conditional on a researcher’s seniority; (2) so far, language models prioritise specific languages (although not necessarily so) and distinguish names with “foreign” characters, e.g. “ö, ä, é”, without double-checking their “close neighbours” such as “o, oe, a, ae, ue, e, ê, è”, leading to a “character-based normalisation bias”.
It is, of course, rather easy to point out deficiencies of the search, sorting and inclusion algorithms if you already know the complete picture of a data set.
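The “close neighbours” problem can be made concrete with standard Unicode normalisation: decomposing an accented character and dropping the accent maps “ö” to “o”, but never to the German transliteration “oe”. A system that only strips accents therefore still fails to match the variants of a name (“Schömann” is used here purely as an illustration):

```python
import unicodedata

def strip_accents(name):
    """Decompose accented characters (NFKD) and drop the combining marks."""
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(strip_accents("Schömann"))   # "Schomann": the umlaut is simply dropped
# The German transliteration "Schoemann" is NOT produced, so accent
# stripping alone treats "Schömann", "Schomann" and "Schoemann" as
# different authors - exactly the normalisation bias described above.
print(strip_accents("Schömann") == "Schoemann")  # False
```

Handling this correctly requires language-specific transliteration tables (ö→oe, ü→ue, …) on top of Unicode normalisation, which most generic pipelines do not apply.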

Future Conflicts

Since 2014-2-27 Russia has occupied the Crimean peninsula. The invasion started with an undercover mission of unmarked soldiers, taking full control of Crimea about 3 weeks later. Russia did not officially declare a war, although the intentions were identical to those of a land-grabbing war. The Western world did not react much to this violation of international law. Apparently, this contributed to the next cynical “special operation” by the Russian army: the full invasion of Ukraine on 2022-2-24 in a failed “Blitzkrieg”, a rapid invasion which attempted and failed to annex the whole of Ukraine. According to Lissner & Warden (2026), the Russian invasion of Ukraine bears 4 lessons for future conflicts: (1) the risk of using nuclear weapons is real; (2) in addition to nuclear options, prolonged and very destructive conventional wars remain an option; (3) escalation thresholds emerge and evolve over the duration of the conflict; (4) allies and partners in war keep adjusting their risk tolerance as well as their escalation options. The authors argue from a US perspective and add a practical comment: the USA cannot go it alone, but should coordinate closely with allies and partners well before another conflict arises. Multilateralism seems a valid option, even more so as we move into a multipolar power play on the global scale propelled by AI.
(Image: Musée d’Orsay, Paris – Archer drawing a bow)

Future of work

The beginning of the third millennium has brought several fundamental changes to work and employment. What was previously thought of as utopian in the realm of work has become a normal feature of it. Just as in the historically grounded utopian perspective described by Bernard Gazier in “Tous sublimes” (2003), we have a growing group of employees and self-employed persons who enjoy privileged positions on the labor market, with sufficiently high salaries and access to labor market mobility at their own discretion. To the examples in Gazier’s utopian perspective, the 2020s added permission to work remotely from anywhere and the use of AI-assisted technology and robotics. A previously utopian view of the future of work has become a reality for many more people nowadays. The utopian element is no longer what this world of work might look like, but how many people will enjoy the benefits of technological progress. With a substantial increase in the efficiency and productivity of work, the distribution and sharing of its fruits shall become even more important. We have entered a new phase of the “brave new world” of work as of 2025. (Image: Graffiti Berlin 2025-12).

Robotics Hype 2026

Towards the end of 2025, it is common practice to look back on the last 12 months to summarize a year and to contribute to the “collective memory” of the year. From a “society and technology” perspective we shall not be surprised if such summaries will be full of images and praise of AI and robotics. However, large parts of the innovations that shall be declared to have marked 2025 were already around 10 years ago. It is just the timing for the new momentum and the creation of a hype around these technologies that is really remarkable (compare WSJ 2025-11-24 p 1-2 by Konrad Putzier).
It is true, playing around with robotics used to be reserved for universities, research institutes and some big players in industry. The public and financial markets showed little interest in these “nerdy” fields of application, although we were hardly able to compete with our chess computers, and Watson solved math problems for us, including the steps for us to follow. Video, image and textual support was already provided by specialized applications at high levels and in multilingual versions. By 2025 these techniques had been enhanced with machine learning and neural network programming, reaching higher speeds and being able to use ever larger data sets as input.
But there are areas where the hype is coming to an end. How about all the augmented reality (AR) and virtual reality (VR) applications? Many have ceased to exist. Have you visited or invested in “Second Life” platforms? Opened a shop in the VR world? Bitcoin lost 7% of its value between 2025-1-1 and 2025-11-24, and it still suffers from high volatility rather than an uninterrupted rise.
War has fuelled the rise of shares in 2025, and “dual-use” technology benefits as well. AI has been driven by, and drives, both trends.
In sum, it is much less the technological innovations in 2025 that are astonishing, but the political economy of how to orchestrate a sensational hype around the technologies.
(Image Hannover Industry Fair 2016-3-14).

EU Digital Sovereignty

If we search for digital solutions, we encounter a whole lot of American and Chinese products, but very few European companies that are able or willing to compete. Hardware mainly comes from China, software from the US, and that was the case even before AI started working in the background. If we add Russian interference aimed at destabilizing our digital infrastructure to the scenario, we are not really fit for the challenges of the 21st century. The very definition of a country or political union includes the affirmation of and competence to assure its sovereignty, particularly in cases of territorial conflicts with neighboring countries. My health or mobility data are a rather private affair; however, our state governments in EU-Europe have done little to ensure our data integrity. Businesses are also at a loss if they do not spend heavily on data security themselves, usually relying on external cooperation.

The EU digital sovereignty summit took place in Berlin on the EUREF campus in 2025. It can only constitute a beginning for intensified cooperation in this long-overlooked policy area. It will be tough to catch up where production has been abandoned for decades.

From AI to xAI

As humans, we like the feeling of being in control of things. This applies even to immaterial things like religious beliefs. Generative AI has created problems with its hidden structures and the lack of transparency in how it applies algorithms (and combinations of algorithms) to underlying databases of knowledge and information. xAI, which stands for explainable artificial intelligence, can address some of the concerns about the lack of transparency and explanation in responses from AI systems. Many users want to know in advance about the consequences of using specific words or notions in an instruction to AI. The interpretation of each single word by xAI can inform us about the precision of interpretation (cheap versus cheapest, for example) or highlight the sensitivity to gender-neutral language, or its absence, in its guidelines. Additionally, ex post, xAI could indicate alternative notions for a prompt and, briefly, how these would affect results.
Yes, there is a trade-off between brevity of answers and room for explanations. As in psychology, there is some value in a “thinking aloud” procedure for respondents, in order to better understand the (implicit) reasoning behind a reply. xAI takes us a step further in this direction of asking AI to think aloud, or more explicitly, in a humanly compatible way of logic and broader reasoning.
Put AI on the psychotherapist’s bench and xAI will be to the advantage of many more humans again. Humans just don’t like black box systems that lack the necessary as well as sufficient transparency. (Image on the right: Patrick Jouin, chaise solide C2, MAD digital humanism).

Deus ex machina

The term “deus ex machina” used to be applied mostly in its figurative meaning. With the rise of digital tools like chatbots, facilitated and enhanced through AI, God is speaking to us not only in multiple languages, but also from our pockets through our smartphones and headsets. This is a rather recent form of “deus ex machina”, which we did not expect a few years ago. The bible as e-book or pdf file has been around for some decades, but only recently can we enter conversations with God through chatbots, as another version of “deus ex machina”, about almost everything (and pay for it via digital credit card). Programming such an AI tool is easily achieved. AI will prepare a weekly or daily sermon or prayer for you, following your predilection for your favourite quotes from the bible. An interesting twist to the programming is to use authorized as well as unauthorized translations of the bible from across several centuries.
Another interesting enlargement of the input data base is the inclusion of interpretations and discussions not only within your own religious community, but beyond. Maybe the discussion of several different religious chatbots with each other could prevent aggressions due to differences in basic beliefs. These “dei ex machina” might further our understanding of what makes us humans different from machines and machine-based solutions of human conflicts.
As genetic clones of ourselves have already become technically more feasible, our digital alter egos (the comprehensive collection of traces on the internet and digital images, plus social scoring) help to empower those “dei ex machina”.
This kind of “Brave New World” asks us to be rather brave ourselves.
(Image: interior St Denis Basilique Cathedral Paris 2024)

Chatbot Me We

In order to dig deeper into the functioning of AI, I deemed it expedient to construct, as an example, a simple chatbot on a limited knowledge base from my own writings on AI (link to the reader in a previous blog entry here).
A toolbox from Google offers powerful assistance in such an endeavour. The outcome uses only my input text and no other sources. It is dynamic in the sense that it interprets questions and searches only within the text file provided. The answers are edited with an LLM (large language model) and come as flawless English texts. You can try it here, using “catchat” as the magic formula; a Google account is required so far.
With a bit of programming knowledge (HTML, Python, Java) and related learning sites, it is also feasible to come up with a “static” chatbot hosted at a free-of-charge provider. For learning purposes this step-by-step building and coding of a chatbot is helpful. The outcome is rather limited, or requires a lot of time to increase the scope of Q&A interactions and to move from static (predefined Q&As) to dynamic ones.
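A “static” chatbot in this sense, predefined Q&As with no language model behind them, reduces to a lookup table plus a crude fallback. This sketch uses placeholder questions and answers, not the content of the chatbot linked below:

```python
# Minimal static chatbot: a fixed question -> answer table with a
# simple keyword fallback. Placeholder content for illustration only.

QA = {
    "what is this site about": "Blog entries on AI from a social science perspective.",
    "who is the author": "Dr. Klaus Schoemann.",
}

def answer(question):
    key = question.lower().strip(" ?")
    if key in QA:
        return QA[key]
    # Fallback: return the first answer whose question shares a keyword.
    for q, a in QA.items():
        if set(key.split()) & set(q.split()):
            return a
    return "Sorry, this static bot has no answer to that yet."

print(answer("Who is the author?"))
```

The limitation described above is visible immediately: every variation a user might type needs either its own table entry or a smarter matching rule, which is exactly the gap an LLM-backed dynamic chatbot fills.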
Full control of answers, the exclusion of any hallucinations, and high-speed replies come at a cost. Take a look here. It is a very basic version so far, just to convey the idea. Full web address:
https://schoemannchatbot.eu.pythonanywhere.com/

Chatbot Me

Chatbots are helpful for querying larger data sets like the blog entries here. So here is a first try: a chatbot, created with ChatGPT, to query all entries on AI, which uses and references its sources from www.schoemann.org/tag/ai and the AI reader in pdf format.
Please send me an email if the hallucinations of this Chatbot 1.0 on AI from a social science perspective produce strange results. I’ll get back to you. Please use it at your own risk, as I cannot guarantee all answers. The usual disclaimer applies here.

ChatGPT proposed the following set of questions and answers on the blog as an entry point into the chat. Example Q&A with the chatbot:

Q: What are the social science concerns with AI?
A: Bias in results, job shifts, democracy risks, privacy, and new inequalities.

Q: What does the text say about reinforcement learning?
A: It’s seen as the next step for AI: focusing on learning and reasoning, not just predicting text. It also uses fewer resources.

Q: How are robots described in the document?
A: Robots are mostly assistants. They can follow people or carry small items, but more complex tasks need sensors and AI training.

Q: What about biased results?
A: Studies can be misleading if control groups are flawed. AI faces the same challenge — social scientists warn: “handle with care”.

Q: What is Schoemann’s blog view on AI?
A: He links AI to energy use, fairness, and its role in the “all-electric society” — stressing efficiency and responsibility.

More on the chatbot (in its testing phase), and the link to the coding help received from ChatGPT on this mini test project:
https://chatgpt.com/share/68c1d160-0cc0-8003-bf04-991b9e7c3b24

 

AI Podcasting Me

Content producers have lots of tools at their disposal to get their content across to very different audiences. For some time the traditional media of newspapers, radio and TV were the prime outlets for content distribution. Social media have changed this, creating many more senders of content than before.
In the 21st century, AI makes it possible to automate media production. In a trial run I used Google’s NotebookLM to generate 3 podcasts based on my own writings on AI over more than a year by now. The result is available, and by using artificial voices it is possible to broadcast yourself without revealing your own personal voice. I am not done with the evaluation of the outcome(s) yet, but the first impression is that it is another interesting way to spread content.
More tests are necessary to check for hallucinations as well.
Here are the links to my virtual podcasts:
AI, intimacy and insecurity

AI, Society and the Human Spirit

AI and the Human Mosaic: Navigating Our Interconnected Future

Video Documentary by AI

Based on my own blog on this webpage, “schoemann.org”, Google NotebookLM creates a video of about 7 minutes. Using Microsoft Clipchamp, automatic subtitles with a slightly different storyline are produced from the video data. In the end, the blog entries are remodelled into something like a lecture on “AI in a wider social context” (see and play below). No voice-over so far; read it yourselves. A podcast format is another option.
It feels like walking across landscapes in my own mind. Content creators of today or the past never imagined the impact they might have through the powerful tools of AI. The only caveat: jokes I incorporated into the texts cannot really be handled by AI tools unless they are explicitly designated as such. These AI tools take me much more seriously than I do myself. This is serious.

Mind Map Me

AI tools are great at assisting learners in the task of bringing more structure to larger documents or books. It is up to teachers or lecturers to use the tools themselves to pre-structure content they want other persons to learn. Mind maps are useful to summarise larger content: they offer a tree-like structure to a text, moving from the general to more specific content and then into details, while at the same time not losing sight of the overall structure of the content. The basics can be provided by Google’s NotebookLM, and you may rework this basic structure yourself, linking the mind map to the detailed content. Learning may start with a comprehensive mind map at the beginning before moving on to details. Alternative versions of a mind map are equally feasible, to come up with new combinations of subjects. This can be done using the tags of the blog entries in addition to the categories and fast search keywords.
It is a fascinating way to mind map yourself based on longer texts written by yourself. This clarifies a bit what potential readers or learners are likely to retain from reading your document(s) or blog entries.
It’s worth trying, as others will most likely use these tools from now on anyway, whether we like it or not.
Image: Google NotebookLM Mind Map of my AI blog entries 2025-9-9.

Own AI related entries

The digital drafting of blog posts facilitates the publishing of volumes based on a selection of entries. Pdf files or EPUB formats are quite common. The choice of a flipbook is more suitable for the larger screens of tablets or notebooks. It also serves as a rapid stock-taking of entries and missing topics to cover a broader field. In some fields AI is not yet popular, for example in agentic home applications, whereas in other fields, such as text, translation or image production, AI has grown into a threat to professional domains. Even programming languages are a major field of application, with substantial negative employment effects.
Hence, enjoy the latest comments on AI towards the end of my latest Flipbook.
Just click here.

ChatGPT provides a summary of my AI-reader as follows:

From the uploaded text AI from a Social Science Perspective (Dr. Klaus Schoemann), social science concerns with AI focus on potentials and limitations of how AI interacts with society:

  • Labour and employment: AI impacts jobs, but adjustment processes are slow. For example, there’s a shortage of engineers and care workers, while robots are still limited to simple assistant tasks. This raises concerns about who benefits and who is left behind.
  • Bias and fairness: The book highlights issues of biased results in research and decision-making. When AI systems rely on incomplete or selective samples, results can be misleading — a challenge also familiar to social scientists.
  • Democracy and accountability: Journalists and scientists are framed as part of a “checks and balances” system. AI adds complexity here, as the fast spread of information without accountability can weaken democratic processes.
  • Human–machine relations: Topics like intimacy, housing, and memory are re-thought in the context of AI and digitalization, raising questions about privacy, social norms, and cultural heritage.
  • War and security: AI-driven technologies like drones change the nature of warfare, raising ethical and political questions.

On his blog, Schoemann also stresses the energy and resource costs of large AI models and how reinforcement learning offers a more efficient alternative. He warns that AI should be understood not just technically, but as part of an “all-electric society” where social sciences help assess risks, inequalities, and long-term consequences.

👉 In short: AI raises social science concerns about equity, bias, democracy, labour, cultural shifts, and global risks.

Marc Aurel AI

In the 21st century it is possible to chat with Marcus Aurelius. Part of the exhibition at the Simeonstift is a chatbot you may freely consult, asking questions of or about Marc Aurel. Based on your questions, the animated screen image of Marc Aurel replies, drawing on his own writings, like the Meditations, and (probably) other secondary literature on Marc Aurel. Questions about feminism or slavery are answered based on the original texts. Some of these answers appeared rather modern, like the basic equality of all, including women and slaves. The Meditations are an idealistic vision of mankind in the stoic tradition. In practice, such ideals have proven very ambitious given the many and growing temptations in the day-to-day lives of ordinary people, including their political, religious, business and military leaders. The AI is confronted with the issue of giving answers to ethical questions which refer to the time of the author, but which cannot all be mapped onto today’s ethical standards and basic human rights. Reading the original source, therefore, remains the preferred choice.

AI earnings effects

In the first few years of wider adoption of AI in an economy, there is the expectation that this might lead to substantial productivity gains for the enterprises that use it, as well as for employees who are early adopters of the relatively new technology. The study by the Stanford Digital Economy Lab by Chen, Chandar and Brynjolfsson (2025) showed that so far there are no significant earnings effects for employees. Based on millions of recent payroll records from US companies, productivity gains have not trickled through to the paycheck in terms of monthly salaries. Staff participation in a company’s overall turnover or profit might change this as time evolves. For civil servants, the adoption of AI might mean increases in cases dealt with, as some tasks can be executed faster than before with the use of AI.
The evidence points to employment effects of AI rather than earnings effects so far. One hypothesis remains unresolved: senior employees using AI might employ fewer junior workers in entry positions, if these “hallucinating” young professionals can be replaced by hallucinating AI. In science, hallucination has sometimes led to disruptive new approaches and findings. It is a tough choice to pick the young entrants with high productivity potential and, eventually, high remuneration in terms of labor earnings.

AI employment effects

The first robust empirical evidence about the employment effects of AI in the USA has been published by the Stanford Digital Economy Lab by Chen, Chandar and Brynjolfsson (2025). A previous paper by Wang and Wang (2025) highlighted the comparative advantage of persons who use AI in their work compared to others, and the authors coined the term “learning by using technology”. The prediction of their model was that there might be job losses of more than 20% in the long run, with half of this already in the first 5 years of the introduction of the technology. The Stanford economists have estimated these effects with real-world data from the USA and find, quite surprisingly, that the negative employment effects of AI have the strongest impact on young labor market entrants with few years of labor market experience. Middle-aged and more senior employees seem to benefit from “tacit knowledge” about the work, which is more difficult to replace with AI, at least for the time being in these early days of AI. This evidence is based on recent payroll data from the largest payroll-processing firm in the USA, “ADP”, in which firms from the manufacturing and services industries are overrepresented, as reported in another paper (firm size may be another source of bias). However, the finding that youth 22-25 years of age suffered the most calls into question the common belief that older workers are more likely to suffer the consequences, as during the rise of the digital economy around the year 2000. (AI image created with Canva)

Scienceploitation

Science can be exploited to make unjustified profits by referring to it incorrectly. Social sciences like economics may be used by banks to sell you products that invoke science only where the science-based inference fits their purpose. Scienceploitation is very common in the field of para-medicine and para-pharmaceutical products. Health promises sell. By the time an ineffective treatment reveals its promise to be unachievable, considerable profits have accumulated on the side of the selling company. Science has a hard time countering the perils of scienceploitation. Advanced knowledge can be used and abused like any other method of convincing people to buy or subscribe to a product. The responsibility of the scientific community also consists in finding ever new ways to counter scienceploitation. AI will pose additional challenges as well as opportunities.

Bench the benchmarks

In the social sciences as well as in engineering it is common practice to use benchmarks as indicators of performance. Thereby, several countries or regions within a country are compared with respect to a quantitative indicator. Let’s take employment ratios. A higher employment ratio that includes many persons working few hours in part-time jobs means something quite different from a slightly lower employment ratio with hardly any part-time employees.
The same rationale holds true for benchmarks of AI systems or the newer versions of agentic AI under construction in many fields. The paper by Yuxuan Zhu et al. (2025) proposes the ABC (agentic benchmark checklist) for agentic AI developers. The reporting of benchmarks by such models should include (1) transparency and validity, (2) mitigation of limitations, and (3) result interpretation using statistical significance measures and interpretation guidelines.
The aim of this research is to establish good practice for benchmarks in the field of agentic AI. The set of criteria to test for is large; how an agentic system treats, for example, statistical outliers far above or below the average (i.e. more than 2 standard deviations from the mean, assuming a normal distribution) is just one case of application.
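As a hedged illustration of the outlier case mentioned above, the 2-standard-deviation rule can be sketched in a few lines of Python; the benchmark scores below are hypothetical, not taken from the paper:

```python
import statistics

def flag_outliers(values, z_threshold=2.0):
    """Return the values more than z_threshold standard deviations from
    the mean, assuming the data is roughly normally distributed."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > z_threshold * stdev]

# Hypothetical benchmark scores from repeated runs: one run is far off.
scores = [0.71, 0.69, 0.73, 0.70, 0.72, 0.68, 0.70, 0.71, 0.69, 0.20]
print(flag_outliers(scores))  # flags the 0.20 run
```

How a benchmark (or an agentic system) handles such flagged runs, by reporting, excluding, or investigating them, is exactly the kind of decision the checklist asks developers to make transparent.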
We welcome the efforts to bench the benchmarks in the field of AI as is common practice in other sciences as well.

Learning by using

Is learning by using different from learning by doing? In an economic model testing the employment/unemployment impact of AI in the USA, Wang & Wong (2025) suggest an important effect on employees’ productivity due to learning by using AI. In the traditional language of economics, the employees who use AI in their work have a comparative advantage over those who don’t.
In a model of job search in the economy there is the additional possibility, as with robots previously, that certain tasks may be influenced by the more or less plausible threat of an employer to replace the employee by training an AI system to perform the tasks. The credibility and acceptability of such threats are likely to impact wage claims and unemployment risks. All these effects do not happen instantaneously but evolve over time at varying speed. Hence, calculations of effects have high error margins. The resulting model yields oscillations of “labor productivity, wages and unemployment with multiple steady states in the long run”.
Learning by using seems to be a good description of what occurs at the micro level (the employee) and at the macro level of an economic sector or the economy as a whole. Society may guide the use cases of AI just as much as the business case for using AI, for example in the creative industries, where infringements of copyright may occur on a massive scale. However, learning by using is not free of risks to society at large. Just as allowing people to use automotive vehicles has led and still leads to thousands of deaths annually, learning by using produces external costs. Overall, this is another case for a benefit/cost analysis for businesses, the economy and society.

AI 2nd round effects

The most popular topic currently is AI.
Most writers, assisted by some form of AI, deal with the 1st round effects of AI. These consist in the immediate consequences of the use of AI in office work, medical and military applications, music and all producing or creative industries. As an economist you take, for example, the input-output matrix of the economy (OECD countries) and add AI as an additional dimension of this I/O matrix. The result is an AI-augmented model of the economy. This 3-dimensional, cubic view of the economy invites reflection on the potential short-term and medium-term impact of AI.
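The idea of augmenting an input-output matrix can be made concrete with a toy Leontief model; this is a minimal sketch with two sectors plus a hypothetical AI sector, and all coefficients are invented for illustration, not OECD data:

```python
import numpy as np

# Toy Leontief input-output model. A[i, j] = input from sector i needed
# per unit of output of sector j. All coefficients are hypothetical.
A = np.array([
    [0.2, 0.3],   # manufacturing inputs
    [0.1, 0.2],   # services inputs
])
demand = np.array([100.0, 80.0])  # final demand per sector

# Total output x solves x = A @ x + demand, i.e. (I - A) @ x = demand.
x = np.linalg.solve(np.eye(2) - A, demand)

# "AI-augmented" economy: AI added as a third sector that both supplies
# inputs to the others and consumes some of theirs.
A_ai = np.array([
    [0.2, 0.3, 0.1],
    [0.1, 0.2, 0.2],
    [0.05, 0.1, 0.0],  # AI services used per unit of each sector's output
])
demand_ai = np.array([100.0, 80.0, 10.0])
x_ai = np.linalg.solve(np.eye(3) - A_ai, demand_ai)

print("baseline output:", x)
print("AI-augmented output:", x_ai)
```

Comparing `x` with the first two entries of `x_ai` shows how the extra AI dimension shifts total output requirements across the existing sectors, which is the kind of short- and medium-term reflection the augmented model invites.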
Let’s take the example of translation and editing services. In the short term (the 1st round effects), AI will make it easier to offer mechanical translations with fast turnaround. Most likely, this will lead to fewer translators being needed for routine translations of longer texts, which would otherwise be a very costly endeavour. The 2nd round effects, however, will make the expert knowledge of translators more necessary for texts where every word counts, in order to provide the best version of a translation targeted at specific audiences.
In the legal domain, for example, the precision of words is paramount and errors can be very costly. Hence, the 2nd round effects of AI in this field will increase the demand for high-quality translation services beyond pre-AI levels. The important shift consists in these 2nd round effects of AI, which give a push to multilingual societies as just one medium-term outcome.
Please use AI to read (listen) to this paragraph in your native language or even dialect using your favourite AI-tool.

Hallucinations serious

There are serious hallucinations by AI and there are funny hallucinations by AI. Do we want our various AI models, from time to time, to crack a serious or funny joke? Well, that’s a bit of the spice of life. However, not knowing when the machine is joking and when it is serious is likely to seriously disturb most of us. This reminds us of our school days, where teachers were not amused by pupils who did not take their efforts to transmit information seriously. Now we know that a good atmosphere is conducive to better learning progress. AI as a teaching and learning assistant could well work best in a “fearless“ classroom. Repeating a lesson several times and at your own learning rhythm helps, independent of the seriousness of your teacher. Self-directed learning with a little help from AI might do the trick for many to advance how and when they feel ready for it. Hallucination rates are a standard test for AI models. They range from 1% to 25% of queries. This is not in itself a problem. With the 1%-2% models, however, it has become tough to spot the errors, because you no longer expect them to give wrong information. These are the 1-2 out of a hundred cases in which we are confronted with serious hallucinations, seriously.
(Image: extract from „cum Polaroids“ from Eva & Adele, Hamburger Bahnhof, Berlin 2024-5-22)
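To see why even the low end of those hallucination rates matters, the per-query rate can be compounded over a session of many queries. A minimal sketch, where the 100-query session length is an assumption for illustration:

```python
def p_at_least_one_error(rate, n_queries):
    """Probability of at least one hallucination in n_queries independent
    queries, each hallucinating with the given per-query rate."""
    return 1 - (1 - rate) ** n_queries

# Per-query hallucination rates from the 1%-25% range mentioned above.
for rate in (0.01, 0.02, 0.25):
    p = p_at_least_one_error(rate, 100)
    print(f"{rate:.0%} per query -> {p:.1%} chance of at least "
          f"one hallucination over 100 queries")
```

Even at a 1% rate, an intensive user is more likely than not to meet at least one serious hallucination in a session of 100 queries, which is precisely why the rare errors of the best models are so hard to catch.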

Home extension

Most people think of home extension as some sort of extension of the roof, an additional room or the transformation of a garage into an additional room. However, the digital home requires a home extension of a different kind. In order for all rooms to be included in the digital home, a range extender for your wireless network might be necessary. Yes, this even includes the bathrooms, because otherwise you can no longer sing along to your favourite tune in the shower if you are used to streaming the musical or orchestral accompaniment. Additionally, the immediate surroundings of a home, with or without a garden, might need coverage for your robot to mow properly or for your digital letter box to forward the long-awaited love letters while you are out of home.
Being out of range in your home is almost equal to not being home at all. Of course, you don’t have to automatically send an out-of-home message to all your contacts when you are too far away from your digital home for your digital device, but the comfort of a range extender may avoid a new “digital inequality” between adolescents in your home. Room choices are made according to wireless access points and signal strength rather than the best view. Lots of new issues arise that we did not even think we would have 10 years ago. Of course, we follow the suggestions of an AI chatbot that recommends the best location for us after we have entered images and descriptions of the consistency of each wall into the system. One piece of practical advice: install extenders out of reach of any toddler, because a sudden interruption of the connection will create very unpleasant surprises.

Testing testing

Before the installation of new AI chatbots or other agentic AI, they need profound testing. Wise statisticians are quoted with the conviction: it is all about testing, testing, testing. Any system that builds on statistical reasoning (LLMs or machine learning) will behave erratically in areas more strongly affected by, for example, statistical outliers. At both ends of the “normal distribution” of events or reasoning, the statistical models and algorithms used in AI will produce “spurious” errors or have larger error margins on topics a bit outside the 95% of usual cases.
This means testing, testing and testing again for the programmers of such AI systems before releasing them to the public or as enterprise-specific solutions. The tendency to keep the costs of testing phases low compared to development costs bears obvious risks under the “precautionary principle” applied in the European Union. Testing is also most important to check for the WEIRD bias (Western, Educated, Industrialized, Rich, Democratic) of the most basic AI systems. In this sense AI development has become a sociological exercise, as developers have to deal with “selection bias” of many kinds that could have very expensive legal consequences.
(Image: Extract from Bassano, Jacopo: Abduction of Europa by Zeus, Odessa Museum treasures at exhibition in Berlin Gemäldegalerie 2025-5).