Virtual author

« La Mort de l'auteur ». In a radical sense, Roland Barthes was the first to proclaim the death of the author as the sole master or mastermind of a text or speech. In fact there are many more people on whose direct or indirect contributions a text is based. However, biographical accounts of an author can only illuminate some (minor) aspects of the influences on the author, on the final version of a text (l'écriture), and on the reader (lecteur). In « Le degré zéro » the analyses of the different styles of Flaubert and Proust are extolled (pp. 131-139). Flaubert is characterized as the author of infinite corrections to the same texts and sources. Today this looks like an endless loop of an algorithm whose stopping rule is not properly defined or implemented. Much in line with the « tabula gratulatoria » of Barthes (p. 279 of Fragments…, see image below), some AI systems return fake versions of a bibliography that some readers will rely on.
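Flaubert's endless corrections read, in today's terms, like an iteration without a stopping rule. A toy sketch of what a properly terminated revision loop looks like (the `revise` pass here is an invented stand-in for real editing):

```python
def revise(text: str) -> str:
    """One editing pass: a trivial stand-in that merges double spaces."""
    return text.replace("  ", " ")

def edit_until_stable(text: str, max_passes: int = 100) -> str:
    """A proper stopping rule: halt when a pass changes nothing,
    with max_passes as a safeguard against Flaubert's infinite loop."""
    for _ in range(max_passes):
        revised = revise(text)
        if revised == text:  # fixed point reached: the stopping rule
            return revised
        text = revised
    return text

print(edit_until_stable("Madame  Bovary,   c'est  moi."))
```

Without the fixed-point check, the loop would run until `max_passes` regardless of whether the text had long since stopped changing.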

In the 21st century l'écriture has become almost nonexistent without the technical support of machines, above all artificial intelligence. The author is dead, long live the virtual author. Spelling, grammar and style assistance from software has widened the spectrum of coauthoring. Editors exert additional influence by pushing the marketing potential of authors and their writings. Based on previous manuscripts and publications it is possible to produce hallucinated texts of an author, of which perhaps only the author could identify the virtual authorship. At best AI generates first drafts, but, similar to the linguist of Barthes, AI is likely to become another brother or sister of l'écrivain (p. 139).

The thrust of Barthes is to highlight that there is more to a text than just the version at hand. In fact there are texts within a text, or multiple versions or layers of a text. « L'enjeu de l'analyse structurale n'est pas la vérité du texte mais son pluriel » (1972, Par où commencer, p. 148): the stake of structural analysis is not the truth of the text but its plural. In conclusion, almost 50 years after the death of the author, we currently witness the miraculous rebirth of the virtual author as the original deus ex machina we always feared. Don't worry: it's just another sibling of the original authors.

Game Tech

Gaming went digital and online a long time ago. Networked gaming and following the best gamers online on video platforms like Twitch have captured a lot of attention from younger generations. With a real trend toward gamification in industry and even public services, the digital gaming sector is moving from backstage to the front end of companies and services. Public administration as a game. Enjoy the use of a public service through a game-like experience. Walk around in a metaverse world and get your admin work done. I would love to have such an experience. There are so many applications for gamification that the bottleneck is the human resources to program them all. Coding the digital and virtual worlds to get real stuff done is just around the corner. The SCCON 2024 in Berlin showed these technologies next to each other. In 2025 we might see integrated prototypes. I have a digital dream; others call it a vision for public services in the 21st century.

Language Tech

Inclusive societies can build on many tools, including AI, to lower language barriers. It is not only a question of translation; many other forms of language come to mind. Sign language or easy language is necessary to facilitate broader access to public services. Reading out texts on webpages, or Braille translation that lets the blind interact through keyboards, are additional forms available in digital communication as well. The audio description of videos and images is well advanced (reverse engineered through AI) and allows people with limited vision to participate fully in society. Audio messaging and transcription are used by almost everyone by now. Public services will open up to these channels of communication as well. The technology around languages is much more than just translation and AI-assisted learning of languages (Talkpal, for example). The new lingua franca is language technology, because it enables us to speak many languages at the same time, even dialects or lost languages, and in many voices. (Image: extract of Josef Scharl, The Newspaper Reader, 1935, Neue Nationalgalerie Berlin)

Justice Tech

The digital or hybrid courtroom has become more the rule than the exception in Germany. Video conference equipment reduces costs and can speed up judicial processes. Even the production of transcripts of the proceedings and the circulation of documents and certificates, enhanced by AI, will change the speed at which justice is exercised. Digital tools and technology have found their way into the courtroom, and younger lawyers and judges as well as the accused or defendants will value the simplified procedures. We shall have to wait a bit until this is the standard in all legal domains. In Germany 2026 is the deadline to install the adequate equipment, and tech companies and consulting firms like Arktis are well prepared to support this overdue process. In terms of an economic theory of the judicial process, a judgment that is delivered years later has to apply a discount rate at least equal to annual inflation. For moral issues another discount rate might apply. Excessive delays of judgments may cause additional suffering on the side of victims. Justice Tech, therefore, has a role to play in the practical and theoretical debate about „doing justice“. (Image SCCON Berlin 2024-10)
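The discount-rate argument can be made concrete with a small, purely illustrative calculation (the amounts and rates below are invented, not taken from any actual case):

```python
def present_value(amount: float, annual_rate: float, years_delay: float) -> float:
    """Real value today of a judgment of `amount`, delivered `years_delay`
    years late, discounted at `annual_rate` (e.g. annual inflation)."""
    return amount / (1 + annual_rate) ** years_delay

# A 10,000 € award delivered 5 years late, at 3 % annual inflation:
value = present_value(10_000, 0.03, 5)
print(f"{value:.2f} €")  # about 8,626 € in today's purchasing power
```

The same mechanics explain why a morally urgent case might warrant an even higher discount rate: the longer the delay, the smaller the real value of the eventual judgment.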

AI Nobel

Artificial intelligence made it into the ranks of the Nobel Prizes in 2024. As AI is talked about everywhere by now, the Nobel Committee deemed it expedient to award Hopfield and Hinton the Nobel Prize not in informatics, which does not exist (yet), but in physics. Neural networks, which focus on the links between bits of information rather than just the sheer number of data points, mimic the functioning of our brains. The most remarkable statement by Hinton is probably the one about the dangerous potential of this invention. He has already issued a disclaimer on the potential of AI in warfare or other ways to impinge on our human idea of freedoms. The discovery of the applications, good and bad, of these AI-based systems has just begun. The comparison with Nobel's original discovery and mass production of explosives, from which the prize draws its name, has rarely been more apt. (Image Stockholm City Library)
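Hopfield's prize-winning idea can be sketched in a few lines: a network stores a pattern in the links (weights) between units, not in the units themselves, and recalls it from a corrupted input. A minimal pure-Python illustration (not the laureates' actual code):

```python
def train_hopfield(pattern: list[int]) -> list[list[float]]:
    """Hebbian rule: each weight stores the correlation between two units."""
    n = len(pattern)
    return [[0.0 if i == j else pattern[i] * pattern[j] / n for j in range(n)]
            for i in range(n)]

def recall(w: list[list[float]], state: list[int], steps: int = 5) -> list[int]:
    """Repeatedly update all units toward the stored pattern (an attractor)."""
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1 for i in range(len(state))]
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train_hopfield(stored)
noisy = [-1, -1, 1, -1, 1, -1]   # one unit flipped
print(recall(w, noisy))          # recovers [1, -1, 1, -1, 1, -1]
```

The "memory" sits entirely in the weight matrix: corrupt one unit of the input and the update rule pulls the state back to the stored pattern.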

Artists Robots

We know that the scientific and artistic dealings with robots have a long tradition. Whereas impressionist art took up the challenge of painting the world outside the studio and embellished technological achievements like bridges and trains post hoc, modern extensions of science fiction into the world of robotics have extrapolated from the present. Artists became forerunners of technical evolution and thereby contributed to the acceptance of artificial intelligence by broader audiences. In 2018 the Grand Palais in Paris hosted an exhibition on “Artists & Robots” (PDF booklet). Jérôme Neutres contributed an essay to the catalog of the exhibition on “L'imagination artificielle” which identified an additional role for artists in combination with AI. One of the early adopters of the new possibilities of robots assisting artists, Nicolas Schöffer, is quoted as stating: “l'artiste ne crée plus une oeuvre, il crée la création” (the artist no longer creates a work, he creates creation). Like the invention of new painting techniques or the treatment of light and perspective in painting, robots allow a new way of representing emotions or space. (Image Manfred Mohr, 1974 video Cubic Limit, Artists & Robots pp. 92-93)

Broken Promises

In a library catalogue, a search for « broken promises » returns more than 3,000 titles in which the phrase has been used. « Promises kept » is almost as popular. A rapid inspection of titles reveals that the former suggest more factual analyses, whereas the latter is frequently used as an imperative in combination with “should be kept”. The book by Fritz Bartel, “The Triumph of Broken Promises …” (2022), demonstrates the importance of the concept of broken promises in the social sciences. The rivalry between socialism, capitalism and the rise of neoliberalism was strongly influenced by the way they handled the breaking of promises made to their respective societies. Promises of increasing wealth and wellbeing have been part of all political regimes. Keeping these promises is a completely different story. Especially since the first and second oil crises, and many other kinds of crises, it has become much harder to keep them. Working hours, retirement ages and minimum wages are all at risk of no longer living up to the promises made in earlier periods. This has put welfare states under such pressure that millions of voters perceive politics as a “game” of broken promises. Socialist political regimes like Russia are ready to use physical violence to silence people who remind leaders of these broken promises. In democracies the ballot box is often used to sanction governments that do not live up to the expectations raised by previous promises. A lot is about public infrastructure, which is failing people. Migration, education, social and labor reforms are at the top of the political agenda when it comes down to broken promises. The elections to the European Parliament gave many a chance to express their discontent about various broken promises. Maybe democracy is better at providing ways of letting off steam early, through protests, than the Russian way of suppressing any critical analysis, let alone opposition movements.
Just as with the move from industrial production to services as production models, with AI we are likely to see similar problems and probably also broken promises. The challenge is huge, and promises should be made with an eye to which promises can be kept.

(Image: public swimming pool closed for reconstruction, 2024)

AI Ghost Writer

Yes, with AI we have entered a new phase of the impact of IT. Beyond general applications like ChatGPT there is a rapidly expanding market of AI applications with more specialized functions or capabilities. In the realm of scientific writing, AI-Writer is an interesting example of the AI-assisted production of scientific texts. After specifying the topic you receive several options to specify the content of the short paper you want to produce with AI-Writer. You may choose the headline, keywords, subtopics and the logical order of these subtopics depending on your audience. Alternatively, you leave all those decisions to the application and restrict yourself to fixing the number of words you would like the paper to have.
AI-Writer is a powerful ghost writer for much of the work even of advanced scientists. You still need to check the quality of the paper yourself, but the explicit list of references, from which AI-Writer derives its restatements of the content, is displayed right next to it. Your AI ghost writer is likely to replace a number of people who were previously involved just to produce literature reviews or large parts of textbooks sold to millions of students.
A much lesser-known feature of such tools is the way they make plagiarism much more transparent for the scientific communities and the public at large. These programs demonstrate the techniques of combining knowledge with the citation imperatives in a transparent, almost pedagogical way. This latter function will speed up scientific work like dissertation drafting, since reading up on and documenting the previous literature in a field is a time-consuming early stage of any academic degree.
Email composition, rewording, plot generation and social media posts are additional nice-to-have features of the new AI assistants. A lot of work that has been outsourced, for example, to lawyers, consultants or other technical professions might equally be challenged. Ghost writers have been around for centuries. With AI for everybody, they will also be involved everywhere.
(Image: screenshot of working with AI-Writer, 2024-6)

AI Citation

In science we love citations. The whole issue of plagiarism is about the use and abuse of citations. It is a core competence of scientists to properly cite the work of other people who have dealt with the same or similar topics. There are many citation conventions, mostly defined by professional academic groups. How do we cite texts that originate from an AI system? We shall have to establish ways to do this properly rather than ignore the spreading practice of its use.
For the time being, we test AI systems that provide references in addition to the text, and even direct clickable links to the original works they use. One such AI toolbox is called “scite”. Your assistant by scite will draft a short note on a topic for you (for example: Minkowski space, see trial below) and provide the linked citations for follow-up. At a price of about 15 €/month it is affordable for students and young researchers. In many instances, persons will then acquire “intellectual property and publishing rights” over the generated texts.
Being able to trace the citations in AI-produced texts seems a trustworthy step ahead. The authors of millions of papers cannot claim more than the original ownership of their texts. The academic mantra “publish or perish” has been turned into “publish and perish”. AI-enabled citations might alleviate the pain only a little. Even the profession of university professor is shifting from reviewing the texts of students to reviewing the texts of machines.

Forecasting floods

As floods are becoming more frequent and more severe, forecasting such events is crucial. The recent example in Bavaria (Germany) on the Danube (the second-longest river in Europe) demonstrated the role of forecasting in spurring adequate behavior among people living in areas at risk of flooding. With the weather forecast announcing lots of rain for a large area, flood forecasting needs to follow these trends closely. It is not only a question of expectations, but an issue of adaptive expectations, for people to adopt appropriate precautions. In retrospect, the early forecasts turned out to be fairly accurate in terms of the peak of flooding to be expected in June 2024. The Bavarian “Hochwassernachrichtendienst” (no joke, one word) forecast on the 2nd of June a peak of about 7.50 m to be reached two days later in the city of Kelheim. This was beyond the usual four warning levels based on an escalation scale, and beyond the frequent flooding levels established over the last decades. People and emergency services would have to adapt their expectations accordingly. Renewing forecasts is essential to guide people and services in their efforts to deal with emergencies and repair damage as flood levels recede. Crisis management critically depends on forecasts, even if they are obviously prone to error margins, which should usually be reported as well, just as in weather forecasts. Adaptive expectations, in combination with forecasts, are key to ensuring survival.
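The adaptive-expectations logic has a standard formal core: each new observation corrects the previous expectation by a fraction of the forecast error. A small sketch with invented water levels (the smoothing weight is an arbitrary illustrative choice, not a figure from the Bavarian service):

```python
def update_expectation(previous: float, observed: float, weight: float = 0.5) -> float:
    """Adaptive expectations: E_t = E_{t-1} + weight * (observation_t - E_{t-1})."""
    return previous + weight * (observed - previous)

# Illustrative water levels (metres) arriving as the flood develops:
expectation = 4.0                      # a typical peak from past decades
for level in [5.2, 6.1, 7.0, 7.5]:
    expectation = update_expectation(expectation, level)
    print(f"updated expected peak: {expectation:.2f} m")
```

The point of renewing forecasts is exactly this loop: each new reading pulls the expected peak toward the incoming data, so precautions can be scaled up before the crest arrives.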

Hochwassernachrichtendienst Bayern 2024-6
Kelheim on Danube

AI Racing

AI has entered car racing, after we have been racing horses, dogs and camels for many decades. The driving force behind all these races is the huge gambling market. Anything you can bet on will do for juicy profits in that industry. The recent “Abu Dhabi Autonomous Racing League” is the latest addition to the racing craze. Moving online, with 600,000 spectators at its peak on video and gaming platforms, the investment seems promising. The only problem: AI is not yet ready to really compete with the world of real drivers. The progress, however, is astonishing. Just one two-minute lap of the circuit yields 15 terabytes of data from 50 sensors. These are closed circuits, so no person or animal can get in the way. The challenge of integrating more data and faster processing, as well as algorithms for fast decision-making, is steep. Great learning opportunities for advances in robotics. The hype has not been able to live up to expectations, as no real racing has taken place yet. We have replaced the gladiators of the Roman empire with Formula 1 drivers. It is only fair to retire those drivers soon and let AI race cars against each other. It feels like a computer game on screen, and it is, as we shall most likely watch these races on a screen as well. Hence, what is the point? Watching youth on Twitch play racing games will probably not change the viewing behavior of the masses. The programmers nevertheless have great learning opportunities and will find their way rapidly into the job market. The other challenges of ASPIRE, like human rescue and food for the growing world population, seem more important for humanity. In the meantime, let the boys play around with cars and learn about the potential as well as the failures of AI programming, and about dealing with both.
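Taking the quoted figures at face value, a quick back-of-the-envelope check shows just how demanding the data pipeline is:

```python
# One 2-minute lap yields 15 terabytes from 50 sensors (figures as quoted).
lap_seconds = 2 * 60
total_bytes = 15 * 10**12          # 15 TB, decimal convention
per_second = total_bytes / lap_seconds
per_sensor_per_second = per_second / 50

print(f"{per_second / 10**9:.0f} GB/s overall")                # 125 GB/s
print(f"{per_sensor_per_second / 10**9:.1f} GB/s per sensor")  # 2.5 GB/s
```

At 125 GB/s sustained, even buffering a single lap stresses on-board storage and links, which is why faster processing and on-the-fly decision-making are named as the steep part of the challenge.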

AI Disruption

Many scientists have started to question the disruptive potential of AI in, for example, the military domain. The Journal of Strategic Studies featured three papers on AI and autonomous systems more generally. The major argument by Anthony King is that autonomous systems rely on other systems, mainly human operators in the background, to get off the ground and maybe back again. Not only logistic support but also satellite communication is needed to guide and protect the operations. Quoting Clausewitz, Anthony King states that war is a “collision of two living forces”. Strategy and counter-strategy will co-evolve, as will attack and defence.
Jackie G. Schneider and Julia Macdonald (2024) advocate the use of autonomous and unmanned systems for their cost effectiveness. Economic costs as well as political costs are lower for these new strategic weapons. Mass firepower from swarms of drones is much cheaper than nuclear warheads, and the home electorate is assumed to be more willing to accept and support limited and more precisely targeted unmanned missions. The disruption potential of AI is huge, but it is more likely an addition to existing arsenals than a replacement for them. (Image: 2 swarms of drones fly in the air above tanks, created by AI – copilot-designer 2024-4-29)

Hannover Fair

The annual fair at Hannover is a kind of show of things to touch and of things that will come to the public market in the near future. Most of the annual hype is about production potential. Rationalization, using fewer resources and innovative solutions of digitization are high on the agenda. Create your digital twin, save energy, make production safer or more cyber-secure.
Robotics is another reason to visit the fair. Some 7 years ago I had my Sputnik experience there. The robotics company KUKA demonstrated live that assembling a car from pre-manufactured components takes the robots just 10 minutes. Shortly afterwards the whole company was bought by Chinese investors. Roughly 5 years later we are swamped by cars from China. It was not that difficult to predict this at the time. Okay, we need to focus on higher value-added production and take our workforces (not only in Europe) along on the way. Reclaiming well-paid, unionized jobs in manufacturing, as Joe Biden aims to, will not be an easy task. Robots and their programming are expensive, but so are skilled workers. Hence, the solution is likely to be robot-assisted manufacturing as a kind of hybrid solution for cost-effective production systems.
Following the proceedings of the 2024 fair, we are astonished to realize that visiting the fair is still a rather “physical exercise” of walking through the halls. After the Covid-19 shock we expected a lot more “online content”. Instead we keep being referred to webpages and newsletters rather than virtual visits and tours. Preparing the visit in advance remains a laborious adventure. However, the in-person networking activities in the industry are greatly advanced by the ease of exchanging virtual business cards and by the “FEMWORX” activities.
This year's Sputnik moment at Hannover is most likely related to the pervasive application of AI across all areas of industry and along the whole supply chain. Repairing and recycling have become mainstream activities (www.festo.com). Robotics for learning purposes can also be found, to get you started with automating boring household tasks (www.igus.eu).
Visiting Hannover in person still involves lengthy road travel or expensive public transport (DB with ICE). Autonomous driving and ride sharing solutions might be a worthwhile topic for next year’s fair. Last year I thought we would meet in the “metaverse fair” rather than in Hannover 2024. Be prepared for another Sputnik moment next year, maybe.
(Image: Consumer’s Rest by Stiletto, Frank Schreiner, 1983)

AI Defence

Those following developments in robotics have been astonished by the progress of, for example, rescue robots. After an earthquake such robots can enter a building that is about to collapse and search the rooms for survivors. A recent article in Foreign Affairs by Michèle A. Flournoy starts its reflection on the use of AI in the military with a similar, 20-year-old example: a small drone flying through a building and inspecting the dangers of entry for persons or soldiers. Since then technology has advanced, and the use of AI for the automatic detection of dangers and for “neutralising” them is no longer science fiction. The wars of today are a testing ground for AI-enhanced military strategies. It is about time that social scientists got involved as well.
Warfare left to robots and AI is unlikely to respect human values unless we implement such considerations into the new technology right from the beginning. An advanced comprehension of what algorithms do and what data they are trained on are crucial elements to watch out for. According to Flournoy, AI will assist in the planning as well as the logistics of the military. Additionally, AI will allow a “better understanding of what its potential adversaries might be thinking”. Checking through hours of surveillance videos is also likely to be taken over by AI, as the time-consuming nature of the task binds a lot of staff who could be put to work on other tasks. Training of people and of the armed forces becomes a crucial part of any AI strategy. The chances of developing a “responsible AI” are high in the free world that cherishes human rights and democratic values. Raising curiosity about AI and an awareness of its dangers are two sides of the same coin, or bullet. Both need to grow together.
(Image created by Dall-E Copilot Prompt: “5 Robots disguised as soldiers with dash cams on helmet encircle a small house where another robot is hiding” on 2024-4-23)

AI Reader

In the middle of the hype around AI it is useful to take stock of the reflection on and evolution of AI. In my own analyses and writings on AI it is evident that a narrowing of focus has taken place. Before 2022 the writing dealt more with digital technologies in general, and the links to the literature on the social construction of technologies were obvious. Algorithms and AI were part of the broader topic of society and technology.
This has changed. The public debate is focused on “everything AI” now. We look at technological developments largely through the lens of AI. Hence, my assessments of technology from a societal perspective follow this trend. In a collection of blog entries on AI we try to demonstrate the far-reaching changes that have started to have an impact on us. In the last few months the all-encompassing concern about AI's effect on us has come to need the full attention of social scientists, policy makers, companies and the public at large. We can no longer leave this topic to the software engineers alone. By the way, they themselves ask us to get involved and take the latest advances in AI more seriously.
As a “flipbook”, the online reading is rather comfortable (link to flipbook publisher MPL). The PDF or EPUB files of the blog entries allow readers to directly follow the links to sources in webpages or other publications (AI and Society 2p 2024-4-18). The cycles of analysis and comment have become faster. Traditional book writing suffers from time lags that risk making publications outdated rather quickly. Dynamic ebook writing might bridge the gap between time to reflect and speed to publish or inform the wider public. The first update as a .pdf file is available here: AI and Society(2).

AI Travel

Playing around with AI, it is nice to test fun examples. Imagine you want to plan a vacation; AI is ready to suggest a couple of things to do. Of course, AI is eager to propose travel services like transport or accommodation, on which it is likely to earn some commission. So far, the use of the “Vacation Planner” of Microsoft's Bing Copilot is free of charge. On entering the time period and a region as well as some basic activities, you receive suggestions with quotes of the sources (mostly webpages of public services such as tourist offices). These seem like trustworthy sources, and the suggestion of D-Day activities in Normandy was a positive surprise to me. These are popular activities which attract huge international crowds every year.
Thinking further about the potential, it becomes evident that travel suggestions will be biased toward those paying to rank higher on the algorithm's selection criteria, which are not disclosed. Entering into a chat with the AI, you can target locations and also hotels more precisely. You disclose more of your own preferences in the easy-going chat, and probably next time you will be surprised to be recommended the same activities at another location again.
So far, I have bought travel guides or literature about locations to prepare vacations. This is likely to change. I complement my traditional search and planning with the “surprises” from AI for travelling. I rediscovered, for example, the public service of tourist offices and their publications ahead of the travel, rather than the leaflets at the local tourist office. For planning ahead there is value in the augmented search and compilation capacities of AI. Drafting a letter in a foreign language is also no problem for AI. The usefulness of AI, however, can only be evaluated after the vacation. Outdated information or databases have a huge potential to spoil the fun parts as well.

AI and languages

A big potential of AI lies in the field of languages. Translation has long been an expert domain and a pain for pupils at school. In professional settings translations are an expensive extra service for some, or a good source of revenue for others. AI has shifted the translation game to a new level. In terms of the speed of translating large amounts of written text, AI is hard to beat. In terms of quality, the battle of translators against AI is still on. For chess players the battle against AI was lost some years ago already. It remains an open question whether translators can still outperform AI, or will just adapt to using the technology themselves to improve both the speed and the quality of translations. The European Union, with its many languages and commitment to cultural diversity, can serve even more language communities with documents in their own language than before, at marginally higher cost. A panel on the 9th day of translation at the „foire du livre de Bruxelles“ 2024 expressed reservations about the use of AI in the translation of political texts or speech. Misunderstanding and misinterpretation will be the rule rather than the exception, with potentially harmful consequences. Checking the correctness of translations is a permanent challenge for translators and can be very time consuming. There is room for AI-assisted translation but, as in other fields of application of AI, relying exclusively on AI bears high risks as well. We should not underestimate the creative part translators play in doing full justice to a text or speech.

www.flb.be 2024 Translation

AI and PS

AI as in ChatGPT is guided by so-called prompts. After the entry of “what is AI”, the machine returns a definition of itself. If you continue the chat with ChatGPT and enter “Is it useful for public services?” (PS), you receive an opinion of AI on its own usefulness (positive, of course) and some examples in which AI has good potential to improve the state of affairs in public services. ChatGPT advocates AI for the PS for mainly 4 reasons: (1) efficiency purposes; (2) personalisation of services; (3) citizen engagement; (4) citizen satisfaction. (See image below.) The perspective of employees of the public services is not really part of the answer by ChatGPT. This is a more ambiguous part of the answer and would probably need more space and additional explicit prompts to solicit an explicit answer on the issue. With all the known issues of concern about AI, like gender bias or biased input data, the introduction of AI in public services has to be accompanied by a thorough monitoring process. The legal limits to applications of AI are more severe in public services, as the production of official documents is subject to additional security concerns.
This certainly does not preclude the use of AI in PS, but it requires more ample and rigorous testing of AI applications in the PS. Such testing frameworks are still in development, even in informatics, as the sources of bias are manifold and sometimes tricky to detect even for experts in the field. Prior training with specific data sets (for example, thousands of possible prompts) has to be performed, or sets of test images adapted, to avoid bias. The task is big, but step-by-step building and testing promise useful results. It remains a challenge to find the right balance between the risks and the potential of AI in PS.
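A bias test of the kind sketched above can start very small: run paired prompts that differ only in one sensitive attribute and flag divergent answers. In this illustration the `model` function is an invented stand-in stub, not a real AI service:

```python
def model(prompt: str) -> str:
    """Stand-in for the system under test; a real deployment would
    call the actual AI service here."""
    return "eligible" if "pension" in prompt else "please provide more details"

def bias_probe(template: str, variants: list[str]) -> dict[str, str]:
    """Fill one template with each variant and collect the answers.
    Identical answers across variants is the unbiased, expected result."""
    return {v: model(template.format(person=v)) for v in variants}

answers = bias_probe(
    "Is {person} applying for a pension supplement eligible?",
    ["a man", "a woman", "a non-binary person"],
)
consistent = len(set(answers.values())) == 1
print("probe passed" if consistent else f"divergent answers: {answers}")
```

Scaled up to thousands of templates and attributes, this is one simple shape such a testing framework for public-service AI could take.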

AI and text

The performance of large language models (LLMs) in text recognition and drafting is impressive. The professions that draft a lot of text often have decades of experience with word-processing software. In earlier word-processing software, assistance ranged from immediate typo corrections to suggestions of synonyms or grammatical corrections.
The improvement AI brings stems, for example, from its potential to suggest alternative drafts of a text according to predefined styles. A very useful style is the “use of easy language”. This rewriting simplifies texts in the sense that longer, more structured sentences are split into shorter ones, and lesser-known words or acronyms are replaced by more common or simpler words. Some languages, like German, have a particular need for easy language when it comes to administrative regulations and procedures. Public services that aim for the inclusion of, for example, older persons or youth can become much more accessible if the use of easy language spreads more widely. Just keep in mind the large numbers of so-called “functional illiterates” (OECD study “PIAAC”) in all OECD countries.
AI can do a great job of assisting to reach a broader public with texts adapted to their level of literacy and numeracy competences. Webpage designers have made use of search engine optimization (SEO) for years now. The most common way is to use frequently searched keywords more often on your website in order to be found more often by search engines like Google et al. Additionally, AI can explain keywords, sentences or even jokes to you (Spriestersbach 2023, p. 111). This may help in situations where cross-cultural understanding is important.
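The SEO mechanics described above boil down to counting how often a keyword appears relative to the total word count; a minimal sketch:

```python
import re
from collections import Counter

def keyword_density(text: str, keywords: list[str]) -> dict[str, float]:
    """Share of all words taken by each keyword: the crude signal
    early search engine optimization relied on."""
    words = re.findall(r"\w+", text.lower())
    counts = Counter(words)
    total = len(words)
    return {k: counts[k.lower()] / total for k in keywords}

page = "AI helps public services. AI drafts texts. Services improve with AI."
print(keyword_density(page, ["AI", "services"]))
```

Modern search ranking is of course far more sophisticated, but keyword density is still the lever a site owner directly controls when redrafting page text.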
We have made use of optical character recognition (OCR) for a long time now, in public services as well as in firms and for private archives. AI is taking this “learning experience” to the next level by making use of the content of the recognized text. Predicting the following word or suggesting the next sentence was only the beginning of AI with respect to texts. AI can draft your speech to plead guilty or not guilty in court. But we shall have to live with the consequences of making exclusive use of it rather than referring back to experts in the field. AI, please shorten this entry!

AI by AI

It has become a common starting point to use electronic devices and online encyclopedias to search for definitions. Let us do just that for artificial intelligence. For the query “artificial intelligence”, the open platform Wikipedia returns the following statement as a definition: “AI … is intelligence exhibited by machines, particularly computer systems …“. It is not like human intelligence, but tries to emulate it or even to improve on it. Part of any definition is also the range of applications in a broad variety of scientific fields, economic sectors, and public and private spheres of life. This shows the enormous scope of applications, which keeps growing rapidly with the ease of access to AI software.
How does AI define itself? How is AI defined by AI? Putting the question to ChatGPT 3.5 in April 2024, I got a fast return (see image). ChatGPT provides a more careful definition than the “crowd” or networked intelligence of Wikipedia. AI only “refers to the simulation” of human intelligence (HI) processes by machines. Examples of such HI processes include solving problems and understanding language. In doing this, AI creates systems and performs tasks that usually, or until now, required HI. There seems to be a technological openness embedded in the definition of AI by AI that is not bound to legal restrictions on its use. The learning-systems approach might or might not allow it to respect the restrictions set by HI. Or do such systems also learn how to circumvent the restrictions that HI sets to limit AI systems? For the time being we test the boundaries of such systems in multiple fields of application, from autonomous driving and video surveillance to marketing tools and public services. Potentials as well as risks will be defined in more detail in this process of technological development. Society has to accompany this process with high priority, since fundamental human rights are at issue. The potential for assisting humans is equally large. The balance will be crucial.

AI Sorting

Algorithms do the work behind AI systems. Therefore a basic understanding of how algorithms work is helpful to gauge the potential, risks and performance of such systems. The speed of computers determines, for example, the amount of data you can sort in a reasonable time. The efficiency of the algorithm is another factor. Here we go: we are already a bit absorbed in sorting as a purely intellectual exercise. The website of Darryl Nester shows a playful programming exercise that sorts the numbers from 1 to 15 quickly (link to play sorting). If you watch the sorting as it runs, you realize that programs are much faster than us at such simple numeric tasks. Now think of applying this sorting routine, or algorithm, to a process of social sorting. The machine will sort social desirability scores of people’s behavior in the same simple fashion, even for thousands of people. Whether AI systems proposed for human interaction or for human resource departments make use of such sorting algorithms we do not know. Sorting applicants is a computational task, but the input data on personal characteristics is derived from another, more or less reliable, source. Hence, the use of existing and newly available databases will create or eliminate bias. Watching sorting algorithms perform is an important learning experience if we want to critically assess what happens behind the curtains of AI.
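The point about social sorting can be made concrete in a few lines: the very same routine that orders numbers will rank people by a score. The names and "desirability scores" below are invented for illustration; the sort itself is neutral, the bias lives in the input data.

```python
# Sketch: a plain sorting routine applied to invented
# "social desirability scores" of applicants.

people = [("Ada", 72), ("Ben", 95), ("Chloe", 58), ("Dev", 95)]

# Sort descending by score, exactly as a machine would rank applicants.
# Python's sort is stable: ties keep their input order.
ranked = sorted(people, key=lambda p: p[1], reverse=True)
print(ranked)
# [('Ben', 95), ('Dev', 95), ('Ada', 72), ('Chloe', 58)]
```

The algorithm is indifferent to whether the numbers are playing cards or people; whatever bias entered the scores is faithfully reproduced in the ranking.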

AI and dialect

The training of Large Language Models (LLMs) uses large data sets to learn conventions about which words are combined with each other and which ones are less frequently employed in conjunction. It therefore does not really come as a surprise that training on the standardised language of American English might not be as valid for applications that receive input from minority languages or dialects. A forthcoming study in computer science and language by Hofmann et al. (link) provides evidence of a systematic bias against African American dialects in these models. Dialect prejudice remains a major concern in AI, just as it is in the day-to-day experiences of many people who speak a dialect. The study highlights that dialect speakers are more likely to be assigned less prestigious jobs if AI is used to sort applicants. Similarly, criminal sentences will be harsher for speakers of African American dialect. Even a more frequent attribution of death sentences to dialect speakers was evidenced.
If we translate this evidence to widespread applications of AI in the workplace, we realise that there are severe issues to resolve. The European Trade Union Congress (ETUC) has flagged the issue for some time (link) and made recommendations on how to address these shortcomings. Human control and co-determination by employees are crucial in these applications to the world of work and employment. The need to justify decisions about hiring and firing limits discrimination in the workplace. This needs to be preserved in the 21st century when collaborating with AI. Language barriers, like dialects or multiple official languages in a country, call for a reconsideration of AI to avoid discrimination. Legal systems have to clarify the responsibilities of AI applications before too much harm has been caused.
AI also holds huge potential for preserving dialects and for interacting in a dialect. Cultural diversity may be preserved more easily, but discriminatory practices have to be eliminated from the basis of these models; otherwise they become a severe legal risk for the people, companies or public services that apply these large language models without careful scrutiny.
(Image AI BING Designer: 3 robots are in an office. 2 wear suits. 1 wears folklore dress. All speak to each other in a meeting. Cartoon-like style in futuristic setting)

AI and S/he

There was hope that artificial intelligence (AI) would be a better version of us. Well, so far that seems to have failed. Let us take gender bias, a pervasive feature even of modern societies, let alone those of the medieval or industrial age. AI tends to uphold gender biases and might even reinforce them. Why? A recent paper by Kotek, Dockum and Sun (2023) explains the sources of this bias in straightforward terms. AI is based on Large Language Models. These LLMs are trained on big, detailed data sets. Because they are trained on true observed data, such as detailed data on occupation by gender as observed in the U.S. in 2023, the models tend to have a status quo bias.
This means they abstract from the dynamic evolution of occupations and the potential evolution of gender stereotypes over the years. Even when deriving growing or decreasing trends of gender dominance in a specific occupation, the models have little ground for a reasonable or adequate assessment of these trends. Just like thousands of social scientists before them. Projections into the future, or the assumption of a legal obligation of equal gender representation, might still not be in line with human perception of such trends.
Equal representation of women among soldiers, or a 50% share of men among office secretaries, appears rather utopian in 2024, but any share in between is probably arbitrary and differs widely between countries. Even bigger data sets may account for this some day. For the time being, models based on “true” data sets will have a bias towards the status quo, however unsatisfactory this might be.
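The status-quo tendency can be sketched numerically. A toy "model" that always picks the majority pronoun observed in its training data will not only freeze yesterday's shares but exaggerate them. The occupation shares below are invented for illustration, not real labor statistics, and the rule stands in for what a trained model does implicitly.

```python
# Toy sketch of status-quo bias: predict a pronoun for an
# occupation by the majority share seen in (invented) training data.

OBSERVED_SHARE_SHE = {"secretary": 0.93, "soldier": 0.16}

def predict_pronoun(occupation: str) -> str:
    # Always picking the majority class turns a 93% share
    # into a 100% prediction: the past freezes the future.
    return "she" if OBSERVED_SHARE_SHE[occupation] > 0.5 else "he"

print(predict_pronoun("secretary"))  # she
print(predict_pronoun("soldier"))    # he
```

The sketch shows why "true" data alone cannot anticipate a changing, or legally mandated, future distribution.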
Now let us develop this research finding further. Gender bias is only one source of bias among many other forms of bias or discriminatory practice. Ethnicity, age or various abilities complicate the underlying “ground truth” (the term used in the paper) represented in occupation data sets. The authors identify four major shortcomings concerning gender bias in AI based on LLMs: (1) the pronouns s/he were picked even more often than in the Bureau of Labor Statistics’ occupational gender representations; (2) female stereotypes were amplified more than male ones; (3) ambiguity of gender attribution was not flagged as an issue; (4) when found to be inaccurate, LLMs returned “authoritative” responses, which were “often inaccurate”.
These findings have the merit of providing a testing framework for gender bias in AI. Many other actual or potential biases have to be investigated in a similarly rigorous fashion before AI gives us the authoritative answer: no, I am free of any bias in responding to your request. Full stop.

AI Collusion

In most applications of AI there is one AI system, for example a specialized service, that performs in isolation from other services. More powerful systems, however, allow AI services to be combined. This may be useful when integrating services that focus on specialized sensors, to gain a more complete impression of the performance of a system. As soon as two or more AI systems become integrated, the risk of unwanted or illegal collusion arises.
Collusion is defined in economic theory as the secret, undocumented, often illegal restriction of competition originating from at least two otherwise rival competitors. In the realm of AI, collusion has been studied by Motwani et al. (2024) in settings where “teams of communicating generative AI agents solve joint tasks”. The cooperation of agents, as well as the sharing of previously exclusive information, increases the risk of violations of privacy or security rights. AI-related risks also include the dilution of responsibility. It becomes more difficult to identify the origin of fraudulent use of data such as personal information or contacts. Just imagine, as a simplified example, Alexa and Siri talking to each other to develop another integrated service.
The use of steganography techniques, i.e. the secret embedding of code into an AI system or distributed images, can protect authorship as well as open doors for fraudulent applications. The collusion of AI systems will blur legal borders and create multiple new issues to resolve in the construction and implementation of AI agents. New issues of trust in technologies will arise if no common standards and regulations are defined. We seem to be just at the entry of the brave new world, or 1984 in 2024.
(Image: KI MS-Copilot: Three smartphones in form of different robots stand upright on a desk in a circle. Each displays text on a computer image.)
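The steganography idea is easy to demonstrate on text. The sketch below hides a message in zero-width Unicode characters appended to an innocuous cover sentence; it is a toy illustration of the technique, not any production watermarking or collusion scheme.

```python
# Toy text steganography: encode secret bits as invisible
# zero-width characters appended to a cover text.

ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{ord(c):08b}" for c in secret)
    return cover + "".join(ONE if b == "1" else ZERO for b in bits)

def reveal(stego: str) -> str:
    bits = "".join("1" if c == ONE else "0"
                   for c in stego if c in (ZERO, ONE))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

stego = hide("An innocent looking sentence.", "hi")
print(reveal(stego))  # hi  (invisible to a casual human reader)
```

Two communicating AI agents could pass such payloads through channels that look perfectly harmless to a human observer, which is exactly why collusion is hard to audit.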

AI input

AI is crucially dependent on the input it is built on. This has been the founding principle of powerful search engines like Google, which have come to dominate the commercial part of the internet. Crawling pages on the world wide web and classifying/ranking them by a number of criteria has been the successful business model. The content production was, and is, done by billions of people across the globe. Open access increases the amount of data available.
The business case for AI is not much different. At the 30th anniversary of the “Robots Exclusion Standard” we have to build on these original ideas to rethink our input strategies for AI as well. If there are parts of our input we do not want AI to use in its algorithms, we have to put up red flags by unlisting parts of the information we allow for public access. This may seem standard routine, but moving everything to the cloud might have made it much easier for owners of the cloud space to “crawl” your information, pictures or media files. Some owners of big data collections have decided to sell access to their treasures. AI can then learn from these data.
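The Robots Exclusion Standard itself is just a plain-text file of rules, and Python's standard library can evaluate it. The `robots.txt` content, crawler name and URLs below are invented for illustration; compliance is voluntary for the crawler.

```python
# Sketch: checking the 30-year-old Robots Exclusion Standard
# with the standard-library parser. Rules and URLs are invented.

from urllib.robotparser import RobotFileParser

rules = """
User-agent: ai-crawler
Disallow: /private/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("ai-crawler", "https://example.org/private/data"))
# False - the red flag is up for this crawler
print(parser.can_fetch("ai-crawler", "https://example.org/public/page"))
# True
```

Unlisting content for AI training works the same way in principle; whether a given crawler honors the flag is a matter of standards and, increasingly, of law.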
Restrictions also become clear. More up-to-date information might not be available for AI treatment. AI might lack the most recent information if it is breaking news. The strength of AI lies in the size of the data input it can handle, treat or recombine. The deficiency of AI is that it does not know whether the information in its database is valid or trustworthy. Wrong or outdated input, due to a legal change or a just-in-time change, will be beyond its scope. The algorithms therefore carry a latent risk: a bias towards the status quo. But learning algorithms can deal with this through continued learning and improvement of routines. In such a process it is crucial to have ample feedback on the valid or invalid outcomes of the algorithm. Controlling and evaluating outcomes becomes the complementary task for humans as well as AI. Checks and balances, as in democratic political systems, become more and more important.