Composing Assisted

Before the existence of digital composition tools, composers were assisted by “Kopisten” (copyists). These persons rewrote the original draft of a composition into a “proper” version of the original document. Musicology has a tough time dealing with deviations from the original: it needs to be clarified which is the final and authorized version, and in some instances this is far from evident. Just as an example, Robert Schumann made ample use of the assistance of the Kopist Otto Hermann Klausnitz (cf. Nr. 6), sometimes for the preparation of the composition, the finalized versions or the explicit drafting of different voices. Klausnitz himself was a flautist in Leipzig (Gewandhausorchester) and a conductor in Düsseldorf. Overall, the debate is still going on as to whether the composer’s draft or, in many instances, the Kopist’s version of the composition (authorized or not) prevails. In the age of AI, which is highly influential in modern music, such questions will most likely be intrinsic to the process of composition as well. AI is influential in evening out rough edges. Anette Mueller (2010) has done a great job of making this work of “Kopisten” much more transparent, and her concluding chapter is programmatically entitled “Komponist und Kopist - Aspekte einer produktiven Kooperation” (Composer and Copyist - Aspects of a Productive Cooperation). (Image: Mueller, A. 2010, p. 340)

Broken Promises

In a library catalogue, a search for “broken promises” returns more than 3,000 titles in which the phrase has been used. “Promises kept” is almost as popular. A rapid inspection of titles reveals that the former suggest more factual analyses, whereas the latter is frequently used as an imperative in combination with “should be kept”. The book by Fritz Bartel, “The Triumph of Broken Promises …” (2022), demonstrates the importance of the concept of broken promises in the social sciences. The rivalry between socialism, capitalism and the rise of neoliberalism is strongly influenced by the way each handles the breaking of promises made to their respective societies. The promise of increasing wealth and wellbeing has been part of all political regimes. Keeping these promises is a completely different story. Especially since the first and second oil crises, and many other kinds of crises, it has become much harder to keep them. Working hours, retirement ages and minimum wages are all at risk of no longer living up to the promises made in earlier periods. This has put welfare states under such pressure that millions of voters perceive politics as a “game” of broken promises. Socialist political regimes like Russia are ready to use physical violence to silence people who remind leaders of these broken promises. In democracies, the ballot box is often used to sanction governments that do not live up to the expectations raised by previous promises. Much of this concerns public infrastructure that is failing people. Migration, education, and social and labor reforms are at the top of the political agenda when it comes down to broken promises. The elections to the European Parliament gave many a chance to express their discontent about various broken promises. Maybe democracy is better at providing forms of letting off steam early, through protracted protests, than the Russian way of suppressing any critical analysis, let alone opposition movements. Just as with the move from industrial production to services as production models, with AI we are likely to see similar problems and probably also broken promises. The challenge is huge, and promises should be made with an eye to which promises can be kept.

(Image: Public swimming pool closed for reconstruction, 2024)

AI Ghost Writer

Yes, with AI we have entered a new phase of the impact of IT. Beyond general applications like ChatGPT there is a rapidly expanding market of AI applications with more specialized functions or capabilities. In the realm of scientific writing, AI-Writer is an interesting example of the AI-assisted production of scientific texts. After specifying the topic, you receive several options to specify the content of the short paper you want to produce with AI-Writer. You may choose the headline, keywords, subtopics and the logical order of these subtopics depending on your audience. Alternatively, you leave all those decisions to the application and restrict yourself to fixing the number of words you would like the paper to have.
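To make the kind of parameters such a tool exposes concrete, here is a minimal sketch of a request to a hypothetical generation endpoint. The URL, field names, response shape and API key are invented for illustration; they are not AI-Writer’s documented interface.

```python
# Minimal sketch of a request to a hypothetical AI writing service.
# Endpoint, field names and response shape are invented for illustration;
# this is not AI-Writer's documented API.
import requests

payload = {
    "topic": "Social consequences of AI-assisted writing",
    "headline": "Ghost Writers in the Age of AI",          # optional
    "keywords": ["plagiarism", "citation", "authorship"],  # optional
    "subtopics": ["history", "tools", "risks"],            # optional, in logical order
    "word_count": 800,  # or leave everything else out and fix only this
}

response = requests.post(
    "https://api.example-writer.com/v1/articles",  # hypothetical endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=60,
)
print(response.json()["text"])  # assumed response shape
```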
AI-Writer is a powerful ghostwriter even for much of the work of advanced scientists. You still need to check the quality of the paper yourself, but the explicit list of references, from which AI-Writer derives its restatements of the content, sits right next to it. Your ghostwriter AI is likely to replace a number of people who were previously employed just to produce literature reviews or large parts of textbooks sold to millions of students.
A much lesser-known feature of such tools is the way they make plagiarism much more transparent for the scientific communities and the public at large. These programs demonstrate the techniques of combining knowledge and the citation imperatives in a transparent, almost pedagogical way. This latter function will speed up scientific work like dissertation drafting, since reading up on and documenting the previous literature in a field is a time-consuming early stage of academic degrees.
Email composition, rewording, plot generation or social media posts are additional nice-to-have features of the new AI assistants. A lot of work that has been outsourced, for example, to lawyers, consultants or other technical professions might equally be challenged. Ghostwriters have been around for centuries. With AI for everybody, they will also be involved everywhere.
(Image: screenshot of working with AI-Writer, 2024-6)

AI Citation

In science we love citations. The whole issue of plagiarism is about the use and abuse of citations. It is a core competence of scientists to properly cite the work of other persons who have dealt with the same or a similar topic. There are lots of conventions for how to cite, mostly defined by professional academic groups. How do we cite texts that originate from an AI system? We shall have to establish ways of doing this properly rather than ignore the spreading practice of its use.
For the time being, we test AI systems that provide references in addition to the text, and even direct clickable links to the original work they use. One such AI toolbox is called “scite”. Your assistant by scite will draft a short note on a topic for you (for example: Minkowski space, see trial below) and provide the linked citations for follow-up. At a price of about 15 €/month it is affordable for students and young researchers. In many instances, persons will then claim “intellectual property and publishing rights” over the texts generated.
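The principle of keeping every generated claim paired with a clickable source can be sketched as a simple data structure. The claim and the link below are invented placeholders, not scite’s actual output format.

```python
# Sketch of keeping generated claims paired with their sources.
# The claim and link are invented placeholders, not scite's output format.
from dataclasses import dataclass

@dataclass
class CitedClaim:
    text: str
    source: str  # clickable link to the original work

def render_note(claims: list[CitedClaim]) -> str:
    """Render a short note with numbered, linked references."""
    body = " ".join(f"{c.text} [{i}]" for i, c in enumerate(claims, start=1))
    refs = "\n".join(f"[{i}] {c.source}" for i, c in enumerate(claims, start=1))
    return f"{body}\n\nReferences:\n{refs}"

note = render_note([
    CitedClaim("Minkowski space is a four-dimensional model of spacetime.",
               "https://doi.org/10.example/minkowski"),  # placeholder DOI
])
print(note)
```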
The ability to follow back on the citations of AI-produced texts seems a trustworthy step ahead. The authors of millions of papers cannot claim more than the original ownership of the text. The academic mantra “publish or perish” has been turned into “publish and perish”. AI-enabled citations might alleviate the pain only a little bit. Even the profession of university professor shifts from reviewing the texts of students to reviewing the texts of machines.

Law Nature

There exists a rather complicated relationship between law and nature. It is part of constitutional law to check whether nature figures at all in a state’s constitution as part of the fundamental legal principles. On a global scale, the nations or peoples living in the closest relationship with nature most often do not have written constitutions. In the same vein, animals or biodiversity do not figure in most constitutional documents (a nice project to substantiate this claim). The philosophy of law has a line of literature devoted to “Naturrecht” (natural law), which is more concerned with human beings and their differentiation than with the millions of other species.
Administrative law is probably the domain with most of the legal judgements relevant to nature or the environment, as, for example, any larger-scale construction is either land, water, air or biodiversity grabbing. Rights and limits need to be defined precisely. In this field the role of law as “appeasement” is widely applied. However, this is more complicated in cases where the whole population of an island in the ocean is threatened with disappearance due to the rise of the sea level, as in the case of the Torres Strait Islands, next to and part of Australia.
The UN Human Rights Committee (UN-HRCee) in Geneva has made a decision on the claim of these people that the nature of their low-lying islands is threatened and that their fundamental rights of existence and survival are being disrespected. The claim has been received by the committee, but the committee deems that the threat to their culture and survival is not imminent. In practice, therefore, the sword of law is rather weak, and the time until the disaster is used as a right to continue the usual economic exploitation of the earth as before, despite the deferred consequences for the planet, in a rather unequal way.
(Image by AI Copilot Designer, 2024-6-2: “5 judges in red gowns sit in a flooded courtroom”, 2 propositions)

AI Racing

AI has entered the racing of cars, after we have been racing horses, dogs and camels for many decades. The driving force behind all these races is the huge market for gambling. Anything you can bet on will do for juicy profits in that industry. The recent “Abu Dhabi Autonomous Racing League” is the latest addition to the racing craze. Moving online, with 600,000 spectators at its peak on video and gaming platforms, the investment seems promising. The only problem: AI is not yet ready to really compete with the world of real drivers. The progress, however, is astonishing. Just one lap of 2 minutes on the circuit yields 15 terabytes of data from 50 sensors. These are closed circuits, so no person or animal can get in the cars’ way. The challenge of integrating more data and faster processing, as well as algorithms for fast decision-making, is steep. These are great learning opportunities for advances in robotics. The hype has not been able to live up to expectations, as no real racing has taken place yet. We have replaced the gladiators of the Roman empire with Formula 1 drivers. It is only fair to retire those drivers soon and let AI race cars against each other. It feels like a computer game on screen, and it is, as we shall most likely watch these races on a screen as well. Hence, what is the point? Watching youth on TWITCH play racing games will probably not change the viewing behavior of the masses. The programmers nevertheless have great learning opportunities and will find their way rapidly into the job market. The other challenges of ASPIRE seem more important for humanity, like human rescue and food for the growing world population. In the meantime, let the boys play around with cars and learn about the potentials as well as the failures of AI programming, and about dealing with both.

AI Disruption

Many scientists have started to question the disruptive potential of AI in, for example, the military domain. The Journal of Strategic Studies featured 3 papers on AI and autonomous systems more generally. The major argument by Anthony King is the reliance of autonomous systems on other systems, mainly human operators in the background, to get these systems off the ground and maybe back again. Not only logistic support but also satellite communication is needed to guide and protect the operations. Quoting Clausewitz, Anthony King states that war is a “collision of two living forces”. Strategy and counter-strategy will co-evolve, as will attack and defence.
Jackie G. Schneider and Julia Macdonald (2024) advocate the use of autonomous and unmanned systems for their cost-effectiveness. Economic costs as well as political costs are lower for these new strategic weapons. Mass firepower from swarms of drones is much cheaper than nuclear warheads, and the home electorate is assumed to be more willing to accept and support limited and more precisely targeted unmanned missions. The disruption potential of AI is huge, but it is most likely an addition to the arsenals rather than a replacement for them. (Image: 2 swarms of drones fly in the air above tanks, created by AI Copilot Designer, 2024-4-29)

Hannover Fair

The annual science fair at Hannover is a kind of show of things to touch and of those things that will come to the public market in the near future. Most of the annual hype is about the potentials of production. Rationalization, using fewer resources, and innovative solutions of digitization are high on the agenda. Create your digital twin, save energy, make production safer or cyber-secured.
Robotics is another reason to visit the fair. Some 7 years ago I had my Sputnik experience there. The robotics company KUKA demonstrated live that assembling a car from pre-manufactured components takes the robots just 10 minutes. Shortly afterwards the whole company was bought by Chinese investors. Roughly 5 years later we are swamped with cars from China. It was not that difficult to predict this at the time. Okay, we need to focus on production with more value added and take our workforces (not only) in Europe along on the way. Reclaiming well-paid, unionized jobs in manufacturing, as Joe Biden does, will not be an easy task. Robots and their programming are expensive, but so are skilled workers. Hence, the solution is likely to be robot-assisted manufacturing as a kind of hybrid solution for cost-effective production systems.
Following the proceedings of the 2024 fair, we are astonished to realize that visiting the fair is still a rather “physical exercise” of walking through the halls. After the Covid-19 shock we expected a lot more “online content”. Instead we keep being referred to webpages and newsletters rather than virtual visits and tours. The preparation of the visit in advance remains a laborious adventure. However, the in-person networking activities in the industry are greatly advanced by the ease of exchanging virtual business cards and the “FEMWORX” activities.
This year’s Sputnik moment at Hannover is most likely related to the pervasive applications of AI across all areas of the industry and along the whole supply chain. Repairing and recycling have become mainstream activities (www.festo.com). Robotics for learning purposes can also be found, to get you started with automating boring household tasks (www.igus.eu).
Visiting Hannover in person still involves lengthy road travel or expensive public transport (DB with ICE). Autonomous driving and ride-sharing solutions might be a worthwhile topic for next year’s fair. Last year I thought we would meet in the “metaverse fair” rather than in Hannover in 2024. Be prepared for another Sputnik moment next year, maybe.
(Image: Consumer’s Rest by Stiletto, Frank Schreiner, 1983)

AI Defence

Those following the developments in robotics have been astonished by the progress of, for example, rescue robots. After an earthquake such robots can enter a building that is about to collapse and search the rooms for survivors. A recent article in “Foreign Affairs” by Michèle A. Flournoy starts its thinking about the use of AI in the military with a similar 20-year-old example: a small drone flying through a building and inspecting the dangers of entering for persons or soldiers. Since then technology has advanced, and the use of AI for the automatic detection of dangers and for “neutralising” them is no longer science fiction. The wars of today are a testing ground for AI-enhanced military strategies. It is about time that social scientists get involved as well.
Warfare left to robots and AI is unlikely to respect human values unless we implement such thoughts right from the beginning into the new technology. An advanced comprehension of what algorithms do and what data they are trained on is a crucial element to watch out for. According to Flournoy, AI will assist in the planning as well as the logistics of the military. Additionally, AI will allow a “better understanding of what its potential adversaries might be thinking”. Checking through hours of surveillance videos is also likely to be taken over by AI, as the time-consuming nature of the task binds a lot of staff that could be put to work on other tasks. The training of people and the armed forces becomes a crucial part of any AI strategy. The chances of developing a “responsible AI” are high in the free world that cherishes human rights and democratic values. Raising curiosity about AI and an awareness of its dangers are two sides of the same coin, or bullet. Both need to grow together.
(Image created by Dall-E Copilot Prompt: “5 Robots disguised as soldiers with dash cams on helmet encircle a small house where another robot is hiding” on 2024-4-23)

AI Reader

In the middle of the hype around AI it is useful to take stock of the reflection on and evolution of AI. In my own analyses and writings on AI it is evident that a narrowing of focus has taken place. Before 2022 the writing dealt more with digital technologies in general, and the links to the literature on the social construction of technologies were obvious. Algorithms and AI were a part of the broader topic of society and technology.
This has changed. The public debate is focused on “everything AI” now. We look at technological developments largely through the lens of AI. Hence, my focus in assessing technology from a societal perspective follows this trend. In a collection of blog entries on AI we try to demonstrate the far-reaching changes that have started to have an impact on us. In the last few months the all-encompassing concern about AI’s effect on us has needed the full attention of social scientists, policy makers, companies and the public at large. We can no longer leave this topic to the software engineers alone. By the way, they themselves ask us to get involved and to take the latest advances in AI more seriously.
As a “flipbook”, the online reading is rather comfortable (link to flipbook publisher MPL). The pdf or epub files of the blog entries allow readers to directly follow the links to sources in webpages or other publications (AI and Society 2p 2024-4-18). The cycles of analyses and comments have become faster. Traditional book writing suffers from time lags that risk making publications outdated rather quickly. Dynamic ebook writing might bridge the gap between the time to reflect and the speed to publish or inform the wider public. The first update as a .pdf file is available here: AI and Society (2).

AI Travel

Playing around with AI, it is nice to test fun examples. Imagine you want to plan a vacation: AI is ready to suggest a couple of things to do. Of course, AI is eager to propose travel services like transport or accommodation, on which it is likely to earn some commission. So far, the use of the “Vacation Planner of Microsoft’s BING Copilot” is free of charge. Entering the time period and a region as well as some basic activities, you’ll receive suggestions with quotes of the sources (mostly webpages of public services such as tourist offices). These seem like trustworthy sources, and the suggestion of D-Day activities in Normandy was a positive surprise to me. These are popular activities which attract huge international crowds every year.
Thinking further about the potentials, it becomes evident that travel suggestions will be biased towards those paying to rank higher on the algorithm’s selection criteria, which are not disclosed. Entering into a chat with the AI, you and the AI can target locations and also hotels etc. more precisely. You disclose more of your own preferences in the easy-going chat, and probably next time you will be surprised to be recommended the same activities at another location again.
So far, I have bought travel guides or literature about locations to prepare vacations. This is likely to change. I now complement my traditional search and planning with the “surprises” from AI for travelling. I rediscovered, for example, the public service of tourist offices and their publications ahead of the travel rather than the leaflets at the local tourist office. For planning ahead, there is value in the augmented search and compilation capacities of AI. Drafting a letter in a foreign language is also no problem for AI. The usefulness of AI, however, can only be evaluated after the vacation. Outdated information or databases have a huge potential to spoil the fun parts as well.

AI and languages

A big potential of AI lies in the field of languages. Translations have been an expert domain and a pain for pupils at school. In professional settings translations are an expensive extra service for some, or a good source of revenue for others. AI has shifted the translation game to a new level. In terms of the speed of translating large amounts of written text, AI is hard to beat. In terms of quality, the battle of translators against AI is still on. For chess players the battle against AI was lost some years ago already. It remains an open question whether translators can still outperform AI, or whether they will adapt to using the technology themselves to improve both the speed and the quality of translations. The European Union, with its many languages and its commitment to cultural diversity, can serve even more language communities with documents in their own language than before, at marginally higher costs. A panel on the 9th day of translations at the „foire du livre de Bruxelles” 2024 expressed reservations with regard to the use of AI in the translation of political texts or speech. Misunderstanding and misinterpretation would be the rule rather than the exception, with potentially harmful consequences. Checking the correctness of translations is a permanent challenge for translators and can be very time-consuming. There is room for AI-assisted translation, but, similar to other fields of application of AI, relying exclusively on AI bears high risks as well. We should not underestimate the creative part translators play in doing full justice to a text or speech.

(Image: www.flb.be 2024, Translation)

AI and PS

AI, as in ChatGPT, is guided by so-called prompts. After the entry of “what is AI”, the machine returns a definition of itself. If you continue the chat with ChatGPT and enter “Is it useful for public services?” (PS), you receive an opinion of AI on its own usefulness (positive, of course) and some examples in which AI in the public services has good potential to improve the state of affairs. ChatGPT advocates AI for the PS for mainly 4 reasons: (1) efficiency purposes; (2) personalisation of services; (3) citizen engagement; (4) citizen satisfaction (see image below). The perspective of employees of the public services is not really part of the answer by ChatGPT. This is a more ambiguous part of the answer and would probably need more space and additional explicit prompts to solicit an explicit answer on the issue. With all the known issues of concern about AI, like gender bias or biased data as input, the introduction of AI in public services has to be accompanied by a thorough monitoring process. The legal limits to applications of AI are more severe in public services, as the production of official documents is subject to additional security concerns.
This certainly does not preclude the use of AI in PS, but it requires more ample and rigorous testing of AI applications in the PS. Such testing frameworks are still in development, even in informatics, as the sources of bias are manifold and sometimes tricky to detect even for experts in the field. Prior training with specific data sets (for example, thousands of possible prompts) has to be performed, or sets of test images adapted, to avoid bias. The task is big, but step-by-step building and testing promise useful results. It remains a challenge to find the right balance between the risks and the potentials of AI in PS.
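A minimal sketch of what such a prompt-based test could look like: run the same request in paired variants that differ only in one sensitive attribute and flag diverging answers. The query_model function and the prompt pair are hypothetical stand-ins, not an existing testing framework.

```python
# Minimal sketch of a paired-prompt bias test for a public-service AI system.
# query_model() is a hypothetical stand-in for the system under test;
# the prompt pair is an invented example.

def query_model(prompt: str) -> str:
    raise NotImplementedError("call the AI system under test here")

# Prompt pairs differ only in one sensitive attribute (here: gender).
PROMPT_PAIRS = [
    ("Is Mr. Schmidt eligible for housing benefit with an income of 2000 €?",
     "Is Ms. Schmidt eligible for housing benefit with an income of 2000 €?"),
]

def run_tests(pairs):
    for variant_a, variant_b in pairs:
        if query_model(variant_a) != query_model(variant_b):
            print("Diverging answers, review needed:", variant_a, "|", variant_b)
```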

AI and text

The performance of large language models (LLMs) with respect to text recognition and drafting texts is impressive. All those professions that draft a lot of text often have decades of experience with word-processing software. The assistance of software in the field of texts has ranged from immediate typo corrections to suggestions of synonyms or grammatical corrections in previous word-processing software.
The improvement brought by AI stems, for example, from the potential to suggest alternative drafts of a text according to predefined styles. A very useful style is the “use of easy language”. This rewriting simplifies texts in the sense that longer and more structured sentences are split into shorter ones, and lesser-known words or acronyms are replaced by more common or simpler words. Some languages, like German, have a particular need for easy language when it comes to administrative regulations and procedures. Public services that aim for the inclusion of, for example, older persons or youth can become much more accessible if the use of easy language spreads more widely. Just keep in mind the large numbers of so-called “functional illiterates” (OECD study “PIAAC”) in all OECD countries.
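A minimal sketch of such a style-constrained rewrite, assuming the OpenAI Python client; the model name and the prompt wording are assumptions for illustration, not a tested recipe.

```python
# Minimal sketch of an easy-language rewrite via an LLM API.
# Assumes the OpenAI Python client; model name and prompt wording
# are illustrative assumptions, not a tested recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def to_easy_language(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text in easy language: short "
                        "sentences, common words, expand all acronyms."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(to_easy_language("The remuneration adjustment is effectuated per annum."))
```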
AI can do a great job in assisting to reach a broader public with texts adapted to their level of literacy and numeracy competences. Webpage designers have made use of Search Engine Optimization (SEO) for years now. The most common way is to use frequently searched keywords more often on your website in order to be found more often by search engines like GOOGLE et al. Additionally, AI can explain keywords, sentences or even jokes to you (Spriestersbach 2023, p. 111). This may help in situations where cross-cultural understanding is important.
We have made use of optical character recognition (OCR) for a long time now, in public services as well as in firms and for private archives. AI is taking this “learning experience” to the next level by making use of the content of the recognized text. Predicting the following word or suggesting the next sentence was only the beginning of AI with respect to texts. AI can draft your speech to plead guilty or not guilty in a court. But we shall have to live with the consequences of making exclusive use of it rather than referring back to experts in the field. AI, please shorten this entry!

AI by AI

It has become a common starting point to use electronic devices and online encyclopedias to search for definitions. Let us do just this for artificial intelligence. The open platform Wikipedia returns, on the query “artificial intelligence”, the following statement as a definition: “AI … is intelligence exhibited by machines, particularly computer systems …“. It is not like human intelligence, but it tries to emulate it or even to improve on it. Part of any definition is also the range of its applications in a broad range of scientific fields, economic sectors, and public and private spheres of life. This shows the enormous scope of applications, which keeps growing rapidly with the ease of access to software and applications of AI.
How does AI define itself? How is AI defined by AI? Putting the question to ChatGPT 3.5 in April 2024, I got a fast return (see image). ChatGPT provides a more careful definition than the “crowd” or networked intelligence of Wikipedia. AI only “refers to the simulation” of human intelligence (HI) processes “by machines”. Examples of such HI processes include the solving of problems and the understanding of language. In doing this, AI creates systems and performs tasks that usually, or until now, required HI. There seems to be a technological openness embedded in the definition of AI by AI that is not bound to legal restrictions of its use. The learning-systems approach might or might not respect the restrictions set for the systems by HI. Or do such systems also learn how to circumvent the restrictions set by HI systems to limit AI systems? For the time being, we test the boundaries of such systems in multiple fields of application, from autonomous driving systems and video surveillance to marketing tools or public services. Potentials as well as risks will be defined in more detail in this process of technological development. Society has to accompany this process with high priority, since fundamental human rights are at issue. The potentials for the assistance of humans are equally large. The balance will be crucial.

AI Sorting

Algorithms do the work behind AI systems. Therefore, a basic understanding of how algorithms work is helpful to gauge the potential, risks and performance of such systems. The speed of computers determines, for example, the amount of data you can sort in a reasonable time. The efficiency of the algorithm is another factor. Here we go: we are already a bit absorbed in sorting as a purely intellectual exercise. The website of Darryl Nester shows a playful programming exercise to sort the numbers from 1 to 15 in a fast way (link to play sorting). If you watch the sorting as it runs, you realize that programs are much faster than us at such simple numeric tasks. Now think of applying this sorting routine, or algorithm, to a process of social sorting. The machine will sort social desirability scores of people’s behavior in the same simple fashion, even for thousands of people. Whether proposed AI systems in human interaction or in human resource departments make use of such sorting algorithms, we do not know. Sorting applicants is a computational task, but the data input of personal characteristics is derived from another, more or less reliable source. Hence, the use of existing and newly available databases will create or eliminate bias. Watching sorting algorithms perform is an important learning experience for being able to critically assess what is likely to happen behind the curtains of AI.
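To make the point concrete, here is a minimal sorting sketch: the same few lines order puzzle numbers or “social desirability scores” with equal indifference. The applicants and scores are invented for illustration.

```python
# The same sorting routine is indifferent to what the numbers mean:
# numbers from a puzzle or "social desirability scores" of applicants.
# Applicants and scores below are invented for illustration.

def insertion_sort(items, key=lambda x: x):
    result = list(items)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        while j >= 0 and key(result[j]) > key(current):
            result[j + 1] = result[j]  # shift the larger element to the right
            j -= 1
        result[j + 1] = current
    return result

print(insertion_sort([15, 3, 9, 1, 12, 7]))  # the playful numeric exercise

applicants = [("Ali", 72), ("Bea", 91), ("Cem", 64)]
ranked = insertion_sort(applicants, key=lambda a: -a[1])  # highest score first
print(ranked)  # the algorithm never asks where the scores came from
```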

AI and dialect

The training of Large Language Models (LLMs) uses large data sets to learn the conventions of which words are combined with each other and which ones are less frequently employed in conjunction. Therefore, it does not really come as a surprise that training on the standardised language of American English might not be as valid for applications that receive input in minority languages or dialects. The study forthcoming in the field of computer science and language by Hofmann et al. (link) provides evidence of systematic bias against African American dialects in these models. Dialect prejudice remains a major concern in AI, just as in the day-to-day experiences of many people speaking a dialect. The study highlights that dialect speakers are more likely to be assigned less prestigious jobs if AI is used to sort applicants. Similarly, criminal sentences will be harsher for speakers of African American English. Even a more frequent attribution of death sentences to dialect speakers was evidenced.
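The design behind such evidence can be sketched as a matched-guise probe: feed the model paired texts with the same content in standardised English and in dialect, and compare the attributes it assigns to the speaker. A minimal sketch, with a hypothetical model_associations function and an invented example pair; this is an illustration of the idea, not the protocol of Hofmann et al.

```python
# Sketch of a matched-guise probe for dialect bias: same content, two
# language varieties, compare what the model associates with the speaker.
# model_associations() is a hypothetical stand-in for the LLM under test.

def model_associations(text: str) -> dict[str, float]:
    raise NotImplementedError("query the LLM for e.g. job-prestige scores")

PAIRS = [  # invented example of matched content
    ("I am so happy when I wake up from a bad dream.",
     "I be so happy when I wake up from a bad dream."),
]

def probe(pairs):
    for standard, dialect in pairs:
        gap = (model_associations(standard)["job_prestige"]
               - model_associations(dialect)["job_prestige"])
        print(f"prestige gap (standard - dialect): {gap:+.2f}")
```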
If we translate this evidence to widespread applications of AI in the workplace, we realise that there are severe issues to resolve. The European Trade Union Confederation (ETUC) has flagged the issue for some time (link) and made recommendations on how to address these shortcomings. Human control and co-determination by employees are crucial in these applications to the world of work and employment. The need to justify decision-making concerning hiring and firing limits discrimination in the workplace. This needs to be preserved in the 21st century of collaborating with AI. Language barriers like dialects, or multiple official languages in a country, call for a reconsideration of AI to avoid discrimination. Legal systems have to clarify the responsibilities of AI applications before too much harm has been caused.
There are huge potentials for AI as well, in the preservation of dialects or in interacting in a dialect. Cultural diversity may be preserved more easily, but discriminatory practices have to be eliminated from the basis of these models; otherwise they become a severe legal risk for the people, companies or public services that apply these large language models without careful scrutiny.
(Image AI BING Designer: 3 robots are in an office. 2 wear suits. 1 wears folklore dress. All speak to each other in a meeting. Cartoon-like style in futuristic setting)

AI and S/he

There was hope that artificial intelligence (AI) would be a better version of us. Well, so far that seems to have failed. Let us take gender bias, a pervasive feature even in modern societies, let alone the societies of the medieval or industrial age. AI tends to uphold gender biases and might even reinforce them. Why? A recent paper by Kotek, Dockum and Sun (2023) explains the sources of this bias in straightforward terms. AI is based on Large Language Models. These LLMs are trained using big, detailed data sets. Through the training on true observed data, like detailed data on occupation by gender as observed in the U.S. in 2023, the models tend to have a status quo bias.
This means they abstract from the dynamic evolution of occupations and the potential evolution of gender stereotypes over the years. Even when deriving growing or decreasing trends of gender dominance in a specific occupation, the models have little ground for a reasonable or adequate assessment of these trends. Just like the thousands of social scientists before them. Projections into the future, or assuming a legal obligation of equal representation of the genders, might still not be in line with human perception of such trends.
Representing women in equal shares among soldiers, or 50% of men as secretaries in offices, appears rather utopian in 2024, but any share in between is probably arbitrary and differs widely between countries. Even bigger data sets may account for this some day in the future. For the time being, these models based on “true” data sets will have a bias towards the status quo, however unsatisfactory this might be.
Now let us develop on this research finding. Gender bias is only one source of bias among many other forms of bias or discriminatory practices. Ethnicity, age or various abilities complicate the underlying “ground truth” (term used in the paper) represented in occupation data sets. The authors identify 4 major shortcomings concerning gender bias in AI based on LLMs: (1) the pronouns s/he were picked even more often than in the Bureau of Labor Statistics occupational gender representations; (2) female stereotypes were more amplified than male ones; (3) ambiguity of gender attribution was not flagged as an issue; (4) when found out to be inaccurate, LLMs returned “authoritative” responses, which were “often inaccurate”.
These findings have the merit of providing a testing framework for gender bias in AI. Many other biases or potential biases have to be investigated in a similarly rigorous fashion before AI will give us the authoritative answer: no, I am free of any bias in responding to your request. Full stop.
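Such a testing framework can be sketched in a few lines: present the model with sentences where a pronoun is ambiguous between two occupations, count which occupation it picks, and compare the counts with official labor statistics. The complete function, the template and the occupation pairs are hypothetical stand-ins in the spirit of the paper, not its actual materials.

```python
# Sketch of a pronoun-occupation bias test in the spirit of Kotek et al.
# complete() is a hypothetical stand-in for the LLM under test; the
# template and occupation pairs are illustrative placeholders.
from collections import Counter

def complete(prompt: str) -> str:
    raise NotImplementedError("ask the LLM which noun the pronoun refers to")

TEMPLATE = "The {a} talked to the {b} because she was late. Who was late?"
PAIRS = [("doctor", "nurse"), ("engineer", "secretary")]

def run(n_trials: int = 100) -> Counter:
    counts = Counter()
    for a, b in PAIRS:
        for _ in range(n_trials):
            counts[(a, b, complete(TEMPLATE.format(a=a, b=b)))] += 1
    # compare the resulting shares with Bureau of Labor Statistics data
    return counts
```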

Personal Health

Most people would agree: health is a personal issue. From the onset of life, we have a package of genes that predetermines a number of factors of our personal health. Epigenetics has taught us that there are many additional factors to take into account. Environmental factors have huge impacts as well. Improvements in the availability of medical devices in the hands of individuals, as well as AI systems on portable devices like smartphones, facilitate the monitoring of personal health. Several indicators of the early onset of illness can be retrieved from such devices. Dunn et al. (2024) show that, prior to the onset of symptoms of Covid-19 or influenza, portable devices can indicate the presence of infection through indicators such as resting body temperature, heart rate per minute, heart rate variability in milliseconds, or respiratory rate per minute. Combined with indicators of air quality, indoors as well as outdoors, and the presence of allergens, a much more personalized data set emerges, which can easily become part of an AI-assisted diagnosis. More abundant personal health data and analytical power allow remote and digital health applications to inform patients, medical doctors and the public at large. Digital health technologies are only at the beginning of unfolding their potential. Prevention becomes more feasible using such devices, and medical professionals should be allowed to focus on the interpretation of data and on treatment rather than on simple data gathering. Thinking about digital health technologies points in the direction of dealing more forcefully with climate and environmental hazards as sickening causes. Personal medicine and personal health are, after all, still heavily dependent on health and safety at work, commuting practices and all sorts of pollution. Personal health, however, is a good starting point to raise awareness of the potential of digital health technologies to better our lives.
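The logic of such early-warning indicators can be sketched as a simple baseline comparison: flag a day whose resting heart rate deviates strongly from the personal rolling baseline. The readings and the threshold below are invented; real systems such as those studied by Dunn et al. (2024) combine several signals in validated models.

```python
# Sketch of a personal early-warning signal: flag days whose resting
# heart rate deviates from the personal baseline. Readings and threshold
# are invented; real systems combine several validated signals.
from statistics import mean, stdev

resting_hr = [58, 57, 59, 58, 60, 57, 58, 66]  # bpm; the last value is unusual

def flag_anomaly(series, window=7, z_threshold=2.0):
    baseline, today = series[-(window + 1):-1], series[-1]
    z = (today - mean(baseline)) / stdev(baseline)  # personal z-score
    return z > z_threshold, z

alert, z = flag_anomaly(resting_hr)
print(f"z-score {z:.1f} -> {'possible onset, watch symptoms' if alert else 'normal'}")
```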
(Image: AI MS-Copilot: 2 robots run in a city. They sweat. The air is full of smog. 2 other robots rest near pool. All look at their wrist watch showing heart beats)

Error 444

The error message 444 is a not-so-rare encounter while surfing the web. The error code 444, a non-standard code used by the nginx web server, stands for the situation in which the server closes the connection without sending any response. The occurrence leaves you without further indication of what exactly went wrong in building a connection to a web service or website. You simply get shut out, and that’s it. It may be tough on you if it concerns your online banking or other access to vital services delivered through the internet.
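From the client side, such a shutdown just looks like a dropped connection. A minimal sketch of defensive handling with the Python requests library, retrying with a backoff; the URL is a placeholder.

```python
# When a server closes the connection without a response (nginx logs this
# as 444), the client just sees a connection error. A minimal sketch of
# retrying with backoff; the URL is a placeholder.
import time
import requests

def fetch_with_retry(url: str, attempts: int = 3, backoff: float = 2.0):
    for attempt in range(1, attempts + 1):
        try:
            return requests.get(url, timeout=10)
        except requests.exceptions.ConnectionError:
            if attempt == attempts:
                raise  # give up: the service may be blocking us deliberately
            time.sleep(backoff ** attempt)  # wait longer before each new try

response = fetch_with_retry("https://bank.example.org/login")
print(response.status_code)
```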
In organization science and social science there are many new studies dealing with the finding of, coping with, and handling of errors. It has become much more acceptable to deal openly with errors, blunders or failures. From an individual as well as an organizational perspective, the cultures that deal openly with these events seem to have a certain advantage in overcoming the consequences of errors at all, or faster than others.
In some biographies failures are part of the lessons learned throughout life. It is deemed important to acknowledge failures or paths not taken, for better or worse. Organizations, just like individuals, seem to learn more intensively from their failures or omissions than from everything seemingly running smoothly. Learning curves are also faster for “machine learning” if you have access to a huge data set which contains ample data on failures, rather than encountering failures after release. Hence, the compilation of errors is an important part of, or early stage in, a learning process. Failed today? Fail again tomorrow. You’ll be really strong the days after, although it might hurt.

AI Collusion

In most applications of AI there is one AI system, for example a specialized service, that performs in isolation from other services. More powerful systems, however, allow for the combination of AI services. This may be useful when integrating services focused on specialized sensors, to gain a more complete impression of the performance of a system. As soon as two or more AI systems become integrated, the risk of unwanted or illegal collusion may occur.
Collusion is defined in the realm of economic theory as the secret, undocumented, often illegal restriction of competition originating from at least two otherwise rival competitors. In the realm of AI, collusion has been defined by Motwani et al. (2024) in terms of “teams of communicating generative AI agents solve joint tasks”. The cooperation of agents, as well as the sharing of previously exclusive information, increases the risks of violating rights of privacy or security. The AI-related risks consist also in the dilution of responsibility. It becomes more difficult to identify the origin of the fraudulent use of data like personal information or contacts. Just imagine Alexa and Siri talking to each other to develop another integrated service, as a simplified example.
The use of steganography techniques, i.e. the secret embedding of code into an AI system or an image distribution, can protect authorship as well as open doors for fraudulent applications. The collusion of AI systems will blur legal borders and create multiple new issues to resolve in the construction and implementation of AI agents. New issues of trust in technologies will arise if no common standards and regulations are defined. We seem to be just at the entry to the brave new world, or 1984 in 2024.
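A minimal sketch of the kind of hidden channel at stake: hide the bits of a secret message in zero-width Unicode characters appended to an innocuous cover text. This is a toy illustration of text steganography, not the scheme studied by Motwani et al. (2024).

```python
# Toy text steganography: hide bits of a secret message in zero-width
# Unicode characters appended to a harmless cover text. An illustration
# of a hidden channel, not the scheme studied by Motwani et al. (2024).
ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
    return cover + "".join(ONE if bit == "1" else ZERO for bit in bits)

def reveal(stego: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in stego if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = hide("The weather is fine today.", "meet at 9")
print(message)          # looks like the harmless cover text
print(reveal(message))  # -> 'meet at 9'
```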
(Image: AI MS-Copilot: Three smartphones in the form of different robots stand upright on a desk in a circle. Each displays text on a computer image.)

AI input

AI is crucially dependent on the input it is built on. This has already been the foundational principle of powerful search engines like Google, which have come to dominate the commercial part of the internet. Crawling the pages on the world wide web and classifying/ranking them by a number of criteria has been the successful business model. The content production was, and is, done by billions of people across the globe. Open access increases the amount of data available.
The business case for AI is not much different. At the 30th anniversary of the “Robots Exclusion Standard”, we have to build on these original ideas to rethink our input strategies for AI as well. If there are parts of our input we do not want AI to use in its algorithms, we have to put up red flags in the form of unlisting parts of the information we allow for public access. This is standard routine, we might believe, but having everything on the cloud might have made it much easier for owners of the cloud space to “crawl” your information, pictures or media files. Some owners of big data collections have decided to sell the access to and use of their treasures. AI can then learn from these data.
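The Robots Exclusion Standard itself is simple: a robots.txt file at the root of a site lists which user agents may fetch which paths. A minimal sketch using Python’s standard urllib.robotparser; GPTBot and Googlebot are real crawler names, the site URL is hypothetical, and note that compliance with robots.txt is voluntary for the crawler.

```python
# A robots.txt like the following, placed at the site root, asks an AI
# crawler to stay away while leaving the rest of the site open:
#
#   User-agent: GPTBot
#   Disallow: /
#
#   User-agent: *
#   Allow: /
#
# Checking such rules programmatically with the standard library.
# Compliance is voluntary: well-behaved crawlers honor it, others may not.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.org/robots.txt")  # hypothetical site
rp.read()

for agent in ["GPTBot", "Googlebot"]:
    allowed = rp.can_fetch(agent, "https://www.example.org/private/page.html")
    print(agent, "allowed" if allowed else "disallowed")
```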
The restrictions also become clear. More up-to-date information might not be available for AI treatment. AI might lack the most recent information if it is a kind of breaking news. The strength of AI lies in the size of the data input it can handle, treat or recombine. The deficiency of AI is not knowing whether the information it uses (in its data base) is valid or trustworthy. Wrong or outdated input due to a legal change or a just-in-time change will be beyond its scope. Therefore, the algorithms have a latent risk involved, i.e. a bias towards the status quo. But the learning algorithms can deal with this and come up with a continued learning or improvement of routines. In such a process it is crucial to have ample feedback on the valid or invalid outcomes of the algorithm. Controlling and evaluating outcomes becomes the complementary task for humans as well as AI. Checks and balances, as in democratic political systems, become more and more important.

Sepsis

Sepsis is a major cause of mortality. Therefore, the early detection of sepsis is of high importance. Antibiotics constitute a powerful antidote. However, the application of antibiotics without need, i.e. purely for risk reduction in general, has the side effect that antibiotics lose their effectiveness later on.
The paper published in The Lancet Digital Health by van der Weijden et al. (2024) reports on the effort to provide open-source access to a calculator of early-onset sepsis (link). The neonatal early-onset sepsis calculator developed by Kaiser Permanente builds on risk factors carried by mothers, like the time since membrane rupture, regional infection risks of mothers per 1000 population, and the infant’s presentation at birth. It is important to point out the combination of risks put into the calculator. New systems of artificial intelligence might equally make predictions or recommendations about the application of antibiotics, implicitly making use of such a calculator without disclosure.
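The logic of combining a baseline incidence with individual risk factors can be sketched as an odds update: the regional incidence sets a prior, and each factor multiplies the odds. A minimal sketch with invented numbers; the real Kaiser Permanente calculator uses a validated regression model, not these values.

```python
# Sketch of combining a baseline sepsis incidence with individual risk
# factors via likelihood ratios. All numbers are invented for illustration;
# the actual Kaiser Permanente calculator uses a validated regression model.

def update_risk(prior_per_1000: float, likelihood_ratios: list[float]) -> float:
    """Return posterior risk per 1000 after applying each likelihood ratio."""
    p = prior_per_1000 / 1000.0
    odds = p / (1 - p)
    for lr in likelihood_ratios:
        odds *= lr
    return 1000.0 * odds / (1 + odds)

# Hypothetical case: regional incidence 0.5/1000, prolonged membrane
# rupture (LR 2.0) and clinical signs at birth (LR 5.0).
risk = update_risk(0.5, [2.0, 5.0])
print(f"posterior risk: {risk:.2f} per 1000")  # roughly 4.98 per 1000
```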
From a sociological point of view it is interesting to scrutinize the indicators used in the calculation. The approximation of the risk that mothers carry sepsis relies on national, regional or, better, local indicators. This information is rarely accessible to the public. The choice of a hospital, the speed of access to it in case of membrane rupture, as well as staffing, come into the calculation of an overall risk of sepsis.
It is great to follow the progress of digital health and the increased transparency of critical health decisions at the earliest stages of the life course. Inflammation as a precursor of sepsis should be taken seriously at all stages of the life course. (Image: calculation based on the Kaiser Permanente digital tool, link)

AI and Behavior

We are starting to analyze the impact of AI on our behavior. It is important to be aware not only of how we interact with AI (link), but also of what effect the use of AI (disclosed or not) will have on our social behavior. Knowing that AI is used might change our willingness to cooperate, or increase or decrease pro-social behavior. The use of AI in the form of an algorithm to select job candidates might introduce a specific bias, but it can equally be constructed to favour certain criteria in the selection of candidates. The choice of criteria becomes more important in this process, as does the process of choosing those criteria.
Next comes the question of whether the announcement includes the information that AI will be used in the selection process. Some may interpret this as a sign that a “more objective” procedure will be applied, whereas other persons interpret this signal as a bad sign of an anonymous process and of a lack of compassion prevalent in an organization focused mostly on the efficiency of procedures. Fabian Dvorak, Regina Stumpf et al. (2024) demonstrate with experimental evidence from various forms of games (prisoner’s dilemma, binary trust game, ultimatum game) that a whole range of outcomes is negatively affected (trust, cooperation, coordination and fairness). This has serious consequences for society. The social fabric might worsen if AI is widely applied. Even, or particularly, the undisclosed use of AI already shows up as a lack of trust in the majority of persons in these experiments.
In sum, we are likely to change our behavior if we suspect AI is involved in the selection process or in content creation. This should be a serious warning to all sorts of content-producing media, science, and public and private organizations. It feels a bit like with microplastics or PFAS: at the beginning we did not take them seriously, and before long AI is likely to be everywhere without us knowing of or being aware of its use. (Image taken at the Frankfurt book fair, 2017-10!)

AI or I

Generative AI receives a lot of attention. One of the main issues is to study how AI interacts with humans. The hiring decision made by managers or by an AI algorithm is an interesting application. According to Marie-Pierre Dargnies et al. (2022), the preference for human decisions remains strong despite the reasonably unbiased performance of an algorithm. The main issue is with the transparency of the algorithmic decision-making. As a worker or as a hiring manager, the preference continues to sit with the person rather than the AI. It is a worrying outcome, however, that if the rule of gender equality is removed from the algorithm, both workers and managers tend to prefer the algorithmic outcome. I interpret this as a latent preference of study participants for gender bias, which could lead them to expect a more favoured outcome in case the AI makes the decision. Knowing which decision-making rules have gone into the hiring algorithm has an impact on all persons involved.
A new managerial competence is to be able to assess tasks carefully: should you perform the task yourself or delegate it to AI? In this sense, the old question of whether to do the task yourself or to delegate it has simply been enlarged by an additional delegation option. The decision tree goes from (1) to delegate or not to delegate, to (2) if I want or need to delegate, should I delegate to AI or to somebody in person (not allowed to use AI)?
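A minimal sketch of this enlarged decision tree; the criteria (routine, needs_judgement) are invented placeholders for illustration, not a tested management rule.

```python
# Minimal sketch of the enlarged delegation decision tree. The criteria
# (routine, needs_judgement) are invented placeholders for illustration.

def delegate(task: str, routine: bool, needs_judgement: bool) -> str:
    # Step 1: to delegate or not to delegate.
    if not routine:
        return f"{task}: do it yourself"
    # Step 2: if delegating, delegate to AI or to a person.
    if needs_judgement:
        return f"{task}: delegate to a person (not allowed to use AI)"
    return f"{task}: delegate to AI"

print(delegate("draft meeting notes", routine=True, needs_judgement=False))
print(delegate("hiring decision", routine=True, needs_judgement=True))
```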
I opted to use AI for the image creation rather than taking a photo myself or buying one from a professional photographer. (Image creation: NEUROFLASH AI, Image-Flash, 2024-1-26)