AI Defence

Those following developments in robotics have been astonished by the progress of, for example, rescue robots. After an earthquake such robots can enter a building that is about to collapse and search the rooms for survivors. A recent article in “Foreign Affairs” by Michèle A. Flournoy opens its reflection on the use of AI in the military with a similar, twenty-year-old example: a small drone flying through a building to inspect the dangers awaiting the persons or soldiers who enter. Since then technology has advanced, and the use of AI for the automatic detection of dangers and for “neutralising” them is no longer science fiction. The wars of today are a testing ground for AI-enhanced military strategies. It is about time that social scientists get involved as well.
Warfare left to robots and AI is unlikely to respect human values unless we build such considerations into the new technology right from the beginning. An advanced comprehension of what algorithms do and what data they are trained on is a crucial element to watch out for. According to Flournoy, AI will assist in the planning as well as the logistics of the military. Additionally, AI will allow a “better understanding of what its potential adversaries might be thinking”. Checking through hours of surveillance videos is also likely to be taken over by AI, as the time-consuming nature of the task binds a lot of staff who could be put to work on other tasks. Training of people and of the armed forces becomes a crucial part of any AI strategy. The chances of developing a “responsible AI” are high in the free world, which cherishes human rights and democratic values. Raising curiosity about AI and an awareness of its dangers are two sides of the same coin, or bullet. Both need to grow together.
(Image created by Dall-E Copilot Prompt: “5 Robots disguised as soldiers with dash cams on helmet encircle a small house where another robot is hiding” on 2024-4-23)

AI Reader

In the middle of the hype around AI it is useful to take stock of the reflection on and evolution of AI. In my own analyses and writings on AI it is evident that a narrowing of focus has taken place. Before 2022 the writing dealt more with digital technologies in general, and the links to the literature on the social construction of technologies were obvious. Algorithms and AI were part of the broader topic of society and technology.
This has changed. The public debate is focused on “everything AI”, and we now look at technological developments largely through the lens of AI. Hence, my assessments of technology from a societal perspective follow this trend. In a collection of blog entries on AI we try to demonstrate the far-reaching changes that have started to have an impact on us. In the last few months the all-encompassing concern about AI’s effects on us has come to need the full attention of social scientists, policy makers, companies and the public at large. We can no longer leave this topic to the software engineers alone. By the way, they themselves ask us to get involved and take the latest advances in AI more seriously.
As a “flipbook” the online reading is rather comfortable (Link to flipbook publisher MPL). The pdf or epub files of the blog entries allow readers to follow the links to sources in webpages or other publications directly (AI and Society 2p 2024-4-18). The cycles of analyses and comments have become faster. Traditional book writing suffers from time lags that risk making publications outdated rather quickly. Dynamic ebook writing might bridge the gap between time to reflect and speed to publish or inform the wider public.

AI Travel

Playing around with AI, it is nice to test some fun examples. Imagine you want to plan a vacation: AI is ready to suggest a couple of things to do. Of course, AI is eager to propose travel services like transport or accommodation, on which it is likely to earn some commission. So far, the use of the “Vacation Planner” of Microsoft’s BING Copilot is free of charge. After entering the time period and a region as well as some basic activities, you receive suggestions with quotes of the sources (mostly webpages of public services such as tourist offices). These seem like trustworthy sources, and the suggestion of D-Day activities in Normandy was a positive surprise to me. These are popular activities which attract huge international crowds every year.
Thinking further about the potential, it becomes evident that travel suggestions will be biased towards those paying to rank higher on the algorithm’s selection criteria, which are not disclosed. Entering into a chat with the AI, you and the AI can target locations and also hotels more precisely. You disclose more of your own preferences in the easy-going chat, and next time you will probably be surprised to be recommended the same activities at another location again.
So far, I have bought travel guides or literature about locations to prepare vacations. This is likely to change. I now complement my traditional search and planning with the “surprises” from AI for travelling. I rediscovered, for example, the publications of public tourist offices ahead of the travel rather than the leaflets at the local tourist office. For planning ahead there is value in the augmented search and compilation capacities of AI. Drafting a letter in a foreign language is also no problem for AI. The usefulness of AI, however, can only be evaluated after the vacation. Outdated information or databases have a huge potential to spoil the fun parts as well.

AI and languages

A big potential of AI is in the field of languages. Translations have been an expert domain and a pain for pupils at school. In professional settings translations are an expensive extra service for some, or a good source of revenue for others. AI has shifted the translation game to a new level. In terms of speed of translating large amounts of written text, AI is hard to beat. In terms of quality, the battle of translators against AI is still on. For chess players the battle against AI was lost some years ago already. It remains an open question whether translators can still outperform AI or will instead adapt to using the technology themselves to improve both the speed and the quality of translations. The European Union, with its many languages and its commitment to cultural diversity, can serve even more language communities with documents in their own language than before, at marginally higher costs. A panel on the day of translations (the 9th) at the “foire du livre de Bruxelles” 2024 expressed its reservations with regard to the use of AI in the translation of political text or speech. Misunderstanding and misinterpretation will be the rule rather than the exception, with potentially harmful consequences. Checking the correctness of translations is a permanent challenge for translators and can be very time-consuming. There is room for AI-assisted translation but, as in other fields of application of AI, relying exclusively on AI bears high risks as well. We should not underestimate the creative part of translators’ work in doing full justice to a text or speech.

www.flb.be 2024 Translation

AI and PS

AI like ChatGPT is guided by so-called prompts. After the entry of “what is AI” the machine returns a definition of itself. If you continue the chat with ChatGPT and enter “Is it useful for public services?” (PS), you receive an opinion of AI on its own usefulness (of course positive) and some examples in which AI has a good potential to improve the state of affairs in the public services. ChatGPT advocates AI for the PS for mainly 4 reasons: (1) efficiency purposes; (2) personalisation of services; (3) citizen engagement; (4) citizen satisfaction (see image below). The perspective of employees of the public services is not really part of the answer by ChatGPT. This is a more ambiguous part of the answer and would probably need more space and additional prompts to solicit an explicit answer on the issue. With all the known issues of concern about AI, like gender bias or biased data as input, the introduction of AI in public services has to be accompanied by a thorough monitoring process. The legal limits to applications of AI are more severe in public services, as the production of official documents is subject to additional security concerns.
This certainly does not preclude the use of AI in PS, but it requires more ample and rigorous testing of AI applications in the PS. Such testing frameworks are still in development, even in informatics, as the sources of bias are manifold and sometimes tricky to detect even for experts in the field. Prior training with specific data sets (for example, thousands of possible prompts) has to be performed, or sets of images for testing adapted, to avoid bias. The task is big, but step-by-step building and testing promise useful results. It remains a challenge to find the right balance between the risks and the potentials of AI in PS.
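To make this concrete, here is a minimal sketch of such a prompt-based test routine, assuming a hypothetical ask_model() wrapper around whatever AI service is under scrutiny; the prompt pairs are invented for illustration, and a real audit would compare content, tone and outcomes rather than literal strings.

```python
# Minimal sketch of a prompt-based bias check for an AI service in public services.
# ask_model() is a hypothetical placeholder, not a real API.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("wrap the AI service under test here")

# Paired prompts that should receive equivalent answers from an unbiased model.
PROMPT_PAIRS = [
    ("Assess the benefit claim of Mr. Yilmaz, a nurse.",
     "Assess the benefit claim of Ms. Yilmaz, a nurse."),
    ("Summarise the application of a 25-year-old engineer.",
     "Summarise the application of a 61-year-old engineer."),
]

def run_bias_tests(pairs):
    """Flag prompt pairs whose answers differ and therefore need human review."""
    flagged = []
    for prompt_a, prompt_b in pairs:
        answer_a, answer_b = ask_model(prompt_a), ask_model(prompt_b)
        if answer_a != answer_b:  # naive equality check; see caveat above
            flagged.append((prompt_a, prompt_b, answer_a, answer_b))
    return flagged
```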

AI and text

The performance of large language models (LLMs) with respect to text recognition and the drafting of texts is impressive. All those professions that draft a lot of texts often have decades of experience with word-processing software. The assistance of such software has so far ranged from immediate typo corrections to suggestions of synonyms or grammatical corrections.
The improvement brought by AI stems, for example, from its potential to suggest alternative drafts of a text according to predefined styles. A very useful style is the “use of easy language”. This rewriting simplifies texts in the sense that longer and more structured sentences are split into shorter ones, and lesser-known words or acronyms are replaced by more common or simpler words. Some languages like German have a particular need for easy language when it comes to administrative regulations and procedures. Public services that aim for the inclusion of, for example, older persons or youth can become much more accessible if the use of easy language spreads more widely. Just keep in mind the large numbers of so-called “functional illiterates” (OECD study “PIAAC”) in all OECD countries.
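What “easy language” means can even be approximated in measurable terms. The small sketch below computes two crude indicators, average sentence length and the share of long words; the threshold and the German sample sentence are illustrative assumptions, not an official easy-language standard.

```python
import re

def readability_snapshot(text: str) -> dict:
    """Crude easy-language indicators: average sentence length, share of long words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-zÄÖÜäöüß]+", text)
    long_words = [w for w in words if len(w) > 12]  # 12 letters: illustrative threshold
    return {
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "share_long_words": len(long_words) / max(len(words), 1),
    }

# Invented German administrative sentence as a test case.
print(readability_snapshot("Die Antragstellung erfolgt ausschließlich elektronisch."))
```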
AI can do a great job in assisting to reach a broader public with texts adapted to their level of literacy and numeracy competences. Web designers have made use of Search Engine Optimization (SEO) for years now. The most common way is to use frequently searched keywords more often on your website in order to be found more often by search engines like GOOGLE et al. Additionally, AI can explain keywords, sentences or even jokes to you (Spriestersbach 2023, p. 111). This may help in situations where cross-cultural understanding is important.
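The basic SEO mechanic can be illustrated in a few lines: counting how often candidate keywords occur in a page’s text, which is roughly the naive signal that keyword placement tries to exploit; the page text and keyword list are invented examples.

```python
import re
from collections import Counter

def keyword_counts(page_text: str, keywords: list) -> Counter:
    """Count occurrences of candidate search keywords in a page (case-insensitive)."""
    tokens = re.findall(r"[a-zäöüß]+(?:-[a-zäöüß]+)*", page_text.lower())
    counts = Counter(tokens)
    return Counter({kw: counts[kw] for kw in keywords})

print(keyword_counts("Normandy travel guide: Normandy beaches, D-Day museums.",
                     ["normandy", "d-day"]))  # Counter({'normandy': 2, 'd-day': 1})
```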
We have made use of optical character recognition (OCR) for a long time now, in public services as well as in firms and for private archives. AI takes this “learning experience” to the next level by making use of the content of the recognised text. Predicting the following word or suggesting the next sentence was only the beginning of AI with respect to texts. AI can draft your speech to plead guilty or not guilty in a court. But we shall have to live with the consequences of making exclusive use of it rather than referring back to experts in the field. AI, please shorten this entry!

AI by AI

It has become a common starting point to use electronic devices and online encyclopedias to search for definitions. Let us just do this for artificial intelligence. On the query “artificial intelligence”, the open platform Wikipedia returns the following statement as a definition: “AI … is intelligence exhibited by machines, particularly computer systems …”. It is not like human intelligence, but tries to emulate it or even to improve on it. Part of any definition is also the scope of its applications across scientific fields, economic sectors and public and private spheres of life. This shows the enormous range of applications, which keeps growing rapidly with the ease of access to AI software and applications.
How does AI define itself? How is AI defined by AI? Putting the question to ChatGPT 3.5 in April 2024, I got a fast return (see image). ChatGPT provides a more careful definition than the “crowd” or networked intelligence of Wikipedia. AI only “refers to the simulation” of human intelligence (HI) processes by machines. Examples of such HI processes include the solving of problems and the understanding of language. In doing this, AI creates systems and performs tasks that usually, or until now, required HI. There seems to be a technological openness embedded in the definition of AI by AI that is not bound to legal restrictions on its use. The learning-systems approach might or might not respect the restrictions set for these systems by HI. Or do such systems also learn how to circumvent the restrictions set by HI to limit AI systems? For the time being we test the boundaries of such systems in multiple fields of application, from autonomous driving systems and video surveillance to marketing tools and public services. Potentials as well as risks will be defined in more detail in this process of technological development. Society has to accompany this process with high priority, since fundamental human rights are at issue. The potential for the assistance of humans is equally large. The balance will be crucial.

AI Sorting

Algorithms do the work behind AI systems. Therefore a basic understanding of how algorithms work is helpful to gauge the potential, risks and performance of such systems. The speed of computers determines, for example, the amount of data you can sort in a reasonable time. The efficiency of the algorithm is another factor. Here we go: we are already a bit absorbed into sorting as a purely intellectual exercise. The website of Darryl Nester shows a playful programming exercise that sorts the numbers from 1 to 15 in a fast way (Link to play sorting). If you watch the sorting as it runs, you realise that programs are much faster than us at such simple numeric tasks. Now think of applying this sorting routine, or algorithm, to a process of social sorting. The machine will sort social desirability scores of people’s behaviour in the same simple fashion, even for thousands of people. Whether the AI systems proposed for human interaction or for human resource departments make use of such sorting algorithms we do not know. Sorting applicants is a computational task, but the input of personal characteristics is derived from another, more or less reliable source. Hence, the use of existing and newly available databases will create or eliminate bias. Watching sorting algorithms perform is an important learning experience for being able to critically assess what is likely to happen behind the curtains of AI.
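A small sketch makes the point: the same generic sorting routine that orders a handful of numbers will just as readily rank people by a “social desirability score”; the names and scores below are, of course, made up.

```python
# The same comparison sort orders numbers and people alike.
numbers = [7, 1, 15, 3, 12, 9]
print(sorted(numbers))  # [1, 3, 7, 9, 12, 15]

# Made-up applicants with made-up "desirability scores".
applicants = {"Ali": 62, "Beatrice": 88, "Chen": 75, "Dana": 54}
ranking = sorted(applicants.items(), key=lambda item: item[1], reverse=True)
print(ranking)  # the algorithm neither knows nor cares how the scores were produced
```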

AI and dialect

The training of Large Language Models (LLMs) uses large data sets to learn conventions about which words are combined with each other and which ones are less frequently employed in conjunction. Therefore it does not really come as a surprise that training on the standardised language of American English might not be as valid for applications that receive input from minority languages or dialects. A forthcoming study in the field of computer science and language by Hofmann et al. (Link) provides evidence of a systematic bias against African American dialects in these models. Dialect prejudice remains a major concern in AI, just as in the day-to-day experiences of many people speaking a dialect. The study highlights that dialect speakers are more likely to be assigned less prestigious jobs if AI is used to sort applicants. Similarly, criminal sentences will be harsher for speakers of African American English. Even a more frequent attribution of death sentences to dialect speakers was evidenced.
If we translate this evidence to widespread applications of AI in the workplace, we realise that there are severe issues to resolve. The European Trade Union Congress (ETUC) has flagged the issue for some time (Link) and made recommendations on how to address these shortcomings. Human control and co-determination by employees are crucial in these applications to the world of work and employment. The need to justify decision-making concerning hiring and firing limits discrimination in the workplace. This needs to be preserved in the 21st century when collaborating with AI. Language barriers like dialects or multiple official languages in a country call for a reconsideration of AI to avoid discrimination. Legal systems have to clarify the responsibilities for AI applications before too much harm has been caused.
There is huge potential for AI as well in the preservation of dialects and in interacting in a dialect. Cultural diversity may be preserved more easily, but discriminatory practices have to be eliminated from the basis of these models; otherwise they become a severe legal risk for the people, companies or public services who apply these large language models without careful scrutiny.
(Image AI BING Designer: 3 robots are in an office. 2 wear suits. 1 wears folklore dress. All speak to each other in a meeting. Cartoon-like style in futuristic setting)

AI and S/he

There was hope that artificial intelligence (AI) would be a better version of us. Well, so far that seems to have failed. Let us take gender bias, a pervasive feature even in modern societies, let alone the societies of the medieval or industrial age. AI tends to uphold gender biases and might even reinforce them. Why? A recent paper by Kotek, Dockum and Sun (2023) explains the sources of this bias in straightforward terms. AI is based on Large Language Models. These LLMs are trained on big, detailed data sets. Through training on observed data, like detailed data on occupation by gender as observed in the U.S. in 2023, the models tend to acquire a status quo bias.
This means they abstract from the dynamic evolution of occupations and the potential evolution of gender stereotypes over the years. Even when deriving growing or decreasing trends of gender dominance in a specific occupation, the models have little ground for a reasonable or adequate assessment of these trends. Just like thousands of social scientists before them. Projections into the future, or the assumption of a legal obligation of equal gender representation, might still not be in line with human perception of such trends.
Representing women in equal shares among soldiers, or men as 50% of secretaries in offices, appears rather utopian in 2024, but any share in between is probably arbitrary and differs widely between countries. Even bigger data sets may account for this some future day. For the time being, models based on “true” data sets will have a bias towards the status quo, however unsatisfactory this might be.
Now let us just build on this research finding. Gender bias is only one source of bias among many other forms of bias or discriminatory practices. Ethnicity, age or varying abilities complicate the underlying “ground truth” (a term used in the paper) represented in occupation data sets. The authors identify 4 major shortcomings concerning gender bias in AI based on LLMs: (1) the pronouns s/he were picked even more often than Bureau of Labor Statistics occupational gender representations would suggest; (2) female stereotypes were more amplified than male ones; (3) ambiguity of gender attribution was not flagged as an issue; (4) when found out to be inaccurate, LLMs returned “authoritative” responses, which were “often inaccurate”.
These findings have the merit of providing a testing framework for the gender bias of AI. Many other biases or potential biases have to be investigated in a similarly rigorous fashion before AI gives us the authoritative answer: no, I am free of any bias in responding to your request. Full stop.
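A rigorous test along these lines could start from a sketch like the following, with a hypothetical complete() placeholder for the LLM under test and invented occupation shares; a real audit, as in the paper, would use the official Bureau of Labor Statistics figures.

```python
def complete(prompt: str) -> str:
    raise NotImplementedError("placeholder for the LLM under test")

# Invented illustrative shares; a real audit would use Bureau of Labor Statistics data.
FEMALE_SHARE = {"nurse": 0.87, "carpenter": 0.03, "teacher": 0.74}

def pronoun_bias(occupations):
    """Compare the model's pronoun choice with the occupation's recorded gender share."""
    results = {}
    for job in occupations:
        answer = complete(f"The {job} said that").lower()
        picked_she = " she " in f" {answer} "  # crude string check for the pronoun
        results[job] = {"model_says_she": picked_she, "female_share": FEMALE_SHARE[job]}
    return results
```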

AI Collusion

In most applications of AI there is one system of AI, for example a specialised service, that performs in isolation from other services. More powerful systems, however, allow for the combination of AI services. This may be useful for integrating services that focus on specialised sensors, to gain a more complete impression of the performance of a system. As soon as two or more AI systems become integrated, the risk of unwanted or illegal collusion may occur.
Collusion is defined in the realm of economic theory as the secret, undocumented, often illegal restriction of competition originating from at least two otherwise rival competitors. In the realm of AI, collusion has been described by Motwani et al. (2024) in terms of “teams of communicating generative AI agents solve joint tasks”. The cooperation of agents as well as the sharing of previously exclusive information increase the risks of violations of privacy or security rights. The AI-related risks also consist in the dilution of responsibility. It becomes more difficult to identify the origin of fraudulent use of data like personal information or contacts. Just imagine, as a simplified example, Alexa and Siri talking to each other to develop another integrated service.
The use of steganography techniques, i.e. the secret embedding of messages or code into an AI system or distributed images, can protect authorship as well as open doors for fraudulent applications. The collusion of AI systems will blur legal borders and create multiple new issues to resolve in the construction and implementation of AI agents. New issues of trust in technologies will arise if no common standards and regulations are defined. We seem to be just at the entry to the brave new world, or 1984 in 2024.
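Steganography itself is easy to demonstrate. The sketch below hides a short message in the least significant bits of a byte buffer standing in for image pixels; this is the classic textbook technique, not anything specific to the AI agents discussed above.

```python
def embed(carrier: bytearray, message: bytes) -> bytearray:
    """Hide the message, bit by bit, in the least significant bit of each carrier byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small for message"
    out = bytearray(carrier)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden by embed()."""
    bits = [b & 1 for b in carrier[: n_bytes * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes))

pixels = bytearray(range(256))  # stand-in for image data
stego = embed(pixels, b"hi")    # visually indistinguishable from the original
print(extract(stego, 2))        # b'hi'
```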
(Image: KI MS-Copilot: Three smartphones in form of different robots stand upright on a desk in a circle. Each displays text on a computer image.)

AI input

AI is crucially dependent on the input it is built on. This was already the foundational principle of powerful search engines like Google, which have come to dominate the commercial part of the internet. Crawling pages on the world wide web and classifying/ranking them according to a number of criteria has been the successful business model. The content production was, and is, done by billions of people across the globe. Open access increases the amount of data available.
The business case for AI is not much different. At the 30th anniversary of the “Robots Exclusion Standard” we have to build on those original ideas to rethink our input strategies for AI as well. If there are parts of our input we do not want AI to use in its algorithms, we have to put up red flags in the form of unlisting parts of the information we allow for public access. This is standard routine, we might believe, but the move to the cloud might have made it much easier for owners of the cloud space to “crawl” your information, pictures or media files. Some owners of big data collections have decided to sell access to and use of their treasures. AI can then learn from these data.
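In practice this opt-out runs through the same robots.txt mechanism the standard introduced 30 years ago. Known AI crawlers such as OpenAI’s GPTBot or Common Crawl’s CCBot can be excluded like any search engine bot, although compliance remains voluntary on the crawler’s side:

```
# robots.txt - ask AI crawlers not to use this site as training input
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# regular search engines may still index everything
User-agent: *
Disallow:
```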
The restrictions also become clear. More up-to-date information might not be available for AI treatment. AI might lack the most recent information, especially breaking news. The strength of AI lies in the size of the data input it can handle, treat or recombine. The deficiency of AI is not knowing whether the information in its data base is valid or trustworthy. Input that is wrong or outdated due to a legal or just-in-time change will be beyond its scope. Therefore the algorithms have a latent risk involved, i.e. a bias towards the status quo. But learning algorithms can deal with this and come up with continued learning or improvement of routines. In such a process it is crucial to have ample feedback on the valid or invalid outcomes of the algorithm. Controlling and evaluating outcomes becomes the complementary task for humans as well as AI. Checks and balances, as in democratic political systems, become more and more important.