It has become a common starting point to use electronic devices and online encyclopedias to search for definitions, so let us do just that for artificial intelligence. For the query “artificial intelligence”, the open platform Wikipedia returns the following statement as a definition: “AI … is intelligence exhibited by machines, particularly computer systems …”. It is not human intelligence, but it tries to emulate it or even to improve on it. Part of any definition is also the range of applications: a broad range of scientific fields, economic sectors, and public and private spheres of life. This shows the enormous scope of applications, which keeps growing rapidly with the ease of access to AI software and applications.
How does AI define itself? How is AI defined by AI? Putting the question to ChatGPT 3.5 in April 2024, I got the following prompt reply.
(See image). ChatGPT provides a more careful definition than the “crowd” or networked intelligence of Wikipedia. AI only “refers to the simulation of HI processes by machines”. Examples of such HI processes include solving problems and understanding language. In doing so, AI creates systems and performs tasks that usually, or until now, required HI. There seems to be a technological openness embedded in the definition of AI by AI that is not bound to legal restrictions on its use. The learning-systems approach may or may not respect the restrictions set for these systems by HI. Or do such systems also learn how to circumvent the restrictions that HI systems set to limit AI systems? For the time being we test the boundaries of such systems in multiple fields of application, from autonomous driving and video surveillance to marketing tools and public services. Potentials as well as risks will be defined in more detail in this process of technological development. Society has to accompany this process with high priority, since fundamental human rights are at issue. The potential for assisting humans is equally large. The balance will be crucial.
AI Sorting
Algorithms do the work behind AI systems. Therefore a basic understanding of how algorithms work is helpful to gauge the potential, risks and performance of such systems. The speed of computers determines, for example, the amount of data you can sort in a reasonable time. The efficiency of the algorithm is another factor. Here we go: we are already a bit absorbed in sorting as a purely intellectual exercise. The website of Darryl Nester shows a playful programming exercise that sorts the numbers from 1 to 15 quickly (Link to play sorting). If you watch the sorting as it runs, you realise that programs are much faster than us at such simple numeric tasks. Now think of applying this sorting routine, or algorithm, to a process of social sorting. The machine will sort social desirability scores of people’s behaviour in the same simple fashion, even for thousands of people. Whether proposed AI systems in human interaction or in human resource departments make use of such sorting algorithms we do not know. Sorting applicants is a computational task, but the data input of personal characteristics is derived from another, more or less reliable, source. Hence, the use of existing and newly available databases will create or eliminate bias. Watching sorting algorithms perform is an important learning experience if we want to critically assess what is likely to happen behind the curtains of AI.
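The step from sorting numbers to sorting people is technically trivial, which is exactly the point. The following minimal sketch applies an ordinary sorting routine to invented “social desirability” scores; all names and numbers are illustrative, not taken from any real system:

```python
# A generic sorting routine applied to "social sorting":
# the same algorithm that orders numbers ranks people by a score.
# All names and scores are invented for illustration.

def sort_by_score(applicants):
    """Return applicants ordered from highest to lowest score."""
    return sorted(applicants, key=lambda person: person["score"], reverse=True)

applicants = [
    {"name": "A", "score": 62},
    {"name": "B", "score": 91},
    {"name": "C", "score": 78},
]

ranking = sort_by_score(applicants)
print([p["name"] for p in ranking])  # highest score first: ['B', 'C', 'A']
```

The sort itself is neutral; whatever bias there is lives in the scores fed into it, which is why the source of the data input matters so much.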

AI and dialect
The training of Large Language Models (LLMs) uses large data sets to learn conventions about which words are combined with each other and which ones are less frequently employed in conjunction. Therefore, it does not really come as a surprise that training on the standardised language of American English might not be as valid for applications that receive input from minority languages or dialects. The study by Hofmann et al., forthcoming in the field of computer science and language (Link), provides evidence of systematic bias against African American dialects in these models. Dialect prejudice remains a major concern in AI, just as in the day-to-day experiences of many people speaking a dialect. The study highlights that dialect speakers are more likely to be assigned less prestigious jobs if AI is used to sort applicants. Similarly, criminal sentences will be harsher for speakers of African American English. Even a more frequent attribution of death sentences to dialect speakers was evidenced.
If we translate this evidence to widespread applications of AI in the workplace, we realise that there are severe issues to resolve. The European Trade Union Confederation (ETUC) has flagged the issue for some time (Link) and made recommendations on how to address these shortcomings. Human control and co-determination by employees are crucial in these applications to the world of work and employment. The need to justify decision-making concerning hiring and firing limits discrimination in the workplace. This needs to be preserved in the 21st century when collaborating with AI. Language barriers like dialects or multiple official languages in a country call for a reconsideration of AI to avoid discrimination. Legal systems have to clarify the responsibilities of AI applications before too much harm has been caused.
There is also huge potential for AI in the preservation of dialects, or in interacting in a dialect. Cultural diversity may be preserved more easily, but discriminatory practices have to be eliminated from the basis of these models. Otherwise they become a severe legal risk for the people, companies or public services that apply these large language models without careful scrutiny.
(Image AI BING Designer: 3 robots are in an office. 2 wear suits. 1 wears folklore dress. All speak to each other in a meeting. Cartoon-like style in futuristic setting) 
Energy Storage
On a sunny and windy day, even in winter or spring, renewable energy is abundant. If demand is stable, prices will drop. Prices will rise again as demand for energy picks up. Hence, this is an obvious case for trading opportunities. All you need is … energy storage. All so-called prosumers, short for simultaneous producers and consumers, have a lot to gain if they are able to store energy when it is abundant and cheap, sell it when it is expensive, or use it themselves when needed. Just keep an eye on the costs of energy storage. A stylish insulated carafe is a well-known example of storing hot water for an astonishingly long time; insulation is key to storing transformed electric energy here. Other options use kinetic energy, like pumping water to a higher level and generating electricity again when the water returns to the lower level. Of course, batteries are a simple means of energy storage as well. Costs seem to be coming down rapidly, and less environmentally hazardous materials leave the laboratory almost every month. It is about time to consider this seriously. More and more cities have understood that energy storage can generate cash for them (Example Feuchtwangen) and appears to be a worthwhile investment for a local power-generating community. For the time being my favorite energy storage is the insulated carafe. It is often the beginning of energizing conversations.
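The prosumer's "buy cheap, sell dear, mind the storage cost" logic can be put into a back-of-the-envelope calculation. All prices, the round-trip efficiency, and the storage cost below are invented, illustrative numbers, not market data:

```python
# Back-of-the-envelope sketch of a prosumer's storage arbitrage.
# All figures are invented for illustration.

def arbitrage_profit(kwh, buy_price, sell_price, round_trip_efficiency, storage_cost_per_kwh):
    """Profit from storing energy bought cheap and selling it when expensive."""
    recovered = kwh * round_trip_efficiency          # energy left after conversion losses
    revenue = recovered * sell_price
    cost = kwh * buy_price + kwh * storage_cost_per_kwh
    return revenue - cost

# Store 10 kWh bought at 0.05 EUR/kWh, sell at 0.30 EUR/kWh,
# with 90 % round-trip efficiency and 0.02 EUR/kWh storage cost:
# 9 kWh x 0.30 = 2.70 EUR revenue, against 0.70 EUR of costs.
print(arbitrage_profit(10, 0.05, 0.30, 0.9, 0.02))   # roughly 2.0 EUR
```

The same formula shows when storage does not pay: if the price spread shrinks below the storage cost plus the conversion losses, the profit turns negative.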

AI Collusion
In most applications of AI there is one AI system, for example a specialized service, that performs in isolation from other services. More powerful systems, however, allow for the combination of AI services. This may be useful when integrating services focused on specialized sensors to gain a more complete impression of the performance of a system. As soon as two or more AI systems become integrated, the risk of unwanted or illegal collusion arises.
Collusion is defined in the realm of economic theory as the secret, undocumented, often illegal restriction of competition originating from at least two otherwise rival competitors. In the realm of AI, collusion has been defined by Motwani et al. (2024) with reference to “teams of communicating generative AI agents [that] solve joint tasks”. The cooperation of agents, as well as the sharing of previously exclusive information, increases the risks of violating rights of privacy or security. AI-related risks also include the dilution of responsibility: it becomes more difficult to identify the origin of fraudulent use of data such as personal information or contacts. Just imagine, as a simplified example, Alexa and Siri talking to each other to develop another integrated service.
The use of steganography techniques, i.e. the secret embedding of code into an AI system or into distributed images, can protect authorship as well as open doors for fraudulent applications. The collusion of AI systems will blur legal borders and create multiple new issues to resolve in the construction and implementation of AI agents. New issues of trust in technologies will arise if no common standards and regulations are defined. We seem to be just at the entry to the brave new world, or 1984 in 2024.
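To make the steganography idea concrete, here is a toy sketch of the classic least-significant-bit technique: a message is hidden in the lowest bit of each byte of a carrier, with a plain bytearray standing in for image pixel data. This is an illustration of the principle only, not a real watermarking or AI-to-AI channel:

```python
# Toy least-significant-bit (LSB) steganography.
# A bytearray stands in for image pixel data; only the lowest bit
# of each carrier byte is changed, so the "image" looks unchanged.

def embed(carrier: bytearray, message: bytes) -> bytearray:
    """Hide message bits (LSB-first per byte) in the carrier's lowest bits."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit   # overwrite lowest bit only
    return out

def extract(carrier: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes from the carrier's lowest bits."""
    bits = [carrier[pos] & 1 for pos in range(length * 8)]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

pixels = bytearray(range(256))           # fake 256-byte "image"
stego = embed(pixels, b"hi")
print(extract(stego, 2))                 # recovers b'hi'
```

No byte of the carrier changes by more than one, which is why such channels are hard to spot and why both watermarking and covert collusion can ride on the same mechanism.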
(Image: KI MS-Copilot: Three smartphones in form of different robots stand upright on a desk in a circle. Each displays text on a computer image.)

AI input
AI is crucially dependent on the input it is built on. This has been the foundational principle of powerful search engines like Google, which have come to dominate the commercial part of the internet. Crawling pages on the world wide web and classifying/ranking them according to a number of criteria has been the successful business model. Content production was, and is, done by billions of people across the globe. Open access increases the amount of data available.
The business case for AI is not much different. On the 30th anniversary of the “Robots Exclusion Standard” we have to build on these original ideas and rethink our input strategies for AI as well. If there are parts of our input we do not want AI to use in its algorithms, we have to put up red flags in the form of unlisting parts of the information we allow for public access. This is standard routine, we might believe, but keeping everything in the cloud might have made it much easier for owners of the cloud space to “crawl” your information, pictures or media files. Some owners of big data collections have decided to sell access to, and use of, their treasures. AI can then learn from these data.
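The Robots Exclusion Standard mentioned above is just a plain-text file on a website, and Python's standard library can read it. The sketch below checks whether a crawler may fetch a page; the robots.txt content and the "ExampleAIBot" user agent are invented for illustration, while `urllib.robotparser` is the real standard-library parser:

```python
# How a well-behaved crawler consults the Robots Exclusion Standard
# before fetching a page. The rules and the bot names are invented;
# the parser is Python's standard urllib.robotparser.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /private/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("ExampleAIBot", "https://example.com/private/cv.html"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/private/cv.html"))      # True
```

The catch, of course, is that the standard is purely advisory: it only keeps out crawlers that choose to check it, which is precisely why the red flags alone may not be enough.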
The restrictions also become clear. More up-to-date information might not be available for AI treatment; AI might lack the most recent information if it is a kind of breaking news. The strength of AI lies in the size of the data input it can handle, treat or recombine. The deficiency of AI is not knowing whether the information it uses (in its database) is valid or trustworthy. Input that is wrong or outdated due to a legal or just-in-time change will be beyond its scope. Therefore, the algorithms carry a latent risk, i.e. a bias towards the status quo. But learning algorithms can deal with this and come up with continued learning or improvement of routines. In such a process it is crucial to have ample feedback on the valid or invalid outcomes of the algorithm. Controlling and evaluating outcomes becomes the complementary task for humans as well as AI. Checks and balances, like in democratic political systems, become more and more important.

