AI has entered car racing, after decades in which we have raced horses, dogs and camels. The driving force behind all these races is the huge gambling market: anything you can bet on will do for juicy profits in that industry. The recent “Abu Dhabi Autonomous Racing League” is the latest addition to the racing craze. With 600,000 spectators at its peak on video and gaming platforms, the move online makes the investment seem promising. The only problem: AI is not yet ready to really compete with the world of real drivers. The progress, however, is astonishing. A single two-minute lap of the circuit yields 15 terabytes of data from 50 sensors. These are closed circuits, so no person or animal can get in the way. The challenge of integrating more data, faster processing and algorithms for rapid decision-making is steep, and it offers great learning opportunities for advances in robotics. The hype has not lived up to expectations, as no real racing has taken place yet.

We have replaced the gladiators of the Roman empire with Formula 1 drivers. It is only fair to retire those drivers soon and let AI cars race against each other. It feels like a computer game on screen, and it is, as we shall most likely watch these races on a screen as well. Hence, what is the point? Watching youth play racing games on Twitch will probably not change the viewing behaviour of the masses. The programmers nevertheless have great learning opportunities and will rapidly find their way into the job market. The other challenges of ASPIRE, such as human rescue and food for the growing world population, seem more important for humanity. In the meantime, let the boys play around with cars and learn about the potential as well as the failures of AI programming, and about dealing with both.









The AI ChatGPT advocates AI for public services (PS) for mainly four reasons: (1) efficiency; (2) personalisation of services; (3) citizen engagement; (4) citizen satisfaction (see image below). The perspective of public-service employees is not really part of ChatGPT’s answer. This is a more ambiguous part of the question and would probably need more space and additional explicit prompts to solicit an explicit answer on the issue. Given all the known concerns about AI, such as gender bias or biased input data, the introduction of AI in public services has to be accompanied by a thorough monitoring process. The legal limits on applications of AI are stricter in public services, as the production of official documents is subject to additional security concerns.
(See image.) ChatGPT provides a more careful definition than the “crowd” or networked intelligence of Wikipedia. AI merely “refers to the simulation of HI [human intelligence] processes by machines”. Examples of such HI processes include solving problems and understanding language. In doing so, AI creates systems and performs tasks that usually, or until now, required HI. There seems to be a technological openness embedded in this definition of AI by AI that is not bound to legal restrictions on its use. The learning-systems approach might or might not make it possible to respect the restrictions that HI sets on such systems. Or do such systems also learn how to circumvent the restrictions that HI systems set in order to limit AI systems? For the time being we test the boundaries of such systems in multiple fields of application, from autonomous driving and video surveillance to marketing tools and public services. Potentials as well as risks will be defined in more detail in this process of technological development. Society has to accompany this process with high priority, since fundamental human rights are at issue. The potential for assisting humans is equally large. The balance will be crucial.



