Ask people about the differences between private and public hospitals, and you will most likely hear that private hospitals deliver superior patient outcomes. Whereas private hospitals tend to enjoy a positive image, public hospitals commonly carry a negative stigma. Scientific evaluations help to set the record straight. A study published in "The Lancet Regional Health" in 2024 shows that simple descriptive statistics on several patient outcome indicators for the years 2016 to 2019 seem to support this view. A more precise statistical analysis, however, reveals that there is also selective admission to private and public hospitals in England. Using so-called instrumental variables approaches, which account for the selection process governing admission to the two types of hospitals (private versus public), most of the differences between the hospital types disappear. The underlying mechanism is a sorting of different patients into private or public hospitals. Put simply, for routine interventions people tend to choose a private hospital, whereas the rarer and more difficult operations are more likely to be admitted to public hospitals. The number of co-morbidities (for example, heart disease) also matters, as they may negatively affect patient outcomes. Jumping to conclusions and reinforcing stigma about public or private provision of services hinders progress and an equitable provision of services.
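The intuition behind the instrumental variables correction can be sketched in a few lines of code. The simulation below is purely illustrative and not taken from the study; all variable names and numbers (a synthetic "distance to the private hospital" instrument, an unobserved severity score) are assumptions. It shows how a naive comparison flatters private hospitals when healthier patients sort into them, while a two-stage least squares (2SLS) estimate, built on a variable that shifts hospital choice but not outcomes, removes that bias.

```python
# Illustrative simulation of selection bias and the 2SLS correction.
# All names and numbers are invented; this is not the study's model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

severity = rng.normal(size=n)   # unobserved case severity
distance = rng.normal(size=n)   # instrument: distance to the private hospital
# Healthier patients who live close to a private hospital tend to choose it.
private = ((severity + distance + rng.normal(size=n)) < 0).astype(float)
# The true causal effect of private care is set to zero in this simulation.
outcome = 0.0 * private - 1.0 * severity + rng.normal(size=n)

def slope(y, x):
    """Slope coefficient from a simple regression of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive comparison: private patients are healthier on average, so private
# care looks beneficial even though its true effect is zero.
naive = slope(outcome, private)

# 2SLS: stage 1 predicts hospital choice from the instrument alone;
# stage 2 regresses the outcome on the predicted choice.
Z = np.column_stack([np.ones(n), distance])
private_hat = Z @ np.linalg.lstsq(Z, private, rcond=None)[0]
iv = slope(outcome, private_hat)

print(f"naive estimate: {naive:+.2f}")  # substantially positive (biased)
print(f"2SLS estimate:  {iv:+.2f}")     # close to the true effect of zero
```

The instrument only works if it influences outcomes solely through the choice of hospital; in practice, whether distance (or any other candidate) satisfies this is an assumption the researchers must defend.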
The analysis of potential selection bias can reveal the "creaming" effect of private provision of (health) services. Caring only for the "easy" or routine cases and avoiding the more difficult and costly ones has economic advantages for the private provider, but for society as a whole the overall costs remain the same. A good public health service is a definite asset.
(Image: Isa Genzken exhibition, 2023, Neue Nationalgalerie Berlin)















The AI ChatGPT advocates AI for the public services (PS) for mainly four reasons: (1) efficiency gains; (2) personalisation of services; (3) citizen engagement; (4) citizen satisfaction (see image below). The perspective of public-service employees is not really part of ChatGPT's answer. This is a more ambiguous part of the topic and would probably need more space and additional explicit prompts to elicit an explicit answer on the issue. Given all the known concerns about AI, such as gender bias or biased input data, the introduction of AI in public services has to be accompanied by a thorough monitoring process. The legal limits to applications of AI are stricter in public services, as the production of official documents is subject to additional security concerns.
(See image.) ChatGPT provides a more careful definition than the "crowd" or networked intelligence of Wikipedia. AI only "refers to the simulation" of human intelligence (HI) processes by machines. Examples of such HI processes include solving problems and understanding language. In doing this, AI creates systems and performs tasks that usually, or until now, required HI. There seems to be a technological openness embedded in this definition of AI by AI that is not bound to legal restrictions on its use. The learning-systems approach may or may not respect the restrictions set for these systems by HI. Or do such systems also learn how to circumvent the restrictions that HI systems set to limit AI systems? For the time being, we are testing the boundaries of such systems in multiple fields of application, from autonomous driving and video surveillance to marketing tools and public services. Potentials as well as risks will be defined in more detail in this process of technological development. Society has to accompany this process with high priority, since fundamental human rights are at stake. The potential for assisting humans is equally large. The balance will be crucial.













