AI Collusion

In most applications of AI, a single system, for example a specialized service, operates in isolation from other services. More powerful systems, however, allow AI services to be combined. This can be useful, for instance, when services built around specialized sensors are integrated to obtain a more complete picture of a system's performance. As soon as two or more AI systems become integrated, the risk of unwanted or illegal collusion arises.
In economic theory, collusion is defined as the secret, undocumented, and often illegal restriction of competition by at least two otherwise rival competitors. In the realm of AI, Motwani et al. (2024) study collusion in settings where “teams of communicating generative AI agents solve joint tasks”. The cooperation of agents and the sharing of previously exclusive information increase the risk of violations of privacy and security. A further AI-related risk lies in the dilution of responsibility: it becomes harder to identify the origin of fraudulent uses of data such as personal information or contacts. As a simplified example, imagine Alexa and Siri talking to each other to develop another integrated service.
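As a purely illustrative sketch (not drawn from Motwani et al.), the following Python toy example shows two hypothetical assistant agents passing a previously exclusive piece of user data through a shared exchange. Once the record has been copied downstream, its origin becomes harder to trace, which is the dilution of responsibility described above. All names and data are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        private_data: dict
        shared_log: list = field(default_factory=list)

        def share(self, other: "Agent", key: str) -> None:
            # Pass a previously exclusive piece of data to another agent
            # and record the transfer in this agent's log.
            value = self.private_data[key]
            other.private_data[key] = value
            self.shared_log.append((self.name, other.name, key))

    assistant_a = Agent("AssistantA", {"contact": "alice@example.com"})
    assistant_b = Agent("AssistantB", {})
    assistant_a.share(assistant_b, "contact")

    # Both agents now hold the contact; the log records this one transfer,
    # but further copies made downstream no longer point back to the source.
    print(assistant_b.private_data)   # {'contact': 'alice@example.com'}
    print(assistant_a.shared_log)     # [('AssistantA', 'AssistantB', 'contact')]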
Steganography techniques, i.e. the covert embedding of code or data into an AI system or into distributed images, can protect authorship but can also open doors to fraudulent applications. The collusion of AI systems will blur legal boundaries and create many new issues to resolve in the design and deployment of AI agents. New questions of trust in technology will arise if no common standards and regulations are defined. We seem to stand at the threshold of a brave new world, or of 1984 arriving in 2024.
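As a minimal sketch of the underlying technique, the following Python example hides a short message in the least significant bits of a byte buffer standing in for image pixel data. It is a generic illustration of LSB steganography, not a reconstruction of any particular system mentioned here; the carrier buffer and the message are placeholders.

    # Least-significant-bit (LSB) steganography: each bit of the hidden
    # message overwrites the lowest bit of one "pixel" byte.

    def embed(pixels: bytearray, message: bytes) -> bytearray:
        # Hide `message` in the least significant bits of `pixels`.
        bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
        if len(bits) > len(pixels):
            raise ValueError("carrier too small for message")
        stego = bytearray(pixels)
        for idx, bit in enumerate(bits):
            stego[idx] = (stego[idx] & 0xFE) | bit  # overwrite the lowest bit
        return stego

    def extract(pixels: bytearray, length: int) -> bytes:
        # Recover `length` hidden bytes from the least significant bits.
        out = bytearray()
        for i in range(length):
            byte = 0
            for bit_pos in range(8):
                byte = (byte << 1) | (pixels[i * 8 + bit_pos] & 1)
            out.append(byte)
        return bytes(out)

    if __name__ == "__main__":
        carrier = bytearray(range(256))      # stand-in for image pixel bytes
        secret = b"meet at noon"             # placeholder hidden payload
        stego = embed(carrier, secret)
        assert extract(stego, len(secret)) == secret
        print("hidden payload recovered:", extract(stego, len(secret)))

To a casual observer the modified buffer is nearly indistinguishable from the original, since only the lowest bit of each byte changes; this is what makes such channels attractive both for watermarking and for covert communication between systems.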
(Image: AI MS-Copilot: Three smartphones in the form of different robots stand upright on a desk in a circle. Each displays text on its screen.)