Those following developments in robotics have been astonished by the progress of, for example, rescue robots. After an earthquake, such robots can enter a building that is about to collapse and search the rooms for survivors. A recent article in “Foreign Affairs” by Michèle A. Flournoy opens its reflection on the military use of AI with a similar, twenty-year-old example: a small drone flying through a building to assess the dangers it poses to civilians or soldiers before they enter. Since then, technology has advanced, and the use of AI to automatically detect dangers and “neutralise” them is no longer science fiction. The wars of today are a testing ground for AI-enhanced military strategies. It is about time that social scientists get involved as well.
Warfare left to robots and AI is unlikely to respect human values unless we build such considerations into the new technology right from the beginning. An advanced comprehension of what algorithms do and what data they are trained on is a crucial element to watch out for. According to Flournoy, AI will assist the military in planning as well as logistics. Additionally, AI will allow a “better understanding of what its potential adversaries might be thinking”. Sifting through hours of surveillance video is also likely to be taken over by AI, as the time-consuming nature of the task ties up many staff who could be assigned to other duties. Training people and the armed forces thus becomes a crucial part of any AI strategy. The chances of developing a “responsible AI” are high in the free world, which cherishes human rights and democratic values. Raising curiosity about AI and raising awareness of its dangers are two sides of the same coin, or bullet. Both need to grow together.
(Image created by DALL-E Copilot on 2024-04-23. Prompt: “5 Robots disguised as soldiers with dash cams on helmet encircle a small house where another robot is hiding”)