When I was younger, I would often pose a question that was popular at the time: if you are in a burning museum and can save only one of two things, an old lady who will most likely die the next day or an antique painting, which would you choose?
The answers depended on who I was asking, and then I would ask follow-up questions, such as: what if you were the old lady? But it was clear that we all appreciate and value art, and that for some, it was even more precious than human life.
If you had asked me a few years ago what the problem with AI is, I would have passionately told you about how it kills art.
All that it generates, whether images, music or a piece of writing, is cheap, low-quality, lifeless work. Conversely, an artist’s life, experiences and emotions influence and bleed into the art they make, and AI takes all that away. It takes away the human joy, opportunity and experience of creating art by depriving us of doing these tasks ourselves, including the planning and thought involved throughout. I would have said that just as our muscles deteriorate when we do not exercise, AI eats away at the neurological structures required for critical thinking and creativity. I would also have spoken about the injustice of AI taking pieces of material from the internet and creating a remix of stolen work.
If you had asked me the same question a few months ago, I would have further told you about how it is not even a ‘creativity’ problem anymore but a ‘truth’ problem. Not only do we no longer know what is factual and accurate, but we also cannot truly be sure who has really made that piece of art, who has really written that Instagram caption or tweet.
By ‘truth’ problem, I also mean that AI feeds us false information, disrupts our schemas, and in doing so destabilises society and our sense of reality. Most recently, AI falsely claimed that an image of the mass graves of the 175 victims of the Minab massacre in the 2026 US-Iran war was taken in Turkey after the 2023 earthquake. Another time it said the image was from Indonesia in 2021, during the Covid-19 pandemic. Fabricating history in this way protects those responsible for killing these schoolgirls and further distorts the facts.
While this still bothers me, the conversation today is very different. My biggest problem with AI is that it facilitates warfare and kills human beings. For nearly a decade now, the US administration has used AI tools to determine target priorities and execute strikes. In particular, the US military’s computer systems have been using an AI tool called the Maven Smart System, often shortened to Maven.
In a demo of the Maven Smart System at a conference last month, Cameron Stanley, the Pentagon’s chief digital and artificial intelligence officer, demonstrated how simple it was to use. On the screen behind Cameron, a cursor hovered over an overhead image of lined-up cars, showing numbers indicating their measurements, locational coordinates and other data. Cameron showed that with a few clicks, “left click, right click, left click”, the “detection” of an object could be moved into a “targeting workflow”.
AI’s role in war is to gather intelligence, plan major bombing missions, and select targets and munitions for commanders to use in each mission. What set the 2026 US-Iran conflict apart was that AI was no longer merely aiding in warfare, but was relied upon at every stage of the operations. It was deployed in the planning of drone strikes, intelligence analysis, target selection, and cyber defense.
Warfare at such rapid speed would have been impossible without the automated real-time analysis that AI was providing. Previously, without AI, human analysts would spend months gathering and interpreting the same data from satellite imagery, surveillance feeds, and field recordings. In fact, 20 people using Maven today match the work of more than 2,000 soldiers in the Iraq-war era. The system works by giving commanders video-game-like abilities to oversee and track battles, and is a joint project that uses surveillance-gathering AI software from Palantir alongside AI models such as Anthropic’s Claude, which form part of the surveillance and analysis systems used in US military command.
These systems recommend target priorities and process millions of data points in minutes. In the war with Iran, AI models gathered the military’s many channels of data, intelligence, satellite imagery and asset movements, combined them into a single software platform, and used this information to suggest hundreds of targets and places to bomb. Subsequently, in the course of a month, the US military hit more than 12,000 targets, with 1,000 in the first 24 hours alone.
Independent AI expert Naim Zamani explained that “AI is not only used to identify targets on the ground, but also in information warfare and digital deception.” He explains that “controlling and analysing data faster than the opponent can change the media narrative and affect public morale”. This shows that AI is being used to manipulate public opinion, making the “truth” problem one of its biggest.
US Secretary of Defense Pete Hegseth has been public about his desire to use AI throughout the military; he and Donald Trump have both tweeted about wanting to bomb Iran “back to the Stone Age”.
So today, my biggest problem with AI is not that it kills art, the very thing that humans live for, but that it kills human life itself.
When I am reminded of the question of whom to save, the valuable painting or the old woman, it goes without saying that the world we are living in today both destroys the painting and kills the woman.