Last week, OpenAI CEO Sam Altman said that photos and AI-generated imagery will converge. For someone whose intellect receives so much acclaim, it’s alarming how little he understands photography, its function within society, and the far-reaching implications of that convergence.
During his recent interview with Cleo Abram, Altman was asked about a video of bunnies on a trampoline that went viral before being identified as AI-generated. “The threshold for how real does it have to be for it to be considered real will just keep moving,” Altman says, citing sci-fi movies and holiday photos from which fellow tourists are deliberately erased as evidence that our imagery is already increasingly fantastical.
“It’s just going to gradually converge,” he explains, and, in keeping with a complicit client media that stays blissfully devoid of skepticism for fear of being denied access, the journalist doesn’t push any further. This is already dangerously close to asking Altman questions he doesn’t want to answer; best to move on, and quickly. What’s left unsaid will have huge consequences, not just for the internet but for society more broadly.
Photos Have Never Been Real, but That’s Not a Reason to Destroy Them
Last year, Patrick Chomet, one of Samsung’s senior executives, defended the generative editing options on one of their latest phones by claiming that “there is no real picture” when it comes to digital photos. “You can try to define a real picture by saying, ‘I took that picture,’ but if you used AI to optimize the zoom, the autofocus, the scene – is it real? Or is it all filters? There is no real picture, full stop,” Chomet explained.
And he’s right. By definition, an image is not real; it is always a facsimile, sitting somewhere on a sliding scale of truth and subject to competing claims. As consumers, we constantly navigate this scale, placing a huge amount of value in authenticity and in the idea that an image represents something tangible: the ideals we invest in an image tend to determine its worth, and the cute rabbits are a perfect example. For a moment, we believed that someone had reviewed the footage from their doorbell camera and experienced pure glee. As naive viewers, we shared that excitement, imagining for ourselves what it must have felt like. Then comes the realization that it’s not real, and the shared experience is lost; the footage becomes nothing more than AI slop. The pixels haven’t changed, but their value has been transformed.
Why We Hate AI Imagery
A few months ago, I wrote an article about why you can love an image and immediately hate it upon discovering that it’s AI. As a comparison, I explained how, at first glance, impressionist painters lacked skill. “When you learn about why this style emerged and what the artists were trying to achieve,” I wrote, “there is a human connection, a social understanding that has the potential to expand your mind and take you beyond the surface.”
AI-generated art lacks human experience and is inherently un-social. I value a photo of a misty mountaintop because I know that a photographer got up before dawn and hiked for hours with a heavy backpack, gambling on the weather, to get the perfect shot. My appreciation of the image comes in part from knowing that process and knowing what it feels like to stand in awe of the natural world. That photographer felt something, and by viewing the photograph, I have a connection to that feeling. The image is not merely a visual record; it conveys part of the experience.
By contrast, an AI-generated mountaintop is unadulterated slop. No one struggled, no sense of the sublime was experienced, and consequently, nothing of value was created. This is what Altman fails to see — or perhaps very deliberately chooses not to discuss. He knows that his technology has far-reaching consequences for society, and it’s best not to dig too deep for fear of saying too much. “A higher percentage of media will feel not real, but I think that’s been a long-term trend anyway.” Superficial answer. It will happen anyway. Smile. Handwave. On to the next question.
Generative AI Is Digital Acid Rain
Look beyond Altman’s lightweight answers and it becomes clear that the new age he heralds is one in which we destroy the value of imagery and, with it, undermine the value of human experience. A tweet went viral this week describing generative AI as “digital acid rain, silently eroding the value of all information”. Before long, the images we encounter will not be “a glimpse of reality, but a potential vector for synthetic deception,” and when you trust nothing, it becomes impossible to value anything.
The swill of disinformation and alternative facts has already undermined our notion of truth, but with generative AI, we are slowly killing off our potential to enjoy beauty, too. It’s “the flattening of the entire vibrant ecosystem of human expression, transforming a rich tapestry of ideas into a uniform, gray slurry of derivative, algorithmically optimized outputs.”
The AI boosters don’t want to address any of these points, and you can see why. The cracks are starting to appear: OpenAI’s GPT-5 failed to impress even the most ardent fans, and headlines such as “Billion-Dollar AI Company Gives Up on AGI While Desperately Fighting to Stop Bleeding Money” are suddenly becoming more commonplace.
AI Boosterism Is Out of Control
AI will change society, and we need journalists to start asking more pressing questions. As it stands, the tech industry and its client media have a vested interest in glossing over the negatives, the limitations, and the possibility that this seismic shift might fall far short of what’s promised while introducing a raft of unwanted consequences that transform how we function as a society.
Maybe the headline is wrong. Altman knows what a photograph is, but he’s hoping that you have no idea, in case you start asking difficult questions.