Gen AI tools are nothing without their human conductor: our lived experiences fuel their outputs. To test this theory, I set myself the challenge of creating a short experimental film using a range of AI platforms. I went from concept to final output in just four days—far faster than the traditional process for an animated film, which usually follows a linear timeline of concept, pre-visualisation, offline edit and online edit.
The film I created, entitled ‘The Boy In The TV’, tells the story of an individual trapped in a cycle of addiction passed down through generations. The idea likely emerged from my subconscious, spurred by my concern that our kids are spending too much time on screens.
I chose not to make the aesthetic hyper-realistic or filled with 8K detail, as many AI-generated films tend to be. Instead, I wanted to embrace a more analogue feel, blending older devices with a near-future aesthetic, perhaps with a nod to Terry Gilliam.
I started by creating key art images of the central character in MidJourney, making tweaks in Photoshop. A conversation over lunch with my wife, C, led to the idea of a boy whose mother was addicted to TV. From there, I wrote lyrics, starting with the chorus ‘Switch it over, switch it off’ and following with the verses. I then fed these into Suno, generating different musical tracks and vocal treatments, and after numerous iterations settled on a track I liked.