From Fast Food to Fast Art: Do We Care?
In late 2025, we’re living in a time when anyone with a laptop and a few prompts can release a debut single. That’s exactly what happened this week with Silas Hart, a young newcomer from England, whose track Believer is featured as part of my latest Sounds from The Unknown: New Music Mondays releases.
But here’s the twist. Silas isn’t real. Not in the traditional sense. He was co-created by me and a suite of generative AI tools: ChatGPT for lyrics, MidJourney for his initial portrait, Freepik Spaces for consistent video creation, and Suno for the music itself. The entire track, persona, and rollout came together in under a day. The result? A synth-tinged power ballad with emotional resonance and polished production. All of it manufactured, yet strangely still affecting.
Maybe we have been here before, just in slower motion.
Think about somewhere like Sun Studio in Memphis. Mid-1950s. Small room, hot lights, one microphone doing a lot of heavy lifting. People like Elvis Presley, Johnny Cash and Jerry Lee Lewis walking in, often cutting live takes in a single afternoon. The technology captured the performance. It did not reshape it much. What you heard on the record still smelled a little like sweat and cigarette smoke.
Then the studio itself started to become an instrument. Motown in Detroit turned songs into carefully crafted machines. Different writers, different players, a house sound that could make almost any voice feel like it belonged inside that world. Phil Spector pushed this idea even further with his Wall of Sound, packing every inch of the recording with layered instruments and echo, so the song became a kind of sonic architecture rather than just a document of a band in a room.
By the time you get to producers like Quincy Jones or Trevor Horn, you can really feel the shift. Quincy shaping albums like Off the Wall and Thriller into complete experiences, where every track is tuned for impact, not just recorded. Trevor Horn leaning into early samplers and digital gear, building tracks where the studio, the machines and the edits are as important as the band. The voice is still human, but it is carried by an increasingly artificial environment.
From there it is not a big leap to the era of manufactured acts and mediated performances. The Monkees, cast for television first and music second. Hit factories like Stock Aitken Waterman pumping out chart-toppers with production-line precision in the 80s and 90s, polishing artists into brands. We had Milli Vanilli fronting someone else’s recorded voice. ABBA performing again, only this time as holograms. Gorillaz filling arenas with animated avatars. We got used to the idea that the person on stage and the person on the track might not fully line up.
That is why The Greatest Showman and The Fifth Element feel less like anomalies and more like signposts. In The Greatest Showman, one of the biggest emotional songs, “Never Enough”, arrives through a split performance: Loren Allred’s voice, Rebecca Ferguson’s body and face. Two people, one character, one moment that still lands. In The Fifth Element, an alien diva sings an aria that no human could perform in one pass. A real soprano, Inva Mula, recorded the pieces, then technology stitched them into something impossible. The emotion survives the edit.
So by the time the Beatles released Now and Then in 2023, with John Lennon’s voice cleaned up and recovered by AI, or a project like AISIS recreated the feel of Oasis without the band, it does not feel like a break with history. It feels like the next step in a long line of collaborations between humans, machines and myth.
Which brings us back to Silas, and to this strange creative space I am currently exploring.
What happens when the stitching disappears? When there is no human take hidden underneath the processing? No singer in a booth at all. Only a model, a prompt, and the choices I make as I sort through the results. And somehow, people still feel something.
The rise of “fast art” feels a lot like fast food. It’s easy to make. It hits certain cravings. It looks right. But it’s different from the slow, imperfect, unpredictable texture of a live performance. And maybe that’s exactly why we’ll see a return to the real. Not because AI isn’t good enough, but because people will start to miss the mess of it all. The sweat. The silence before the chord change. The voice cracking on the high note.
Of course, even those “real” spaces are tangled in tech. Live, real-time auto-tune smoothing out imperfections. Visuals timed down to the millisecond. Backing tracks tucked behind live vocals. We’re already half in, half out. AI didn’t create the blur. It just sharpened the contrast.
Maybe we need something like the old Parental Advisory stickers. Not as a warning, but as context. A subtle cue. This song was written with a model. That performance was crafted via prompt. No shame in it. Just a little transparency. Maybe it matters. Maybe it doesn’t.
Because when Silas Hart sings about heartbreak, some listeners feel it. They connect to the voice. They fill in the rest.
And that’s what matters, isn’t it?
If the emotion lands, does it matter where it came from?