Your New Favorite Song Wasn’t Written by Anyone…
Arden Rae did not emerge from a label’s talent scout or years of band rehearsals. She was born at the intersection of imagination and artificial intelligence. Her debut single Phoenix captures that origin story: a voice and presence rising out of prompts, pixels, and code to become something undeniably real.
The journey began with a vision. Her likeness was first shaped in MidJourney, with prompts guided and refined through ChatGPT until a distinct personality and aesthetic came into focus. Arden’s story and artistic identity were mapped out in words before a note was played. From there, ChatGPT was used again to sketch the outline of a modern, powerful, female-led anthem: a song echoing strength, resilience, and reinvention. That blueprint became the seed for Phoenix.
Suno, the generative AI music platform, transformed this vision into sound. Using the crafted outline and lyrics refined in ChatGPT, the system produced Arden Rae’s voice, her melody, her instrumentation, and her mix. What would normally take producers, vocalists, engineers, and weeks of studio time was realized in hours by someone who hadn’t played an instrument since middle school.
Suno itself has become a phenomenon. Created by the Cambridge-based startup Suno, Inc. and released publicly in December 2023, the platform allows anyone to describe a song in natural language and, within minutes, produces a full track complete with vocals, instrumentals, mixing, and cover art. Its pitch is simple: democratize music creation by giving polished tools to people with no formal training. Since launch, Suno has rapidly evolved, generating longer and more refined outputs. By mid-2025 it could produce eight-minute songs with richer vocal nuance and better prompt comprehension. Critics acknowledge its polish but note that the results often lean into generic, Western pop tropes, reminding us that even advanced AI benefits from human refinement.
The technology behind Suno is proprietary, but its foundations are shared with other text-to-music systems. A language model first interprets the user’s request, parsing out genre, tempo, mood, and theme. This data steers audio synthesis models that transform descriptors into music and lyrics. Diffusion methods reconstruct sound from noisy spectrograms, while autoregressive transformers predict audio tokens step by step. Suno is believed to use models like Bark for vocals and lyrics and Chirp for instrumentals, stitching the results together in a post-production stage. The system is probabilistic, which means the same prompt can generate different songs. Its strengths lie in mainstream styles, while experimental genres often highlight its limitations.
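The split between prompt interpretation and step-by-step token generation can be sketched in miniature. This is purely illustrative: the keyword parser, the descriptor lists, and the toy "model" below are invented for the example and bear no relation to Suno's actual, proprietary code. In a real system the generation step would be a transformer forward pass over learned audio tokens; here a seeded random walk stands in for it, which also demonstrates why identical prompts need not yield identical songs unless the sampling is pinned down.

```python
import random

# Hypothetical descriptor vocabularies for the toy parser (not Suno's).
GENRES = {"pop", "rock", "folk", "techno"}
MOODS = {"uplifting", "dark", "melancholy", "triumphant"}

def parse_prompt(prompt: str) -> dict:
    """Stage 1: a 'language model' stand-in that pulls genre and mood
    descriptors out of a free-text request."""
    words = prompt.lower().replace(",", " ").split()
    return {
        "genre": next((w for w in words if w in GENRES), "pop"),
        "mood": next((w for w in words if w in MOODS), "uplifting"),
    }

def generate_tokens(spec: dict, length: int = 16, seed: int = 0) -> list[int]:
    """Stage 2: autoregressive generation. Each step 'predicts' the next
    audio token from the tokens so far, conditioned on the parsed spec.
    A seeded RNG plays the role of the sampling step: same seed, same
    song; different seed, different song from the same prompt."""
    rng = random.Random(f"{seed}-{spec['genre']}-{spec['mood']}")
    tokens = [0]  # start-of-audio token
    for _ in range(length):
        tokens.append((tokens[-1] * 31 + rng.randrange(8)) % 256)
    return tokens

spec = parse_prompt("a triumphant pop anthem about rising from the ashes")
print(spec)  # {'genre': 'pop', 'mood': 'triumphant'}
print(generate_tokens(spec)[:5])
```

The probabilistic behavior the article describes lives entirely in the sampling step: rerunning `generate_tokens` with a different `seed` produces a different token sequence from the very same parsed prompt.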
The rise of AI music also brings legal and cultural tension. Suno has admitted in court that it trained on tens of millions of recordings scraped from across the internet, including copyrighted works. The Recording Industry Association of America and major labels filed lawsuits in 2024, accusing Suno of unlawful copying to build its models. Suno defends itself by claiming fair use, comparing its training process to search engines indexing content. Courts have not yet resolved whether this argument will stand. Ownership of generated songs is also contested: Suno grants commercial rights to paying users, but the U.S. Copyright Office has consistently refused to recognize works created solely by AI. The boundaries of authorship, originality, and protection remain unclear.
For all its novelty, Suno is not entirely new in cultural terms. Pop music has always been industrialized. Stock, Aitken and Waterman in the 1980s churned out hit after hit by following a formula of bright hooks and synthesized beats. The Milli Vanilli scandal of 1990 forced the public to confront authenticity, when fans discovered the duo hadn’t sung their own songs. Today, hit tracks often list more than a dozen writers and producers, fragmenting authorship into a committee effort. Suno compresses all of this into one system, offering the illusion of authorship to the user while most of the creative labor is automated.
This efficiency brings both promise and risk. Hobbyists can make professional-sounding music for the first time, while established artists can use AI as a sketchpad for ideas. Yet the sheer volume of generated tracks could overwhelm platforms and further destabilize the fragile economics of songwriting. The parallels to pop history are clear. Music has always moved between formula and originality, authenticity and artifice. What changes now is the speed and scale. A handful of producers once shaped the charts. Today, an algorithm can generate thousands of songs in a single day.
Arden Rae embodies this shift. She is more than a proof of concept. She represents a new kind of artistry, one born from collaboration between human imagination and machine intelligence. Phoenix is both her first single and her origin story, a track that rises from prompts and code yet still carries the power to move an audience. In her voice we hear both the possibilities and the questions of this moment. What makes a star—talent, technology, or the story we choose to believe? And when you hear a song, do you value the human behind it, or is it enough that the music simply resonates?