
Is AI Cutting the Tether Between Human Musicians and Listeners? 

With the rising use of AI in the music industry, have we reached a point where we prefer artificially manufactured songs and artists to genuine human ones?

Oct 22, 2025
Rolling Stone India

Illustration by Rudraa Abirami Sudharshan

One of the tropes of the cyberpunk genre is the establishment of an artificially intelligent authority figure that governs its creators — the humans. But with every advancement in AI, this is no longer limited to the realm of fiction. While AI might not have reached the state of sentience, in many ways it is beginning to dominate the human race across various fields.

In the recent past, the use of AI in creative spaces such as art, animation and writing has caused a huge uproar. AI can create videos and images, and even write entire books, in the time it takes to snap your fingers. People are consciously gravitating towards employing AI as a tool, even in spaces that should ideally be entirely human.

And the latest subspace is the music industry.  

Taylor Swift is under fire for the alleged use of AI in the promotional videos for her newest album The Life of a Showgirl. Her fanbase, the “Swifties,” have been quick to home in on inconsistencies in footage that appears to have been doctored by an artificial hand. Swift has always been on the side of artists against the AI invasion, so it raises a lot of red flags if one of the biggest names in the music industry has possibly gone over to the dark side.

When The Velvet Sundown debuted in June this year, they put out three albums, crept into playlists and amassed over one million listeners on Spotify in the span of a month. The catch: they weren’t real. Everything about them, from their songs and names to their biographies and images, was artificially manufactured. They were ghosts on the internet. It wasn’t until the jig was up that outraged listeners migrated towards artists whose music was more organic. But if they hadn’t been exposed, chances are The Velvet Sundown would’ve gained even more momentum.

At the OpenAI Korea launch on Sept. 11, a mistranslation of a quote by singer/songwriter/producer Vince, who helped write “Soda Pop” for Netflix’s smash hit K-Pop Demon Hunters, caused quite the stir. Vince was initially quoted as saying that he used ChatGPT to come up with ways to make the song catchier. If the more accurate translations are to be believed, his statement was broader and more generalised — he “occasionally” uses ChatGPT for inspiration while producing K-pop. Regardless of the extent to which he employs it, it does seem like using a cheat code in the creative thought process.

AKB48, one of Japan’s biggest idol girl groups, recently held a televised songwriting contest in which the winning composition would become their latest single. Yasushi Akimoto, Japan’s best-selling lyricist, was pitted against an AI version of himself — AI Akimoto, trained on the original Akimoto’s songs to mimic his writing style. The two came up with distinct compositions. The real Akimoto’s song “Cécile” lost to AI Akimoto’s “Omoide Scroll” by 3,000 points. In this case, AI had managed to replicate and even beat the very person it was modeled on.

In the same month as The Velvet Sundown’s debut, MIT published a study that found that users of ChatGPT and other large language models (LLMs) displayed lower brain activity than those who worked without any external or artificial assistance. LLM users “consistently underperformed at neural, linguistic and behavioral levels”; in other words, the cognitive cost of using AI is steep. Excessive use going forward would be detrimental to instinctual thought, with creativity becoming a casualty.

As David Bowie once said, “Always remember that the reason that you initially started working is that there was something inside yourself that you felt that, if you could manifest in some way, you would understand more about yourself and how you coexist with the rest of society. I think it’s terribly dangerous for an artist to fulfill other people’s expectations — they generally produce their worst work when they do that.” 

Have we as a species reached a point where AI knows us better than we do ourselves? Is there no more room for freedom of thought, expression, and creative process that makes us, us? In the three scenarios mentioned above, AI has managed to surpass human creators in seconds. 

Going by what Bowie said, the art of songwriting is deeply personal — the writer is sharing a piece of their soul with the audience. There’s a story, a memory or a moment lodged in there that can instantly strike a chord with the listener purely on the power of human connection. Whether it’s just inspiration or the track’s lyrics, notes and chord progression, if it is generated by AI, then whose story are we really hearing? Why is it even being written in the first place? It makes one wonder: what is the point of listening to anything at all? Has the human race reached a point of cognitive exhaustion where we are unable to formulate thoughts and emotions of our own and convey them? We’ve reached a point in time where we need an artificial entity to tell us what it is that we like, want to like and need to like.

OpenAI’s Sora 2 is a social media app where users can generate short-form videos from seemingly nothing. While Sora 2’s prompts don’t allow you to directly use the names of known industry figures, there are ways to circumvent these guardrails and arrive at a distorted yet clearly recognizable version of a song or likeness of the artist you want to generate. Because these generated tracks and videos are merely modeled on the blueprint of the original, it is difficult for the music industry to crack down on them. This isn’t even taking into account how short-form videos are a means for smaller artists to be discovered.

Contemporary audiences are becoming slaves to algorithms that decide what’s trending, what’s hip, what’s in and, more importantly, what sells. Generic, generated content slinks up earphone cords and fills the soundscape of the mind — it is inescapable. Thanks to short-form videos dominating the net, users’ attention spans have shrunk to under a minute — if something does not catch their attention as their fingers mindlessly scroll through their feed, it’s forever lost to cyberspace. And because AI works by incorporating what’s popular and overexposed, there’s also a tendency for songs to start sounding similar, with words reduced to synonyms and content repackaged with a shiny new coat of paint. In the scramble to rise above the oversaturated depths of the internet, creativity and individuality are killed in favor of being more visible or SEO-friendly. It’s a cycle that feeds into itself over and over again.

It’s the reason today’s generation doesn’t have its own “Bohemian Rhapsody.”

AI is also able to mimic, with terrifying accuracy, a person’s voice from existing samples. In a strange sort of digital necromancy, AI covers of modern songs by musicians who have long since passed are sprouting up all over social media. And while it may seem amusing, the horrifying fact remains that songs are being synthesized from the vocals of musicians who are still alive and working in the industry. It can take less than two minutes for a song to be generated and sung, often without consent.

It isn’t as if voicebanks and singing software are novelties. VOCALOIDs have been around since the early 2000s and function as digital singers for music producers. And while VOCALOIDs can be considered the AI of their time, and Yamaha’s latest VOCALOID6 is even powered by AI technology, they function more along the lines of instruments. These singers need to be tuned properly, like a piano, for them to sound real. However, with the advent of AI, this tuning portion of the VOCALOID software can be bypassed entirely using an AI model. With the push of a button, you can get the VOCALOID to sing instead of spending hours poring over tracks or painstakingly tuning the notes to get that perfect pitch with the right vibrato.

The problem with employing AI, even as a tool, is that it is trained on pre-existing work — The Velvet Underground was probably where AI got the name The Velvet Sundown. That means that somewhere, someone had written something that was repurposed and regurgitated as an artificial product, which is now making bigger waves than the original. The waters that AI swims in are also extremely murky: what and whose datasets is it trained on, what are the legal and ethical implications and, more importantly, is this not a form of copyright infringement?

Unconsciously, today’s listeners tune in to the ghosts in the machine, the formless lines of digital code created from the foundations of human creativity. Perhaps our AI overlords have already taken control of us without our knowledge? 

While AI might come across as the easy way out, it feels as though the tether between the music and the listener is being severed. A song becomes a flash in the pan before the listener moves on to the next one. In five years’ time, no one will remember it because it’s no longer trending. Is it the distinct lack of human connection that keeps these synthesized songs from lingering in our memories?

To quote the lyrics of “Video Killed the Radio Star”:

They took the credit for your second symphony 

Rewritten by a machine on new technology 

And now I understand the problems you could see 
