Fear Factory’s Dino Cazares Says Selling Your Voice to AI Might Be the Only Way Forward

By Eliza Vance
Eliza specializes in the celebrity side of the rock/metal sphere, examining inter-artist relations, social media trends, and fan community engagement. She expertly interprets popular culture through...
5 Min Read
Photo Credit: Gary Wolstenholme/Redferns via Getty Images

Fear Factory guitarist Dino Cazares recently shared his perspective on how musicians can adapt to the rise of artificial intelligence in the music industry. He offered practical advice in an interview with Heavy Metal On Line.

Cazares discussed the evolving landscape of AI technology and its impact on musicians. He suggested ways artists can potentially benefit from these changes rather than simply being displaced by them.

“Well, we’re gonna have to learn how to adapt, and some of the A.I. music programs have,” Cazares said. “So, basically, you could sell the rights to your voice, so these A.I. programs have your voice.”


He explained how this adaptation could work financially for musicians.

“So if somebody wants to hear your voice or something like your voice to be used on a record or for a song or whatever, these A.I. programs will pay you a royalty,” he continued. “So there are some companies that are adapting to it and some musicians who are adapting to it, but for the most part, we’re pretty much all getting ripped off. And it’s been that way for many years. Not necessarily in the A.I. programs, but getting ripped off in the music industry for many years.”

Cazares also acknowledged the current reality of AI’s presence in music distribution and consumption.

“People are using all the music programs, actually using the songs, and they’re putting it out on Spotify and everywhere else,” he said. “I hear it on digital radio all the time. So it’s here. I mean, we can all complain about it and talk shit about it, but until we actually get rid of all of our electronic devices, nothing’s gonna change.”

The guitarist emphasized how deeply integrated AI has become in daily life.

“For many years, it’s already been adapted into our life, so most people don’t even notice it,” he explained. “Everything that we say and do and post — comments, pictures, videos — A.I. learns from that. And it’s learning more and more. And it’s much smarter than we are, and it’s gonna get even more smarter.”

Cazares’ observations align with significant developments in the AI music industry over the past two years. The legal landscape surrounding AI voice licensing has undergone major changes, establishing clearer frameworks for artist compensation and consent.

Soundverse reported that, beginning in early 2026, international tribunals established that AI developers must secure consent for voice model training, shifting away from punitive measures toward proactive licensing structures. Legislative amendments in the UK, Japan, and South Korea have embedded voice rights into broader AI regulation, defining how models can learn from human voices.

Several platforms have already implemented the type of structured compensation frameworks that Cazares described. Soundverse noted that its six-stage compliance model includes recurring compensation through a partner program, in which artists receive ongoing royalties when their voice DNA contributes to generated works. Meanwhile, Aiode, a US-based startup, develops virtual musicians based on real artists; musicians contribute directly to the modeling process and earn through revenue-sharing tied to their virtual counterparts.

Market growth supports Cazares’ assertion that AI integration is inevitable and expanding rapidly. StartUs Insights found that the generative AI music market is projected to reach USD 3.6 billion by 2033, with a 28.6% compound annual growth rate from 2024 onward. User-generated content is increasing rapidly on AI music platforms: Udio, a Google DeepMind spinout, attracted over 600,000 users within two weeks of its 2024 launch.

The legal requirements for transparency that Cazares indirectly referenced have also become more formalized. Soundverse revealed that courts have placed the burden on creators and platforms to identify synthetic voices through clear labeling systems. Watermarking and traceability have become integral to compliant AI operations, with every exported audio file carrying transparent watermark markers to ensure traceability across distribution networks.
