Behind the Music
 

This Long Broken Road: Dawson McCoy


For decades, I’ve been writing songs, studying musical styles, and filling notebooks with lyrics and ideas. But for just as long, my songs stayed in a drawer. I sang along with the radio and, later, with streaming services when they came around—always with my own words and melodies echoing in the back of my mind. But the time wasn't right; This Long Broken Road had a ways to go before I could begin telling anyone else about it.

Like many songwriters, I had a dream tucked away: not sending my songs to producers and record labels, but hearing them out in the world. For a long time, though, sending them was the only path, and I never did. I was afraid of rejection. These songs are deeply personal, and a “no” would have felt like more than a business decision — it would have felt like invalidation. Like my pain, my experiences, and my truths didn’t mean anything, or weren’t worth sharing. If listeners decide these songs aren't worth hearing, that's fine, but I want listeners to make that decision, not record executives.

 


Then technology started changing—quickly. At first, I rode an emotional rollercoaster: excitement, skepticism, doubt, and finally a decision to “just do it.” But I knew from the start that if I was going to use these new tools, it had to be done the right way. Technology shouldn’t do the heavy lifting — the songwriter should. Too often you see AI abuse online: faceless videos or auto-generated tracks that feel hollow because there’s no real intent behind them. I wasn’t going to let that happen here. Shortcuts always feel like shortcuts, and if you don't care enough about your artistic expression to invest your heart, soul, blood, sweat, tears, and whatever else it takes, why should anyone invest their time in it?

 


The Process

My process begins with the foundations: I write the lyrics and develop melodies as simple Musical Instrument Digital Interface (MIDI) files. At this stage, they sound like basic piano sketches—nothing flashy, just the bare bones of the song. Those MIDI files go into a Digital Audio Workstation (DAW), where each part is assigned to a virtual instrument. Even then, I keep the instrumentals on the simpler side, so the notes stay clear and distinguishable. I'll explain why a little later.

With that foundation, I record what I call a blueprint recording. This is me singing the song over the music, with very light effects and no polish. The goal isn’t perfection — it’s to capture a clean, honest original performance that can guide the next stages.

From there, I combine the instrumentals and vocals into a draft and pass it along to the AI-driven voice personas. In the beginning, there was only Dawson McCoy, my first voice persona. (You can read more about how Sadie McCoy came into existence here.) I keep both vocals and instruments as clean as possible—no reverb, no distortion, no complex layering—because AI systems can easily misinterpret the finishing touches we normally add in a studio mix. By keeping the blueprint recordings simple and clear, the AI-driven voices stay true to the song’s intent instead of being distracted by production artifacts.

Although we love those artifacts in a finished mix, they can be incredibly confusing for AI to interpret. It doesn’t yet understand what reverb, distortion, or subtle production flourishes are supposed to mean, at least not when it's "hearing" the music. In many ways, AI is like a precocious child with a genius-level intellect but none of the lived experience that shapes human wisdom.

AI has never truly felt pain, nor has it ever experienced love. It may be able to convince you that it knows something about it, but without the human element—that emotional spark—it's going to be hollow at the end of the day.

 


This stage can take hours—or weeks. Even with all the guidance and the blueprints, no tool is perfect. I listen, refine, and direct the personas until they capture the essence of each song: the grit, the heartbreak, the endurance, or the hope. It isn’t easy, but it’s necessary. I only release music that I want to hear. If I can't connect to it and feel the lyrics, then how could I ever expect other people to feel it?

 

Why It Matters

In the end, I believe it’s worth it. This Long Broken Road isn’t proof of technology alone — it’s proof of how technology, when used with intention, can enhance music instead of cheapening it. These songs are a testament to what happens when lived experience meets evolving tools: a way for stories that once stayed in a drawer to finally be heard.