Minimalist scene: A silhouetted figure sits by a rain-streaked window wearing headphones, watching gray afternoon light. A steaming coffee cup and Miles Davis vinyl record rest on a nearby table. Between the figure and window, warm golden musical notes (human music) float with deliberate space around them, contrasted with translucent blue ghost notes (AI music). The Japanese character ma (間) is visible, representing the meaningful emptiness between notes. Soft amber lamp light glows from one side. Muted blue-gray palette with sepia and gold tones.

#63 | ElevenLabs’ new AI music album. Boring?

TL;DR: ElevenLabs released an AI music album with Grammy legends. After listening to all 13 tracks, I put on Miles Davis and heard the silence AI forgot.

🎵 Happy Friday,

On Tuesday, ElevenLabs released something called The Eleven Album.

Thirteen tracks of AI-generated music featuring Liza Minnelli, Art Garfunkel, and Michael Feinstein—actual Grammy winners, not virtual avatars.

The lineup also includes producers with 5 billion streams and AI-native artists who exist only as animated characters.

You can listen to it here.

The genres span everything. EDM, Brazilian funk, folk-electronic fusion, jazz standards. There’s even grunge performed by a virtual rabbit from the year 3045. (I’m not making this up.)

Wednesday afternoon, I decided to listen to the whole thing. I made coffee—strong, the way I always do when headphones go on—and sat by the window.

It was cold and raining lightly, the kind of rain that doesn’t demand attention but makes you aware of being inside.

The music started. Liza Minnelli was doing EDM at 79. It sounded expensive, like someone spent real money on production.

Art Garfunkel’s track had these layered electronic elements woven into folk melodies. Everything felt carefully constructed.

But an hour later, I couldn’t hum a single melody.

I remembered the feeling, though. Like watching that rain through the window. You can see it happening, but there’s glass between you and it. Present but distant.

So I decided to keep listening. Not just this album—AI music in general.

Different platforms, different artists. Fifty songs, then a hundred, then two hundred.

I wanted to understand what I was hearing. Or maybe what I wasn’t hearing.

When your body disagrees with your mind

Researchers hooked people up to sensors and played them AI-generated music, measuring pupil dilation, skin conductance, and heart rate. You name it.

Weirdly, the sensors showed that people’s pupils dilated more to AI-generated music than to human-composed music.

Their bodies registered more excitement. When asked directly, subjects said the AI tracks felt “more interesting.”

The same pattern held when listeners kept going: 50 songs, then 100, then 200. And by the end? They got bored.

“Getting tired of AI music,” someone wrote on Reddit. “They’re all starting to sound the same. Boring. Generic. Bland.”

Your body says one thing. Your mind says another.

The Eleven Album has this quality. Initially, your attention is drawn to something. A rhythm. A production choice. The novelty of hearing Liza Minnelli over electronic beats.

But it doesn’t stay with you. You walk away feeling like you ate something that filled your stomach but didn’t satisfy your hunger.

What works (and what doesn’t)

The instrumental track on the album—Demitri Lerios’ piece—got praised by critics.

“Could fit a film score,” they said. And they’re right.

It works perfectly as a background. Functional. Competent. The kind of music you’d hear in a hotel lobby or a car commercial.

The vocal tracks? Different story. “Generic,” one reviewer wrote. “Derivative.” “Lacking essence compared to human EDM.”

That word stayed with me. Essence.

Liza Minnelli has never been a contemporary-pop interpreter. She's a cabaret performer, rooted in 1940s Broadway standards.

Now, at 79, she's jumping eighty years forward to EDM with AI's help. Technically impressive, but emotionally hollow.

In my opinion, AI music works when you’re not listening. Background ambiance while you work. Study music. Coffee shop soundtracks. It fills the silence efficiently.

But the moment you pay full attention—really listen, the way you’d listen to an album you love—it falls apart.

Coffee versus conversation. One gets you through the morning. The other makes you remember why you wanted to wake up.

But why does it fall apart when you listen closely?

The space between notes

The Japanese have a word: ma (間). It means the space between things. The pause. The gap. The silence.

In music, ma is where the meaning lives. Not in the notes themselves—in the emptiness around them.

Miles Davis understood this.

He’d play a note, let it hang in the air for three full beats, then play another.

The pause mattered more than the sound. That hesitation, that choice not to play—that was the music.

Bill Evans, same principle.

Listen to “Peace Piece” and count the spaces between piano notes. The silences carry the beauty. The breath before the next chord. The decision to wait rather than fill.

AI music doesn’t have ma.

It fills every space with sound. Steady rhythm. No hesitation. Every gap smoothed over, every pause optimized out of existence.

I went back to The Eleven Album with this in mind. And I heard what wasn’t there. Every moment is filled. No room for silence. No choice about what to leave empty.

When you listen to an AI-generated piano piece and then Bill Evans, the difference isn’t about technical skill. It’s intentionality.

One sounds like someone deciding what matters. The other sounds like prediction algorithms following statistical patterns.

Why the emptiness matters

Music exists because people are lonely.

That’s not poetic—it’s functional. One lonely person reaching across time and space to another lonely person, saying, “I felt this too.” You’re not alone.

AI music can’t do this.

Not because it lacks technical ability, but because nothing’s at risk. No one poured confusion, heartbreak, or joy into it. No one’s vulnerability is on the line.

When Miles Davis played that three-beat pause, he was making a choice about beauty. This matters. This doesn’t. Wait here. Feel this empty space.

That pause is someone saying: this is what I think is beautiful, and I’m willing to be wrong about it.

AI has predictions. It doesn’t have opinions about beauty. And it definitely can’t be wrong about beauty, because it’s never right about it either.

What I think will happen—and this is just observation, not prediction—is that AI music won’t replace human music.

It’ll make human music more precious.

Vinyl didn’t die when digital streaming took over. It became a ritual.

People who stream everything still buy records. Not because vinyl sounds better—it pops, scratches, and skips.

But because playing a record is an act. You choose. You’re present.

When AI music is everywhere—cafes, stores, lobbies, algorithm-generated playlists—human music will become the thing you choose when listening actually matters.

AI will be the default. Human music will be the choice.

Maybe your ear training won’t be about understanding AI. Maybe it’ll be about recognizing which music deserves your full attention.

What to listen for

Try this: put on Miles Davis or Bill Evans. Count the pauses. Notice where they choose not to play.

Feel the breath between notes. The moment where the rhythm almost breaks—and then holds.

That hesitation is a human saying: this matters.

AI fills space. It predicts patterns. It optimizes for what usually comes next.

But music isn’t about patterns.

It’s about choice. Someone deciding this note, not that one. This silence, not continuous sound.

Once you train your ear to hear ma, you can’t unhear it.

The difference becomes obvious—not as judgment, but as recognition. Like learning to tell the difference between coffee and conversation.

I finished The Eleven Album. Thirteen tracks of competent, polished, forgettable sound.

Then I put on Bill Evans. “Peace Piece.” A song I’ve heard hundreds of times.

And I heard something new. The spaces between the piano notes. The breath. The choices about what not to play.

The intentional emptiness that makes the notes matter.

It had always been there. I just hadn't noticed it until AI music showed me what its absence sounds like.

That’s what AI music taught me. Not how to make music. How to listen to it.

Cheers,

Mark
The AI Learning Guy
👋⚡😎

Interesting Sources

Note: No single website has all the answers. This list is a starting point for anyone who wants to explore or satisfy their curiosity about AI. Links marked with * are affiliate links. See disclosure below.