In Appreciation of MIDI

How a 30-year-old electronic communications protocol remains on the bleeding edge of music technology

I asked DeSantis if the company was worried about the unexplored legal ramifications of someone pulling note data from a copyrighted piece of material: extracting a MIDI melody from a Beatles song, for example. He replied, “Anything we could do that might restrict people from making ‘bad’ ethical decisions with third-party material would invariably detract from the usability of Live in other ways as well. We want to give more musicians more possibilities to do more things with musical material, and that’s pretty much all we think about.”

From what I’ve seen with the new audio-to-MIDI feature so far, I tend to agree that Ableton doesn’t have much to worry about. Early adopters who are already posting instructional videos on sites like YouTube have mostly used the feature to capture and manipulate their own self-created MIDI data. Derek VanScoten, an electronic music producer who uses Ableton extensively, confirmed that, at least for him, the feature has more to do with workflow, utility, and inspiration than with trading on the recognition of someone else’s famous melody.

“I’ve taken isolated Rhodes [electric piano] lines from a breakdown,” VanScoten recounted, “performed [audio-to-MIDI conversion], and then altered the key, changed it to an arpeggiator patch, and used it for a verse.” VanScoten also said his life as a commercial music artist was greatly improved by the feature: “I can take the client’s reference track, and then flip it just enough to make it work.”

The ethical questions around sampling predate Ableton, and any tools for manipulating digital media simply make easier what people were already doing in the analog domain anyway.

When I asked whether he had any reservations about other artists being irresponsible with the technology, he said happily, “[I] was really skeptical of audio-to-MIDI completely ruining the game. Now I’m really open to any form of creative genius.”

For his part, DeSantis pointed out that the audio-to-MIDI feature was designed in an effort to inspire artists to create sounds, not enable them to steal sounds. “[A] big goal for Live 9 was to focus on different ways to help musicians in the early stages of the music-making process — when ideas are coming quickly, but are also not necessarily fully-formed and are difficult to pin down,” DeSantis said. “For a lot of Live users, existing music (either from records or from their own instrumental or vocal recordings) can serve as the catalyst for a new song, and we wanted to provide a more flexible way to work with this material.”

DeSantis did want to make it clear that, although Ableton Live 9 is approaching sampling music in a different way, they still have a great respect for the art form. “I think of [Ableton Live 9’s audio-to-midi feature] as the next generation of sampling,” said DeSantis. “Some musicians sing or play guitar or program drum machines, but others work with existing material as their ‘instrument.’ Until audio-to-MIDI, this meant that these musicians were faced with a real limitation — the notes were inextricably bound to the sound itself. Now, you can take the notes and repurpose them with any sound. Additionally, because MIDI is an inherently more flexible medium than audio, you can also do things like revoice chords, or extract just a kick drum pattern from a breakbeat, etc. These are all creative possibilities that aren’t easily possible within the audio domain.”
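DeSantis’s point that extracted notes become freely reusable material can be sketched in a few lines of code. This is an illustrative sketch only, not Ableton’s internals: it assumes extracted notes are held as simple (pitch, velocity, start, duration) tuples, with MIDI pitch 60 as middle C.

```python
# Illustrative sketch (not Ableton's internals): once notes exist as MIDI
# data, they are just numbers, so "repurposing" them is simple arithmetic.
# Each note: (pitch, velocity, start_beat, duration_beats).
# MIDI pitch 60 = middle C; +12 = up one octave.

extracted = [
    (60, 100, 0.0, 1.0),  # C4
    (64, 96, 0.0, 1.0),   # E4
    (67, 92, 0.0, 1.0),   # G4 -> together, a C major chord
]

def transpose(notes, semitones):
    """Shift every pitch; timing and dynamics are untouched."""
    return [(p + semitones, v, s, d) for (p, v, s, d) in notes]

def revoice_up(notes):
    """Move the lowest chord tone up an octave (a simple revoicing)."""
    lowest = min(notes, key=lambda n: n[0])
    return [
        (p + 12, v, s, d) if (p, v, s, d) == lowest else (p, v, s, d)
        for (p, v, s, d) in notes
    ]

minor_third_down = transpose(extracted, -3)  # now an A major shape
open_voicing = revoice_up(extracted)         # C moved above E and G
```

The same kind of arithmetic underlies DeSantis’s other examples: filtering the note list by pitch range, say, is one plausible way to pull a kick pattern out of a converted breakbeat. None of these operations is practical when the notes are “inextricably bound to the sound itself.”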

And it’s true: these things simply weren’t possible before, and the fine-tuned ability to extract notes from a chord or melody allows those with a basic knowledge of composition and theory to approach their computer-based music in a much deeper way. Folks have been using Ableton’s audio tools to deconstruct inspirational music and learn from its conventions for years. Now, the audio-to-MIDI feature allows people to quickly integrate this learning process into their existing workflow.

According to VanScoten, “One [other way to use the feature] is for transcription. Sometimes I’ll take a really thick piano chord from a jazz record. I can usually hear about 90% of it, but sometimes audio-to-MIDI may help me fill in the final gaps.”

Perhaps the genius of the audio-to-MIDI feature is this focus on technical precision. The MIDI format established an elegant way to isolate individual musical components and reduce them to mathematical values. By employing this method in real-time conversion, Ableton has managed to sidestep criticisms of pastiche altogether. You can make a hip-hop song with the breakdown from Pink Floyd’s “Money,” and squinting social commentators can question whether your fans really like your original piece or just the familiar tune of “Money.” But if you extract those melodies and chords in the form of MIDI data and manipulate them to your heart’s content, you’ve employed outside material only as a source of inspiration, never imitation. Plus, you’ll probably (hopefully) fly under the radar of Pink Floyd’s copyright attorneys.
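That reduction to mathematical values is quite literal: in the MIDI protocol, a sounding note begins as a three-byte note-on message. A minimal sketch (the `note_on` helper here is hypothetical, for illustration, not part of any particular library):

```python
# A MIDI note-on message is three bytes: a status byte (note-on nibble
# 0x90 plus the channel number) followed by two data bytes, pitch (0-127)
# and velocity (0-127). Timbre lives elsewhere, in whatever instrument
# receives the message -- the notes themselves are just these numbers.

NOTE_ON = 0x90  # note-on status nibble

def note_on(channel, pitch, velocity):
    """Build the raw bytes for a note-on event (channel is 0-15)."""
    return bytes([NOTE_ON | channel, pitch, velocity])

msg = note_on(0, 60, 100)        # middle C, fairly loud, channel 1
status, pitch, velocity = msg    # unpacking bytes yields plain integers
assert status & 0xF0 == NOTE_ON  # upper nibble identifies the event type
```

Because a melody is ultimately a stream of small integers like these, swapping the sound that renders it, or editing the notes themselves, costs nothing.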

I recently spoke with Dave Smith, one of the creators of MIDI, about these new uses of his technology. In the late 80s, Smith moved on from Sequential Circuits and contributed to the development of other electronic instruments. Notably, he helped develop the first software-based synthesizer for use with a PC. Eventually, Smith established his own hardware instrument company under the name Dave Smith Instruments. Synthesizers like the Mopho, Prophet ‘08, and Prophet ‘12 — not to mention drum machines like the Tempest — have revitalized the hardware-focused community in recent years. However, he still has a lot to say about this cutting-edge software technology.

First of all, he’s much less forgiving of conventional audio-sampling musicians. “I think there’s a clear difference between what we might call ‘audio sampling’ and ‘note sampling,’” said Smith. “The former has been an ongoing issue for years and is more clearly a ripoff, as the samples get longer and are more easily identified.”

He does see the technical potential in products like Ableton Live 9. Smith went on to say, “Automatically recovering the notes, though, is something that has been done manually for years by musicians. We all used to listen to records, often slowed down, to learn a guitar or keyboard riff. Having an automatic method to provide what is basically sheet music doesn’t seem to have the same level of stealing as audio sampling. At worst it’s a shortcut […]”

However, it was clear Smith still prefers the “human” feeling of a person playing an instrument: “From my experience, playing MIDI files of a song, no matter how accurate, pales compared to the real thing. I’ve never been too interested in that application of MIDI!”

DeSantis, for his part, made sure to mention how unconventional uses of the audio-to-MIDI feature are producing results that surprised the developers and prove the technology can be pushed beyond just playing a simple MIDI file. “One thing that we hear a lot, and that’s really exciting, is that people often get inspiring results from inaccurate conversions,” DeSantis said. “Some recordings (like full mixes, for example) are outside of what these tools are meant to do. But of course people are converting them anyway, and often end up finding amazing passages of new music that then become the start of a great song. The feature is generally really accurate when used with well-recorded, simple material. But it’s always nice to hear that people are also getting great results by using the feature ‘wrong.’”

Much like with Dr. Rodgers’ example of the Yamaha RM1X, Ableton Live 9 users are using their ingenuity and community discourse to develop techniques for the technology beyond the designers’ imaginations. The user base is already developing its own language and methodology around this feature, constantly bumping into interesting possibilities through continued exploration and experimentation. VanScoten happily quipped, “We all have our ‘happy mistakes.’”

As we pulled onto the interstate, my father-in-law asked me seriously, “So, what is it that you’re doing when you’re pressing buttons on that thing?”

My own father was generally horrified by most forms of electrified and electronic music, so he and I didn’t have much to talk about when it came to the music I consumed and created. When I was 15 or so, he came home early to find me pressing my electric guitar’s pickups against the face of my cranked-up amplifier to generate squalls of feedback, one of my first experiments with “noise music” before I had learned there was an official genre. He was predictably dumbfounded and enraged: “Why would you choose to listen to racket?” Hearing my high school rock band perform with our electric guitars and drums, he commented that the whole thing just sounded like a noisy mess to him. The few times he heard some of the synthesized electronic and dance music I began to explore, he shook his head with disbelief: “It’s fake. It sounds artificial.”

So on this recent late afternoon, when my father-in-law asked me an open-ended question as we drove to a family dinner, I stammered a bit for a response. He had recently attended a show I played at a local bar, performing electronic dance music produced with Ableton Live and controlled with an Akai APC40, a dedicated Ableton hardware controller. He was curious how the ways I pressed buttons, moved sliders, and twisted knobs affected the sounds he heard. Apart from EDM-initiated friends and bandmates, I had never been asked a detailed question about electronic music before — certainly never from someone I considered an “elder.” The question had always been “Why are you doing that?,” never, “How do you do that?” [Note: To be fair, unlike my father, my father-in-law never had to contend with coming through the door after a long day of work to the unrelenting squall of a maladjusted teenager intentionally and unaccountably creating feedback noise with a guitar amplifier.]

While my father-in-law isn’t necessarily a lover of electronic music, he does have an analytic mind and a deep appreciation of computers. He’s worked closely with IT for decades, seeing operating system after operating system succeed one another and appreciating the improvements each new method of computation brought. As I described the powers Ableton Live gave me — chopping audio, rearranging audio, extracting MIDI information, using that MIDI information to drive synthesizers, sampling those synth sounds, etc. — he nodded and asked clarifying questions occasionally about how the computer program accomplished all these tasks. At the end of our conversation, he said with happy wonder, “So you can pretty much make whatever sound you want, huh?” Whether or not electronic music is necessarily his favorite genre, he clearly appreciated the technical victories that made such things possible.

Debates over the artistic validity of reappropriating samples will never die. For every artist who uses chopped audio to create compelling compositions, there’s an academic pundit who calls that process creatively bankrupt and a lawyer insisting it’s a statutory infraction. Using the cast of characters from my personal experience, I attribute this naysaying attitude to people like my father, who need to hear and see a wood-and-metal instrument played with a live human’s fingers or lungs to appreciate sound as legitimate music. I’d prefer more people think like my father-in-law, who appreciates the wealth of opportunities afforded by advancements in technology and will give a listen to whatever resulting pieces strike his fancy, no matter how they were created.

In this ongoing cultural argument, Ableton Live 9 and other software programs that pioneered the relevant technology have found a “middle way.” Beyond simply giving users ways to divide and resequence audio files, this ability to reuse a piece of audio’s musical core, expressed as MIDI data, allows a musician to get all the inspirational benefits of traditional audio sampling with few of the legal and artistic concerns. By using MIDI data instead of the actual audio file, we can remove the tone and timbre from music and use the actual notes to drive our own sounds. We can strip away the skin of sound and use only the skeleton as inspiration. And, with the ability to modify the MIDI note data manually, we can even restructure this skeleton, creating a beast hardly resembling its ancestor. The MIDI extraction process gives us all the inspiration and none of the theft.

Ableton is one company among many that have pursued audio-to-MIDI as a technological advantage. However, Live 9 is unique in that it incorporates this technology into a purposefully fluid, improvisational, performance-oriented workflow. Perhaps more than any other platform currently on the market, it uses audio-to-MIDI conversion to achieve the kind of organic musical technique Dr. Rodgers wrote about in her defense of electronic music. As DeSantis and VanScoten point out, musicians are already experimenting and forming their own styles and modes with the conversion process, pushing it beyond the imagination of the original developers. In this way, the software is leading to a musical discipline that, in keeping with Rodgers’ arguments, is just as tactile, creative, and valid as physical instrument practices like fingerboard placement and proper drum stick grip.

I hope others follow Ableton into this territory and give musicians new, engaging ways to take advantage of the infinitely usable MIDI format. By doing so, these software developers will be doing more than just creating powerful computer programs. In giving users the ability to develop their own musical techniques, they will create new software-based instruments, capable of all the nuance and discipline of their wood-and-metal counterparts.

Special thanks to Dave Smith, Derek VanScoten, and Ableton representatives Cole Goughary and Dennis DeSantis for their assistance and cooperation.