From Beethoven to the Beatles: Composer Tod Machover on enhancing the power of music through tech
Photo credit: Andy Ryan
Tod Machover, called "America's most wired composer" by the Los Angeles Times, has helped lead a revolution in the staid world of classical music since the days he studied at Juilliard and composed music on a mainframe computer—something unheard of at the time. He has consistently pushed the boundaries of classical music through the use of technology and by the sheer force of his creative thinking. When you think of opera, you might not think of live robot singers and an animatronic set, but Machover has used all that and more in his work. And you certainly won’t find anyone else in the classical music world composing a concerto for cheesesteak and orchestra.
Machover has invented hyperinstruments that let people play music by waving their hands or moving their heads, and they’ve been used by musicians ranging from Yo-Yo Ma to Prince and from young children to seniors. He’s also invented a technology that lets people compose sophisticated original music by simply drawing with lines and colors.
Though he breaks boundaries, he’s received worldwide recognition for his work. He was the first recipient of the Time Magazine/CNN World Technology Award for the Arts, has been a finalist for the Pulitzer Prize for Music, and was Musical America’s Composer of the Year in 2016. He is the Muriel R. Cooper Professor of Music and Media at the MIT Media Lab and director of the lab's Opera of the Future Group. His works have been performed by many of the world's most prestigious ensembles and soloists, including the London Sinfonietta, the Los Angeles Philharmonic, the Philadelphia Orchestra, Houston Grand Opera, Yo-Yo Ma, Joshua Bell, and many others.
We spoke with Machover about his formative years in music, how he blends technology with art, the history of music and computers, and his invention of hyperinstruments.
When did you first get involved in music? Was it always classical music, or were you interested in other kinds of music as well?
I started very early. My mom is a musician. She's a pianist, went to Juilliard, and made her career as a very distinguished, really creative music teacher. I began with her on piano, then changed to cello when I was about 7, and performed pretty seriously growing up.
My parents were also quite interested in experimental music. We had a lot of records of early electronic music and Schoenberg, Boulez, Stockhausen, and Cage. So, until I was about 12 or 13, I was only interested in classical music and strange contemporary music.
Then, when The Beatles' "Sgt. Pepper's Lonely Hearts Club Band" came out, a light bulb went off in my head. I thought, "Here's something which is straightforward and beautiful and simple, but it's also very complex." So I got interested in rock music starting in my early teens. I used to put wires and headphones on my cello and play it like an electric bass. I also formed a rock band in high school.
How did you develop an interest in technology?
My dad was an electrical engineer and started out as a designer of electronics components and gyroscopes. Pretty early on, he got involved in one of the first companies that made graphic displays. He was a cartoonist, and he loved images. He had this conviction that computers were not going to be useful if they couldn't be used intuitively.
For him, that meant being able to see information on a screen. In those days, you couldn't even put a line or a letter on a screen. It was all punch cards and then printouts. So he was involved in some of the first companies trying to figure out how to let everyone communicate with computers in the most natural way, by manipulating images on the screen.
When did you first combine your interests in technology and music?
It began in high school when I started wiring my cello and trying to change its sound and do multitrack recording. Then I went to Juilliard to write instrumental music. I started imagining things that were almost impossible to play on instruments or had the kind of combinations of textures that I had heard in the wild electronic music of the time or in the Beatles’ music.
That’s when I realized that learning how to program computers would let me take these things I was imagining and make them real. I learned programming pretty quickly; I guess I had absorbed a huge amount from my dad without realizing it. I started doing things with computers in the studio and then began using computers live pretty soon after that.
When you first started using computers for playing music, how unique was that? Was that fairly common, or were you an outlier?
It was very unusual. When I was at Juilliard, there was no other composer there who had any interest in computers. And it wasn't easy to make music with computers at that time because there was no specialized computer for music. There were very few computers anywhere that you could even make sound on.
Milton Babbitt was teaching at Juilliard then, and he put me in touch with people at City University of New York’s graduate school who were making audio on their big mainframe computer with punch cards as input. I'd go there once a week and type out the punch cards. Then a week later, they'd give me back the reel-to-reel tapes of what I had done. There was also a printout, and I had to go through and debug it by listening and looking, line by line. It was wild. I don’t think there was another young composer in New York at that point who was writing serious music and also doing anything with computers.
Do you remember what programming language you were using?
I think the first programs I wrote were in Fortran. And then Max Mathews, the father of computer music, wrote a language called Music V. It was in Fortran, but it had the basic operators you needed to make and control audio, like an oscillator, which would play at a certain pitch, and waveforms, so you could change it from "Oooo" to "Eeee." It was very simple, but at least you didn't have to do that from scratch. So it was probably Music V and basic Fortran I started with.
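Music V's central abstraction, the unit generator, still underlies software synthesis today. As a rough illustration (a minimal Python sketch of the idea, not Music V itself), a table-lookup oscillator stores one cycle of a waveform and steps through it at a rate set by the desired pitch; changing the stored waveform changes the timbre from a dark "Oooo" toward a brighter "Eeee":

```python
import math

def make_wavetable(harmonics, size=512):
    """Build one cycle of a waveform by summing sine harmonics,
    roughly how early computer-music systems defined stored waveforms."""
    table = [
        sum(amp * math.sin(2 * math.pi * (h + 1) * i / size)
            for h, amp in enumerate(harmonics))
        for i in range(size)
    ]
    peak = max(abs(s) for s in table) or 1.0
    return [s / peak for s in table]  # normalize to the range [-1, 1]

def oscillator(table, freq_hz, duration_s, sample_rate=44100):
    """Table-lookup oscillator: step through the wavetable at a
    rate proportional to the desired pitch."""
    samples = []
    phase = 0.0
    step = len(table) * freq_hz / sample_rate
    for _ in range(int(duration_s * sample_rate)):
        samples.append(table[int(phase) % len(table)])
        phase += step
    return samples

# A dark, mostly-fundamental tone vs. a brighter, harmonic-rich one
dark = oscillator(make_wavetable([1.0, 0.1]), 220, 0.5)
bright = oscillator(make_wavetable([1.0, 0.7, 0.5, 0.4]), 220, 0.5)
```

Writing even this much from scratch in 1970s Fortran, with punch cards as input, gives a sense of why a week-long turnaround per experiment was normal.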
How has playing music on computers changed since the early days?
Before 1983 and 1984, the big work was in getting a computer to produce sound at all: What techniques give you the most interesting sounds and the most flexible control over them?
Then, in 1983 and 1984, the MIDI [Musical Instrument Digital Interface] standard arrived. Miraculously, all the companies like Yamaha, Korg, and Roland that were making musical instruments decided that they would standardize the way these instruments talked to each other and, in turn, talked to any computer. So you could specify in a program what a note was, what the loudness was, what the timbre was, all very easily and clearly. And in one year, things that had only been available in a few places on these gigantic expensive machines were all of a sudden available to everybody. It actually became easier to compose on an early Macintosh computer connected to a Yamaha DX7 digital synthesizer than it was to compose with pencil and paper, sitting at a Steinway grand. You know, you could enter notation, write a little program, and play something back automatically.
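The simplicity Machover describes is real: a MIDI channel-voice message is just a status byte identifying the event type and channel, followed by two data bytes for pitch and loudness, each in the range 0 to 127. A minimal Python sketch of the note-on/note-off byte layout (the function names are mine, for illustration):

```python
def note_on(channel, note, velocity):
    """MIDI note-on: status byte 0x90 | channel, then the note
    number (pitch) and velocity (loudness), each 0-127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """MIDI note-off: status byte 0x80 | channel, note number,
    and a release velocity (0 here)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

# Middle C (note number 60) at moderate loudness on channel 1
msg = note_on(0, 60, 64)
```

Because every manufacturer agreed on this same three-byte vocabulary, any computer that could emit these bytes could suddenly drive any synthesizer.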
But there was a negative side to that. Before things got standardized with MIDI, there were all kinds of complex sounds being made and original ways of thinking about how to write a program to generate really innovative musical structures. Once MIDI came out, certain standard things like regular rhythms and normal scales and things that sounded a lot like traditional music became very easy to do. And then anything that was deviant, that sounded like an experiment, became very hard to do. So the ability to make music on computers became available to everybody, but in a lot of ways, it became very commercial very quickly and less interesting.
Did that lead you to make hyperinstruments?
Yes, because things were getting extremely rigid and commercial and kind of impersonal, and everything sounded the same. So I thought back to the epiphany I had when "Sgt. Pepper's" came out. I realized one reason the music was so great was it was combining all these layers, the Beatles singing and playing but also the sounds of nature and cows and machines, all combined through the magic of studio technology.
I also realized it was the first album that anybody made that was designed only to be a studio album. It was when the Beatles decided that they didn't want to perform anymore. They were tired of it, they were playing in larger and larger stadiums, they couldn't hear each other, and they realized that making this album in the studio with a new kind of tweezers, carefully shaping every sonic detail bit by bit, was a new world.
So, when I heard it, I loved it, but I also thought there was something wrong, because one of music’s major powers is to allow people to communicate—responsively, interactively—at a very deep level. A lot of it is done spontaneously, whether you're performing for a group of people and your performance changes depending on the reaction, or you're jamming with friends or you're experimenting. There are certain things you can do in a studio that are great, but you also have to be able to touch music and react to it and share it.
And so, when I saw what MIDI and the personal computer could do, I said, "Maybe now's the time to have the best of both worlds." You know, is it possible to use all these great things that this new technology can do? But can we also make it respond to what a performer or a composer intuitively wants to suggest, modifying and enhancing in the spur of the moment?
In a lot of ways, it was sort of an extension of what my dad had wanted to do with imagery. I was looking for a way of making it incredibly natural for somebody to use their musical abilities—at whatever level they are, whether they’re Yo-Yo Ma or a kid, a rock star, or a senior with Alzheimer's—to shape a musical environment and do it spontaneously and emotionally and iteratively. You might want to do it in your studio. You might want to do it live on stage. You might want to do it collaboratively. You might want to do it in all kinds of ways.
Basically, that's what a hyperinstrument is. I realized you could write software that would analyze what somebody played on a keyboard, or the way you moved your arm or the way you touched the cello, and interpret that information and then change the results on the other end. So I could change a cello into a whole orchestra, or I could change a melody and vary it and add all kinds of extra layers or ornamentations or even change its direction, like diverting a river.
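The pipeline Machover describes, analyzing what is played and then transforming the result, can be caricatured in a few lines. This toy Python sketch (my own invention, not the actual hyperinstrument software) adds harmony layers to a played melody based on a continuous gesture signal in the range 0 to 1:

```python
def hyper_layer(melody, gesture_energy):
    """Toy hyperinstrument mapping: for each played note (a MIDI
    note number), add extra layers depending on how energetic the
    performer's gesture is right now."""
    out = []
    for note in melody:
        event = [note]              # the note as played
        if gesture_energy > 0.3:
            event.append(note + 7)  # add a fifth above
        if gesture_energy > 0.7:
            event.append(note - 12) # double an octave below
        out.append(event)
    return out

played = [60, 62, 64, 65]  # C D E F as MIDI note numbers
calm = hyper_layer(played, 0.1)     # notes pass through unchanged
intense = hyper_layer(played, 0.8)  # each note becomes a small chord
```

The real systems are far more sophisticated, of course, but the principle is the same: the software interprets the performer's input and reshapes the musical result in response.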
I call them hyperinstruments because I want them to have some of the essential qualities that music instruments have. Music instruments are some of the most sophisticated technologies that exist because they allow deep intentions and feelings to come out through our bodies. If you play an instrument, you're not thinking about where your fingers are or how your right hand and your left hand coordinate. It becomes second nature so that your body can translate what your intention is.
Do hyperinstruments always start with the musician playing a traditional instrument, or are there other ways of playing them?
Both. You can extrapolate from a traditional instrument. Let's say somebody has had a stroke or suffers from cerebral palsy, or they're a child and they've only taken three lessons. You can look at what they're trying to do and get a pretty good idea of where the next note or rhythm is supposed to be. You can take what they're doing and make it better without it sounding artificial.
Hyperinstruments can turn anything into music. They’re limitless. So I could wave my arms or I can smile or grimace and turn that into sound. We have software called Hyperscore, designed originally for 5- to 12-year-olds. You draw, and it looks at the shape of a line and its color and turns that into compositions. Push a button and it transcribes the pieces into musical scores that real musicians can play. It’s very easy to learn, but it's quite sophisticated. So we have young kids write pieces for symphony orchestras or rock bands.
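The core idea of turning a drawing into notes can be sketched very simply. This toy Python example (a guess at the flavor of the mapping, not Hyperscore's real algorithm) treats a drawn line's horizontal position as time and maps its height onto a major scale:

```python
def line_to_melody(points, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Toy drawing-to-music mapping: sort the drawn points left to
    right (time order) and convert each point's height into a scale
    degree, so the visual contour becomes a melodic contour."""
    base = 60  # middle C as a MIDI note number
    melody = []
    for _, y in sorted(points):
        degree = int(y) % len(scale)
        octave = int(y) // len(scale)
        melody.append(base + 12 * octave + scale[degree])
    return melody

# A steadily rising drawn line becomes a rising scale-wise melody
rising = line_to_melody([(0, 0), (1, 1), (2, 2), (3, 3)])
# → [60, 62, 64, 65]
```

Constraining the output to a scale is one reason such systems can sound musical even to a first-time user: whatever shape is drawn, the notes land on pitches that work together.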
We've talked about using computers to express yourself and to change the way music is made, but we haven't talked about computers composing music. What do you think about the state of AI-created music? Do you think AI will ever supplant human composers?
I just heard an NPR interview with the writer Yuval Noah Harari, who wrote the book "Sapiens" and a new one called "21 Lessons for the 21st Century." And he said that AI is advancing very quickly, but what hasn't happened is the development of AI that has any sense of consciousness—that is, consciousness as feeling or intention.
I think he’s right. Art is about sharing experiences and meaning. And I don't see computers being very good at that, ever. I don't see any evidence that technology is getting good at caring about something or understanding it on a human level.
One final question: What are your three favorite pieces of music, aside from your own?
One would be by Beethoven, one by Bach, and one by the Beatles. For Bach, I’d say the Mass in B minor, which I think is maybe the best single piece ever written. For Beethoven, I'd say the String Quartet No. 14 in C sharp minor, Opus 131, one of his last pieces and where he took classical music into a kind of imaginary, spiritual realm that really nobody else has ever reached. And for the Beatles, there are a lot of pieces to choose from, but I’d say "A Day in the Life."
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.