The Final Instrument

What is the impact of an instrument that can create or replicate any conceivable sound? The synthesizer has challenged the music world since its invention in the 1950s. As technological advancements accelerate, musicians like me must reconcile them with our own creativity.

A Boy, a Battery, and a Bathtub

I survey a dizzying array of knobs and buttons, face-to-face with my newest creative challenge: the synthesizer. It’s an instrument that produces sound waves electronically. At their core, sounds are vibrations that can be visualised as waves. The more complex the sound, the more complex the wave.
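To make that concrete, here is a minimal sketch in Python with NumPy and SciPy (my own illustration, not any particular synth’s code): a lone sine wave is about the simplest vibration there is, while a sawtooth stacks many harmonics into a noticeably more complex wave.

# A simple sound versus a complex one, rendered as raw waveforms.
# Illustrative only; the file names, pitch, and duration are arbitrary choices.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100               # samples per second
DURATION = 2.0                    # seconds of audio
FREQ = 220.0                      # pitch in Hz (the A below middle C)

t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Simple sound: a single sine wave.
sine = np.sin(2 * np.pi * FREQ * t)

# More complex sound: a sawtooth approximated by its first 20 harmonics.
saw = sum(np.sin(2 * np.pi * FREQ * n * t) / n for n in range(1, 21))
saw = saw / np.max(np.abs(saw))   # normalise to the -1..1 range

# Write both to disk so the difference can be heard as well as seen.
wavfile.write("sine.wav", SAMPLE_RATE, (sine * 32767).astype(np.int16))
wavfile.write("saw.wav", SAMPLE_RATE, (saw * 32767).astype(np.int16))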

Given an exhaustive (dizzying) set of controls (knobs and buttons), a synth player can manipulate the resulting sound waves seemingly without end. This means, in theory, that the synthesizer is capable of creating or replicating any conceivable sound…

…no pressure.

In any hobby or practice, there are precipices that test us. We reach a cliff, overlooking a new challenge. This alien instrument is my latest test. If I played piano, this particular synth’s 37 white and black keys might afford me some comfort, but I’m not a pianist. Any stroke of excellence I’m able to produce with the instrument will surely be a fluke. Yet, many great creations were just that: accidents.


1876 – Highland Park, Illinois

Elisha Gray would have been the inventor of the telephone if he hadn’t waited until the afternoon to visit the patent office, allowing Alexander Graham Bell to beat him to the punch that morning. Maybe Gray slept in that day, or maybe he got into an argument with his wife, Delia. Perhaps his work on the patent application was slowed when he found some of the prototype’s pieces were missing. Gray found the missing electrical components in the hands of his nephew. But before snatching them away, he would have heard something. Something barely audible, and not quite natural. As the story goes, Gray’s nephew had attached one end of a battery to the bathtub and was holding the other end. The boy discovered that with this setup, rubbing his hand against the bathtub produced a faint hum. The harder he rubbed, the louder the hum. This discovery served as the basis for an invention Gray called the “Electro-Harmonic Telegraph.” In a nutshell, it featured piano keys that triggered electric currents to vibrate small electronic components called oscillators. The vibrating oscillators produced tones in proportion to the electric currents they were fed. With that, the world owed its first electric instrument to a boy tinkering with cutting-edge tech in the bathtub.

Lessons in Creativity

When I was five, I’d pull the pots and pans from our kitchen cupboards and bang on them with wooden spoons, images of rockstar drummers behind 14-piece kits flashing through my mind. My parents took this as a sign to put me in drum lessons, which I continued throughout most of my childhood. I was a drummer, but the true beginning of my journey creating music came much later.

In middle school, my drum teacher moved away, and I didn’t feel particularly driven to find a new one. Drumming didn’t feel exciting anymore; it felt like work. After all, it wasn’t metronome-bound rhythm exercises that got me to pull out the pots and pans years ago; it was the colour and eccentricity of rockstars. I didn’t find quarter notes, time signatures, and paradiddles particularly musical.

As a pre-teen, one of my favourite pastimes was fiddling with my dad’s iPad. I don’t remember exactly how I got there (curiosity, boredom, maybe even just an accident), but I opened an app called GarageBand. The app prompted me to select from a collection of instruments. My selection brought me to a virtual drum kit.

GarageBand’s default acoustic drum kit

I felt my eyes widen. Tapping each of the drums and cymbals, I heard their sounds play back through the iPad speakers. Wonder and excitement washed over me. It felt like the first time I sat down at a real drum kit nearly a decade ago. I tapped and tapped; something about the digitized drums had me entranced. Hours passed.

This was the first time I can remember reaching this level of hyper-focus. When I get into this state, I’m usually doing something creative, most often when I’m making music. Some magnetic, creative idea will pull me in and quietly engulf my awareness, then suddenly I regain consciousness hours later with a two-minute piece of music in front of me. Waking up from these flow states, I only have hazy memories of having put the music together. It’s not me creating; it’s like the music already exists in some other realm, and chooses to enter our world through my creative daze.


c. 1938 – New York City

Robert Moog found it difficult to focus during his piano lessons. It was only at the behest of his mother that he kept practicing. His real interests lay in radio technology. He grew up in a time when engineers were discovering how to harness those powerful, invisible waves. In his spare time, Moog was either reading an electronics magazine or on a trip to Radio Row, a one-stop shop for parts and equipment, with his father, an engineer.

Outside Heins and Bolet, Radio Row’s oldest store, 1936.

He picked up his father’s knack for drafting and building early in life. In one magazine, Moog discovered schematics for a theremin. The instrument is named after Leon Theremin, the Russian physicist who invented it while researching proximity sensors for the Soviet Union. Unlike most instruments, the theremin is played without being touched. It generates tones based on input from a sensor; players manipulate the sound by moving their hands around it.

Leon Theremin demonstrating how his invention is played in 1954.

By the age of 14, Moog had built his own theremin, and after graduating high school, he drew up his own schematics for the instrument. He went on to sell theremins throughout college with the help of his dad. Moog gained a reputation as a skilled engineer and physicist, but would be the first to admit he was a shoddy businessman. He started building instruments as a hobby, and couldn’t have cared less about the purpose of a balance sheet. He kept his business afloat by paying close attention to suggestions from his target market: musicians.

While Moog continued to build and sell his creations, the RCA Mark II Sound Synthesizer was gaining attention as a leap forward in electronic instruments. It was designed and built by engineers at the Radio Corporation of America (RCA) and was the pinnacle of its field. For Moog, it was too big (about seven feet tall and 20 feet wide), too expensive to make (costing over $500,000), and too complex to use. It was a scientific marvel, yes, but the average musician had little hope of composing anything with it. Put a professional pianist in front of the Mark II and, for all they knew, they were looking at an alien spaceship’s motherboard. The Mark II’s labyrinth of knobs and cords was entirely unmusical. While it seemed RCA’s fifteen minutes of fame were ticking away, Moog worked on his own design for the instrument.

Waking the Music

Throughout my teens, I continued exploring GarageBand, experimenting with other instruments and creating full songs. Eventually, I upgraded to a full music creation program (called a Digital Audio Workstation, or DAW) on my desktop computer: FL Studio, which I still use. These days, the DAW’s interface is like wallpaper to me. I can intuitively work its countless tools and plugins, having pushed and pressed the familiar sliders and buttons thousands of times. But this familiarity took time.

The first time I fired up FL Studio was about six years ago, and it discouraged me. I sat with my brow furrowed at my computer, scanning the windows and menus, searching for any hints of recognition. Nothing. So, naturally, I started clicking at random, perhaps hoping I’d stumble upon the option to change the software’s difficulty level to “easy” (that’s not a feature, as it turns out). I kept clicking. I knew there was great music hiding somewhere in the black box. Finally, I pressed a button and onto my screen popped a step sequencer. Aha, I thought to myself, I know what this is! Step sequencers are used to arrange sounds in a grid-like fashion, similar to sheet music.

FL Studio’s step sequencer

It was the main tool I used to make songs in GarageBand. I clicked together a pattern using the program’s default drum sounds. Kick. Snare. Kick, kick. Snare.
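Under the hood, a pattern like that is just a grid of on/off steps for each sound. Here’s a toy sketch of the idea in Python; it’s my own illustration, not FL Studio’s or GarageBand’s actual code, and the sound names and eight-step length are arbitrary.

# A toy step sequencer: each row is a sound, each column one of 8 steps.
# A 1 means "trigger this sound on this step".
PATTERN = {
    "kick":  [1, 0, 0, 0, 1, 0, 1, 0],
    "snare": [0, 0, 1, 0, 0, 0, 0, 1],
}

def sounds_on_step(step):
    """Return the names of the sounds that should trigger on a given step."""
    return [name for name, row in PATTERN.items() if row[step % 8]]

# Walking the grid once plays roughly: kick, snare, kick, kick, snare.
for step in range(8):
    print(step, sounds_on_step(step))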

Nearly an hour into using my new DAW, I’d figured out how to do something that would have taken me less than a minute in GarageBand. Despite that, I felt less discouraged than I did an hour ago. Proudly listening to my special drum beat, I thought to myself, it’s simple, but it’s mine, so I love it.


1964 – New York City

Musicians, scientists, and engineers attending the Audio Engineering Society convention saw the introduction of the Moog Synthesizer. Moog’s invention was controlled by a keyboard, and was dramatically smaller and cheaper to make than the RCA Mark II Sound Synthesizer. While much of the design still featured a motherboard-esque interface, musicians were pulled in by the keyboard. Without knowing what specific controls did, adventurous musicians could aimlessly twist knobs and patch cords to experiment, while the familiarity of piano keys kept them oriented. With enough experience, users could create unique and specific sounds with ease. The Moog Synthesizer was beginning to make knobs and cords as musical as guitar strings and piano keys.

In the bustling crowd, Moog met Wendy Carlos. Carlos was an innovative composer, and when she looked to the future of music, she saw the reign of electronic instruments. She was well versed in physics and electronics, but understood that most musicians were not and, in fact, could not care less about such things. So, Carlos pitched Moog more ways to make his creations less technical and more musical. There was beautiful, important music slumbering within electronic instruments, but if a larger world of musicians were going to hear it, they’d need to continue stripping away the barrier of buzzing mathematical circuits and snaking cords.

In its early years, the synthesizer was confined to experimental music scenes. Though Moog worked to create a design that was appealing to musicians, most still wouldn’t learn of the instrument for another few years, and many who did learn of it just weren’t interested. The music world of the ’60s had just warmed up to the electric guitar, and did not seem ready to embrace another spin on a classic instrument. Still, forward-thinking musicians like Wendy Carlos experimented.

Wendy Carlos saw an early prototype of Moog’s synthesizer before its unveiling and gave him a laundry list of suggestions to help tailor the instrument to musicians like her. After its official release, Carlos mastered the Moog and dreamt of launching the synthesizer into mainstream music. In 1967, she recorded a collection of works by Johann Sebastian Bach on the Moog. Carlos imitated the acoustic qualities of the original pieces’ orchestral instrumentation, creating fully synthesized renditions. She released the album Switched-On Bach the following year. 

Album cover for Switched-On Bach, 1968

Some classical purists interpreted the album as a needless perversion of historic music, but it received high praise overall, winning three Grammy Awards and selling over a million copies.

“No Synthesizers!”

Six years, countless hours of trial and error, and quite a few YouTube tutorials later, I’ve gotten pretty good at using FL Studio. What once was a black box has become a comfortable grey. It’s where I go to express and experiment. It’s also where I go to grow. I’ve had a lot of interests over the years, but none have nested in me like music has. There are few things whose progress I can look back on with the satisfaction I feel toward my DAW. I still play the drums, and I’m okay at them, but my home studio, with its digital drum kits, pianos, guitars, and plugins, is my instrument of choice.

Playing drums fills me with a different feeling than clicking together a drum pattern on my computer, and it’s a feeling I like. There’s something special about hearing instrumentation I know came from a human being that’s feeling every bit of it as they play. Sometimes I wish I could put my heart into the music I make in that way. Yet, I cherish the blank canvas digital music production offers me.


1965 – London, England

Most people wouldn’t rush to put their dentist on the guest list for a dinner party. But George Harrison had become good friends with his, and did just that. Also on the guest list: John Lennon. After the group had enjoyed dinner, the dentist slipped a special ingredient into their evening coffee, and it wasn’t fluoride. The Beatles were no strangers to mind-altering drugs, but LSD was a whole new level.

It’s hard to say just how much The Beatles’ alleged drug use was responsible for their boundary-pushing sound. After all, they were an experimental bunch, even before the drugs. When the synthesizer stepped onto the scene, their minds were open to it. Synths were featured all over their eleventh studio album, Abbey Road. You’d think the biggest band in the world using synths on arguably their most famous album would banish any skepticism artists had left for the instrument, but some noses remained turned up.

As the ’70s began, a new band formed a few cities over from The Beatles and stole the rock genre’s crown with the release of their self-titled album. Queen pushed the genre’s boundaries with an exciting and theatrical sound. They experimented with different genres, vocal deliveries, and song structures, but drew the line at synths. If you bought a Queen record at the peak of their popularity, you’d find written on the inside sleeve a note that read, “No Synthesizers!” The band’s guitarist, Brian May, had mastered creating wild sounds with his electric guitar. He felt that synths were so stiff and mechanical that it was difficult to put any emotion into playing them. The rest of the band agreed, and didn’t want any of May’s innovative playing to be mistaken for the work of such a lifeless instrument.

Perhaps, at the time, it was hard to anticipate just how much the synthesizer would saturate the music world. Queen eventually embraced the synth, and in 1979, Michael Jackson released Off the Wall. It was the King of Pop’s reign now. Jackson’s breakout album had synths on nearly every track. It was no longer necessary to rely on electric guitars or familiar keyboard sounds to create striking melodies and textures. The instrument was cemented in popular music.

Fireflies Friday

In my early teens, I had a paper route. Twice a week, I’d load up my wagon, pop in my earbuds, and spend about two hours delivering papers up and down my street. I listened to a ton of hip-hop music and fell in love with the genre’s use of sampling: taking a section of audio from another source and using it in a new work. Breathing new life into an existing piece of music was exciting and inspiring to me. I’d scour the small sample libraries on GarageBand, searching for a classic jazz or soul sample I could flip in true hip-hop fashion. To this day, I enjoy taking popular or nostalgic songs and remixing them.

A few years ago, I fell in love with the song “Fireflies” by Owl City. The plucky synth that begins the song is unmistakeable, and it’s hard not to anticipate the first few lines and sing along. I’ve always been drawn to artists like Adam Young (the man behind Owl City) who produce and record their own songs. Tracks like “Fireflies” paved the way for genres like Bedroom Pop and Hyperpop. The song’s lyrics describe Young’s struggle with chronic insomnia, something I’ve also dealt with. The sound, the story, the cultural impact: it all hit close to home. I had to express it somehow, so I ripped the song from YouTube and got to work. One creative coma later, I proudly listened back to my remix. But one remix could hardly express how special the song was to me. So, the next week I made another… and another… and another. For six weeks, I released a new “Fireflies” remix every Friday. When I finally felt satisfied, I called the project “Fireflies Friday,” and moved on.

When I would tell people about my special little project, I found that they rarely shared my excitement. And not just because of my choice of song. I get it. “Fireflies” gets a lot of flak for being a cheesy, old, overplayed song. That I can live with. But I felt I was expressing how much I loved the song with each installment of Fireflies Friday, and to others that expression was overshadowed by the cultural weight of the original. Sampling and remixing are often seen as lesser forms of creativity for this reason.


With 13 Grammy Awards and countless nominations under his belt, Pharrell Williams is a strong force in the music industry. He is praised for his musicianship and production talent across a wide range of genres. When Pharrell has something to say, the music world listens. Two of his Grammys came from his feature on Daft Punk’s 2013 hit “Get Lucky.” When asked about the growing popularity and influence of Electronic Dance Music (EDM), he said, “Who’s responsible for EDM? Daft Punk are.” Pharrell credited the electronic music duo for defining a massive genre. Daft Punk’s genius came from their intermingling of sampling and synthesized instruments.

The duo popularized the use of tools that blend vocals with synthesizers, like vocoders and the talk box. The talk box sends sound from a synth through a tube the player puts in their mouth. This, in effect, replaces the player’s voice with their synth performance, as the mouth does the work of forming vowels and enunciating consonants. One of the most notable uses of the talk box is in Daft Punk’s “Harder, Better, Faster, Stronger.” Vocoders, now most often digital plugins, analyze vocal input and allow users to replace the tonal quality of their vocals with any sound they choose (typically a synth). This cuts out much of the live performance aspect required by the talk box and gives producers more control over the resulting sound.
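For the curious, here’s a very rough sketch of the vocoder idea in Python with NumPy and SciPy. It’s my own toy illustration of the concept, not Daft Punk’s setup or any particular plugin: the voice’s shifting spectrum acts as a gain on a synth carrier, frame by frame, so the synth ends up “speaking.”

# A crude "vocoder": impose the voice's spectral envelope on a synth carrier.
# Toy illustration only; real vocoders use carefully tuned filter banks.
import numpy as np
from scipy.signal import stft, istft

def toy_vocoder(voice, synth, sample_rate, nperseg=1024):
    """Shape the synth (carrier) with the voice's (modulator's) spectrum."""
    n = min(len(voice), len(synth))
    _, _, V = stft(voice[:n], fs=sample_rate, nperseg=nperseg)
    _, _, S = stft(synth[:n], fs=sample_rate, nperseg=nperseg)
    envelope = np.abs(V)                 # how loud the voice is in each band, over time
    envelope /= envelope.max() + 1e-9    # normalise so it acts as a 0..1 gain
    _, out = istft(S * envelope, fs=sample_rate, nperseg=nperseg)
    return out / (np.max(np.abs(out)) + 1e-9)

# Usage sketch with synthetic signals (a real use would load .wav recordings):
sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
voice = np.sin(2 * np.pi * 3 * t) * np.sin(2 * np.pi * 200 * t)  # stand-in "voice"
carrier = np.sign(np.sin(2 * np.pi * 110 * t))                   # square-wave synth
robot = toy_vocoder(voice, carrier, sr)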

The duo’s innovative sound nudged electronic music into the spotlight and set a precedent for artists embracing new synth technology in their music. It’s ground-breaking moments like this that acclimated many musicians to technological advancements in their field. One such advancement was born of the Music Technology Group at Barcelona’s Pompeu Fabra University, in partnership with Yamaha. The group set out to create a piece of software that would fully synthesize vocals, requiring only notes and written lyrics as input from the user. In 2004, Vocaloid hit the market. The tool exploded in Japan, where one of the voice options users could later select was a Japanese girl named Hatsune Miku. Though not to the same degree, Vocaloid eventually saw some use in the West as well. American electronic artist Porter Robinson featured AVANNA, another voice option, heavily on his breakout album, Worlds. Fans were infatuated with the album’s nostalgic, video game-esque feel, to which AVANNA contributed a great deal.

The Final Precipice

It was about two years ago that I sat down at a synthesizer for the first time. Again, I found myself at a cliff, overlooking my greatest creative challenge so far: a small budget synth. I was tempted to slouch backward into the same attitude Queen had in the ’70s. I’m happy with the sounds I’m working with already, thank you! But another voice gave me a familiar reminder: there’s great music hiding in this black box. I pressed the keys, and heard the default sawtooth wave play back to me in different pitches. I turned the filter knob, and heard it shave off the wave’s harsh, high frequencies in real time. Of all the synth’s controls, the filter was the only one I had much familiarity with. The rest of the switches, knobs, and buttons are where my adventure began.
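That first experiment (press a key, hear a sawtooth, turn the filter down) can be sketched in a few lines of Python with SciPy. This is my own rough illustration, not that synth’s actual circuitry, and the pitch and cutoff values are arbitrary.

# A sawtooth note, then a low-pass filter that "shaves off" the harsh highs.
import numpy as np
from scipy.signal import sawtooth, butter, lfilter
from scipy.io import wavfile

SAMPLE_RATE = 44100

def saw_note(freq_hz, seconds=1.0):
    """Generate a raw, buzzy sawtooth wave at the given pitch."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    return sawtooth(2 * np.pi * freq_hz * t)

def low_pass(signal, cutoff_hz):
    """A 4th-order Butterworth low-pass: a software stand-in for the filter knob."""
    b, a = butter(4, cutoff_hz, btype="low", fs=SAMPLE_RATE)
    return lfilter(b, a, signal)

raw = saw_note(220.0)                    # bright and harsh
darker = low_pass(raw, cutoff_hz=800.0)  # "knob" turned down: softer, rounder
darker /= np.max(np.abs(darker))         # keep the output in the -1..1 range
wavfile.write("saw_filtered.wav", SAMPLE_RATE, (darker * 32767).astype(np.int16))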

Through experimenting with different physical and digital synthesizers, I’ve come to depend on their versatility. If there’s a ceiling for what’s possible with the instrument, I’ve never glimpsed it. I doubt anyone has. The ceiling is likely rising faster than we can keep up. With the accelerating development of AI technology, its overlap with synthesis was inevitable. Synplant is a software synth that uses AI to synthesize variations of any sound you feed it. Audimee is a service that can convert your vocals into a whole host of AI-synthesized singers. As the list of tools grows longer and denser, musicians’ attitudes swing back and forth like a pendulum. Some advancements are praised for capturing the creative spirit, and some are lambasted for failing to capture it at all.

I sit with my synths and experiment almost every day. I make use of tools that excite me. While their capability and complexity may grow, that growth will never snuff out my creative spirit. Even if what I’m capable of becomes borderline elementary compared to what some future AI musician can synthesize, I’ll be able to think to myself: it’s simple, but it’s mine, so I love it.

Liam Nikkel

Liam’s home studio is his favourite place to be. He likes to think outside the box and tell stories through music, videos, writing, and graphic art. He enjoys pastel colour palettes, pretty paintings, his friends, and the smell of vanilla.