Synthesizer 101: Basics of Synthesizer Programming

author: Cavalcade date: 09/25/2013 category: the guide to
So, you want to make electronic music. Maybe you just want to add an electronic backdrop to your guitar compositions. Maybe you're sick of guitar music and want to take the plunge into electronica. Or maybe you're just curious how the likes of Jens Johansson or Jordan Rudess program their synth sounds. Either way, if you're not already familiar with audio synthesis, you probably have a ton of questions, and I have a ton of answers. I mean really, just look at how long this article is. See the scroll bar off to the right? Haha. Have fun. Let's start with the big one:

Omg what the hell is an article like this doing on Ultimate-GUITAR?

Let's go back a paragraph, to the one that, you know, you just read. (Right?) Every once in a while, a question about electronic music comes up on UG, either because someone really wants to get into it (for some reason), or (more often) because some ignorant scrub ran their mouth off about "techno" or "dubstep" for the millionth time. This is a guitar site, yeah, but anyone who's been to the Pit knows it's not *just* a guitar site. Especially for recording guitarists, it's important to shed some light on peripherally related music topics, and this is one of them.

How sound works

What your ears hear, and what your brain interprets as sound, is a pattern of what we call traveling waves moving through the air. You know, traveling. Through the air. Your ear picks up patterns of compressed or stretched ("rarefied") air, and your brain interprets those patterns to turn you into a blubbering mess when it hears "My Heart Will Go On." Like dis if you cri everytim :'(. In audio electronics, those patterns are encoded as digital values, or sent as analog voltages. Either way, a wave can be plotted on a graph as a function of amplitude versus time, called a waveform. You know what these look like. If you don't, open up your favorite song in Audacity. Then zoom in. Then zoom in. Then keep zooming in. That's a waveform. Then zoom all the way out. That's the whole waveform. It's the same thing; zooming in just shows a smaller part of it.

Physical instruments (like a guitar) make sound by making the air around them vibrate. They mostly do this either by making a string vibrate and transferring that vibration to the whole instrument, and then the air (guitars, violins, and pianos), or by passing air through them and restricting how that air vibrates (trumpets, flutes, didgeridoos and other wind instruments). An electric guitar takes that string vibration and turns it into a signal, which is amplified and turned back into sound by a speaker. Audio synthesis is a bit like that, except we build the signal electronically.

Over two hundred years ago, a French guy named Joseph Fourier was studying hieroglyphs in Egypt when he suddenly created a field of science that had absolutely nothing to do with his work. The people paying him to do things that had nothing to do with what we're talking about were not amused. Anyway, Fourier showed that any waveform could be created by adding together sine waves at multiples of the same frequency.
So, say, a sine wave at 100Hz, plus others at 200, 300, 400, etc., could make any type of waveform (at 100Hz), if you shifted them and set each one to the right loudness. We call the lowest one the fundamental. Say you play the fifth fret on the high E string; that's an A4, and the fundamental will be 440Hz. You'll hear (subconsciously) other frequencies, or overtones, at 880, 1320, 1760, 2200, and so on. In synthesis, even though you can't see them in a waveform, overtones are important, since they define the timbre, or character, of a sound. It's what separates a horn from a violin, or a bright piano from a harp. Or an amp's crunch channel from the lead channel. Just remember: a note at a certain frequency is made up of sine waves at that frequency, twice that frequency, three times that frequency, and so on until you need to upgrade your computer because you're making music for bats. It's a niche market, sure, but a totally untapped one. There's probably plenty of money in it.
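If you want to hear Fourier's idea in action, here's a quick Python sketch that builds a sawtooth-ish wave by stacking sine harmonics. This is my own toy code, not anything out of a real synth; the function name and the numbers are made up for illustration.

```python
import math

def additive_saw(t, f0, n_harmonics):
    """Approximate a sawtooth by summing sine waves at multiples of f0.
    Harmonic k sits at k * f0 Hz with amplitude 1/k (the Fourier series
    of a sawtooth wave)."""
    return (2 / math.pi) * sum(
        (-1) ** (k + 1) * math.sin(2 * math.pi * k * f0 * t) / k
        for k in range(1, n_harmonics + 1)
    )

# A4 = 440 Hz: one harmonic is a pure sine; 50 start to sound like a saw.
one_cycle = [additive_saw(i / 44100, 440.0, 50) for i in range(100)]
```

With n_harmonics at 1 you get a plain sine; crank it up and the wave sharpens toward a proper sawtooth, one overtone at a time.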


Subtractive synthesis

There are different types of synthesis, but the easiest one to learn is subtractive synthesis, and basically every synth has some capacity for it. Subtractive synthesis works by starting with a really simple waveform, made from a basic geometric shape, and processing it until it sounds like what you want. Back before software synthesis, these waveforms were made with circuits called oscillators, and even though that's not really how it works in software, we still call the starting points of subtractive synthesis "oscillators." To do digital software synthesis, you'll need a synthesizer, along with a host program (probably a DAW, or digital audio workstation, like Cubase, FL Studio, or Logic) to run it in. If you have a DAW already (Audacity doesn't count), it'll probably come with a few synthesizer plugins out of the box. I usually use Image-Line's Sytrus through FL Studio, but most subtractive synths have all the most important features. For example, most of them can make five basic waveforms with their oscillators:
  • Sine wave: The softest sound possible. No overtones, just a fundamental.
  • Triangle wave: Looks like a triangle on a graph. Slightly heavier than a sine; a good starting point for, say, harp sounds.
  • Sawtooth wave: Grittier than the triangle. Looks like... You can probably guess. A rising ramp that sharply drops back to zero when it gets to the top.
  • Square wave: A pulse that jumps back and forth between the positive and negative peaks. Looks like a row of rectangles on a graph. The harshest of the basic waveforms.
  • Noise: Oh... Uh... I lied about the square being the harshest. Most oscillators have an option to just make pure noise, which sounds like radio static. You can do so much stuff with this, like simulating drums and cymbals.

Now, the real fun begins.
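If code makes more sense to you than graphs, here's how those five shapes look as plain Python. It's a toy sketch of my own (real synths compute something in roughly this spirit per sample, but these names are mine); phase runs from 0 to 1 over one cycle.

```python
import math
import random

def sine(phase):      # no overtones at all; just the fundamental
    return math.sin(2 * math.pi * phase)

def triangle(phase):  # ramps down, then back up; a handful of soft overtones
    return 4 * abs(phase - 0.5) - 1

def saw(phase):       # climbs steadily, then drops off a cliff
    return 2 * phase - 1

def square(phase):    # slams between the positive and negative peaks
    return 1.0 if phase < 0.5 else -1.0

def noise(_phase):    # ignores the phase entirely; pure radio static
    return random.uniform(-1.0, 1.0)
```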


    A pure sawtooth wave sounds like ass. No, really: some of the lower-quality saw-based brass sounds literally sound like a person farting. To fix that, let's get to the part of subtractive synthesis that makes it subtractive: the filter. A filter works by quieting all the frequencies above and/or below a certain spot, called the cutoff frequency. Filters mostly affect the overtones, which, again, are the reason every instrument doesn't just sound like a boring sine-wave "beeeeeeep," so they can change a sound's timbre dramatically. There are four main types of filter:
  • High pass: Cuts frequencies below the cutoff. Makes the sound thinner.
  • Low pass: Cuts frequencies above the cutoff. Makes the sound seem distant.
  • Band pass: Cuts frequencies above and below the cutoff.
  • Notch: Cuts frequencies near the cutoff.

    Fun fact: a low pass filter doesn't act like the air (making a sound seem distant); the air acts like a low pass filter. Air (like anything else sound passes through) absorbs higher frequencies faster, so a sound gets more and more muffled the further you are from its source (like a speaker). That's why when someone drives by playing trap music out of their car speakers, all you hear is the bass.

    There's also a resonance knob, which boosts the cutoff frequency, and other frequencies close to it, making them stand out more. You won't notice it that much if the cutoff frequency is constant, but here's the thing:
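Filters get treated like magic, but a crude low pass is only a few lines of code. Here's my own toy one-pole version (real synth filters are fancier, with resonance and steeper slopes, so treat this as a sketch of the idea, not the real thing):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Each output sample leans toward the previous one, smoothing out the
    fast wiggles (the frequencies above the cutoff). The closer `a` is
    to 1, the darker the result."""
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, prev = [], 0.0
    for x in samples:
        prev = (1 - a) * x + a * prev
        out.append(prev)
    return out

# A steady value (pure bass, basically) sails through...
smooth = one_pole_lowpass([1.0] * 1000, 1000.0)
# ...while the fastest possible wiggle gets squashed hard.
buzzy = one_pole_lowpass([1.0, -1.0] * 500, 1000.0)
```

A quick-and-dirty high pass, by the way, is just the input minus the low-passed output.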

    You can change controls automatically

    Yeah, that. There are two main ways to make a control (like an oscillator's volume, or a filter's cutoff) change automatically over time. LFOs, or Light Funky Oscillators- sorry, Low Frequency Oscillators- are oscillators that oscillate at a low frequency. Like, not high enough to make an audible pitch; a few times per second, or less. You can change an LFO's waveform, leading to different types of behavior: a square wave will snap between fully on and fully off, while a sine wave will smoothly sweep between the two extremes. For example, a sine LFO hooked up to the filter cutoff makes a simple dubstep wobble. Don't get too cocky, though; the big names use FM synthesis, which is- just don't go there. It's insane.

    Envelopes are control curves triggered when a note is played. They usually have four phases, defined with four values you can enter:
  • Attack: When a note is played, this is the time that the envelope takes to climb to full value.
  • Decay: Once the attack phase is finished, this is the time the envelope takes to go back down, to the "Sustain" value.
  • Sustain: While the note is still being played, this is what the envelope stays at.
  • Release: Once the note is released, this is the time it takes to fully die away.

    A good synth will let you edit the envelope more closely, or even have it cycle back to the beginning during the sustain phase. An envelope on the volume can help a synth model real instruments: synth strings or winds will have a noticeable attack and release, while plucked instruments have an instant attack, zero sustain, and a slow decay. How LFOs and envelopes are routed to different controls depends on the synth you're using, but any decent one will give you some way to hook them up to basic parameters like volume and cutoff.
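Here's the ADSR idea as a toy Python function. It's my own sketch; real synths also handle edge cases like releasing a note mid-attack, which this happily ignores.

```python
def adsr(t, note_length, attack, decay, sustain, release):
    """Envelope value (0 to 1) at time t, for a note held for note_length
    seconds. attack/decay/release are times in seconds; sustain is a level."""
    if t < attack:                        # climb to full value
        return t / attack
    if t < attack + decay:                # fall back down to the sustain level
        return 1.0 - (t - attack) / decay * (1.0 - sustain)
    if t < note_length:                   # hold while the key is down
        return sustain
    return max(0.0, sustain * (1.0 - (t - note_length) / release))  # die away

# A plucked sound: near-instant attack, slow decay, zero sustain.
pluck = [adsr(i / 100, 1.0, 0.005, 0.6, 0.0, 0.05) for i in range(120)]
```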

    Advanced-level $#!+

    A good synth will have a ton of other controls to alter its sound even more. Here are a few:
  • If a synth has more than one oscillator, you can detune them to different frequencies, either in hertz (cycles per second), semitones (1 semitone = 1 fret), or multiples of the base frequency. Detuning a lot (coarse detuning) creates harmonies; detuning a little (fine detuning) produces a flanging effect.
  • Portamento is an automatic slide between two notes. You control how long that slide takes; slower slides are more obvious.
  • An arpeggiator jumps back and forth between different intervals (mostly octaves) while a note is played. Done really fast, this is a big part of 8-bit/retro gaming sounds. The Algorithm and Machinae Supremacy are two acts that love this.
  • Some synths have pitch vibrato built in, which slowly makes the pitch waver. If not, you might be able to change the master pitch with an LFO.
  • Most modern synths have some sort of layering control, like Unison. This tells the synth to make more than one voice at a time, at different frequencies, volumes, and places in stereo, so that it creates a whole ensemble of synths. This is how dance music producers get massive, crushing walls of huge, distorted basses. That and, again, FM synthesis.
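To make the Unison idea concrete, here's a toy mono sketch in Python. The names and numbers are mine, and real unison controls also spread the voices across the stereo field and randomize their phases; this just shows the detuned-stack part.

```python
def saw(phase):
    return 2.0 * (phase % 1.0) - 1.0

def unison_saw(t, f0, voices=7, spread_hz=3.0):
    """Stack several copies of the same saw, each nudged a few hertz off
    center (voices must be >= 2). The copies slowly drift in and out of
    phase against each other, which is where the 'huge' sound comes from."""
    out = 0.0
    for v in range(voices):
        detune = spread_hz * (v / (voices - 1) - 0.5)  # spread around f0
        out += saw((f0 + detune) * t)
    return out / voices  # scale back down so it doesn't clip
```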


    After your synth has done everything it can do, the signal is routed into your DAW. You can put effects on the output to modify the sound even more, but by this point all the voices have been mixed down into a single signal, so no fancy per-note stuff like messing with the pitch. This part is kind of like adding pedals to your guitar/amp's FX loop, and by "kind of," I mean "exactly": it's literally the same thing. A few common effects:
  • Distortion: Adds grit by putting more overtones on the sound. Sharper waveform = more overtones = grittier timbre.
  • Reverb and delay: Ambiance and echo effects that can add an illusion of depth.
  • Chorus: Copies and detunes the sound slightly to create a "choir" of synths.
  • Bitcrusher: An odd type of distortion that adds overtones by throwing out digital data, making the numbers that represent the sound less accurate. After a low pass filter with an LFO, it's a crude way of making the "yuh yuh yuh" sound used in some dubstep.
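A bitcrusher is one of the simplest effects to write yourself. Here's a toy Python version of my own that quantizes each sample down to a given bit depth (real bitcrushers usually also reduce the sample rate, which this skips):

```python
def bitcrush(samples, bits):
    """Round every sample (range -1 to 1) to one of 2**bits levels.
    The rounding error is what adds the gritty new overtones."""
    half = 2 ** bits / 2
    return [round(x * half) / half for x in samples]
```

At 16 bits you won't hear a thing; drop it to 4 or 2 and the grit piles on fast.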


    Real-world examples

    Okay, enough abstract BS. If you're describing music and sound, there's only so much you can do with just words. So here are some examples of synth sounds in rock and metal songs, and how they're made.

    Europe - "The Final Countdown" (intro):
    Let's start with some cheesy '80s synth horns! Start with two sawtooth waves, detune one of them by 1-1.5Hz, and run them through a low-pass filter. To make it sound more "brass-like," give it an envelope with a quick (but not zero) attack, a bit of decay down to about 3/4 volume, and a quick release. Finish up by adding some chorus and reverb, to make it sound like a whole ensemble of trumpets instead of just one.

    Between The Buried And Me - "Selkies" (intro):
    That flanging effect comes from a slightly detuned square wave (around 1 Hz), and there's an LFO controlling the panning, or stereo position, to make it sweep back and forth between the channels.

    Van Halen - "Jump" (intro):
    Wow, really? More detuning! Saw waves, like "The Final Countdown." The main difference is that this one has no envelope and a way higher cutoff, making it sound harsher. Reverb is important for sharp sounds like this; without it, you'll get a flat, lifeless sound.

    Shade Empire - "Blood Colours The White" (intro):
    The gritty sound at the beginning comes from a mix of distortion and a high-pass filter, probably both in the effects chain. You can get it to switch on and off really fast by switching it on and off really fast (with a keyboard, or the piano roll in your DAW), or you can just hold the note and let a square-wave LFO on the volume do all the work for you.

    Circles - "The Frontline" (intro):
    This, at the beginning, is what we electronica producers call a hoover. No, seriously; we call it that because it sounds like a vacuum cleaner. It can be made by layering detuned sawtooth waves, maybe adding some chorus, and piling on the distortion for extra harshness. There's also some portamento to give it that almost turntable-scratching effect. The not-quite-a-fade-in at the beginning comes from a low pass filter with a lot of resonance and a rising cutoff.

    Children Of Bodom - "Transferrence" (solo):
    Metal keyboard solos have tended towards the same type of sound ever since Stratovarius's Jens Johansson gave Children Of Bodom's Janne Wirman a copy of his solo preset. All these sounds are built around slightly detuned (about 1.5 Hz) saw waves, with the more recent ones maybe using layering controls instead of straight-up detuning. In Janne's case, it's a bit of each: start with two saw oscillators, add a bandpass filter to get the timbre right, then layer the result about four times, and maybe add a chorus effect for good measure. And any guitarist knows that the right amount of delay can only help a lead tone, so maybe add a bit of that, too.

    Pendulum - "Under The Waves" (1:45):
    I really like that sort of "water drops" sound they have going at 1:45. You can get that with a triangle wave and a volume envelope with an instant attack, slow decay, and zero sustain/release, making each note hit, then immediately start dying away, like a plucked string. It sounds like there's a little really fast portamento, too, to give it a slightly "bubbly" sound.

    Machinae Supremacy - "Sidology Episode I: Sid Evolution" (4:43):
    These guys are famous for using the C64 SID, an old but flexible subtractive-synthesis microchip, in their music. The break towards the end uses a synth string sound that's likely a detuned saw wave, doubled in octaves, maybe with a little high pass, and a volume envelope with slow attack, decay, and release (well, "slow" still means less than a second), and high sustain. Low-pass it to soften the sharp saw edges. Oh, and the part before it (starting at 4:00)? That "bleep bloop" sound is a really fast arpeggiator, switching back and forth between the same note in two or three different octaves. That's where chiptunes come from.
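To tie the whole article together, here's a toy Python render of that cheesy-'80s-brass recipe from earlier: two detuned saws, a low pass, and a quick-attack envelope. Everything here (the names, the cutoff, the envelope times) is my own guesswork for illustration, not anyone's actual patch.

```python
import math

SR = 44100  # sample rate, in samples per second

def saw(phase):
    return 2.0 * (phase % 1.0) - 1.0

def render_brass(freq, seconds):
    """Two saws detuned 1.5 Hz apart, run through a one-pole low pass and
    a quick-attack envelope that settles at about 3/4 volume."""
    a = math.exp(-2 * math.pi * 2000.0 / SR)  # low-pass coefficient (~2 kHz)
    out, prev = [], 0.0
    for i in range(int(SR * seconds)):
        t = i / SR
        raw = 0.5 * (saw(freq * t) + saw((freq + 1.5) * t))  # detuned pair
        if t < 0.03:                       # 30 ms attack
            env = t / 0.03
        else:                              # decay toward 3/4 volume, then hold
            env = max(0.75, 1.0 - (t - 0.03) * 2.0)
        prev = (1 - a) * raw + a * prev    # the low-pass filter
        out.append(prev * env)
    return out

samples = render_brass(440.0, 0.5)  # half a second of A4 "brass"
```

Write those samples out as a WAV (your DAW, or Python's wave module, can do it) and you'll hear a hilariously budget version of the real thing.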


    You can program basic synth sounds now. Take what you've learned and experiment with it. If you don't know what a knob does on your synth, turn it and find out; the part of sound design most people miss is trial and error. If you like how something sounds, save it as a preset, so you can come back to it later when you have a song it would fit. That's how electronic music works.