The 20th-century developments in electronics and computer technology have changed many things about the world, and while it often seems like these technologies are most directly tied to the sciences, the truth is that they've also changed a great deal about the art world. Technology has changed the way many people create and view art, and these technologies have the power to touch every field of art, including painting, writing, sculpture, music, theatre, and film. The most significant changes, however, have been in the fields of visual art and music. A great deal of visual art is now created on a computer screen instead of by hand, but that is a topic for another place. This page is about making music through electronics.
Electronically-generated music is so significant that it has not only spawned its own field of music, but several sub-genres within that field. The field of electronic music comprises such subtypes as house, trance, jungle, industrial, and many other even more obscure niches. This is not surprising, considering that an electronic music synthesizer (or "synth", as it is often called for short) is capable of creating sounds that are quite unlike any other instrument known to mankind. With a synth, you can make sounds that are simply not possible to make with a piano, violin, or even a bassoon. Indeed, it was for this very reason that in the late 1970s and early 1980s, many popular bands shifted from a guitar-driven sound toward predominantly synth-based music, and entire groups (Depeche Mode comes to mind as an example) built their sound around synthesizers. This is not to say that the sound of a synth is better or worse than the sounds of traditional instruments (after all, "better" or "worse" are subjective terms when applied to music), but rather that electronic music deserves recognition as its own separate genre.
Today, many people believe that we're entering a new era of artistry in which people can create their own art with relatively inexpensive home equipment instead of the multi-thousand-dollar studio setups that were required not too long ago. A person wishing to record professional-sounding music once needed a great deal of recording equipment which was well out of the range of most general buyers. Today, however, home computers are common and inexpensive enough that most people can afford one. With a computer, you can create world-class music using a very inexpensive setup. Of course, if you are going to record instrumental music, you will need to buy instruments, but if you are going to create purely electronic music, the only instrument you need is the computer itself. The only additional requirement is software.
For those who prefer a somewhat fancier arrangement, you can buy electronic music synthesizers for quite reasonable prices (about what an acoustic guitar would cost). There are so many ways to make music these days that it can sometimes be a bit overwhelming trying to decide on the best technique. This page intends to introduce the reader to the concepts behind the electronic generation of music, so that the reader will hopefully have a better idea of the techniques and options available.
One of the fascinating things about electronic music is how many fields of human knowledge and endeavor it draws from. Obviously, it entails a knowledge of music, but beyond that, creating and using electronic synthesizers and related sound equipment involves considerable aspects of electrical engineering, physics, and math. Using synthesizers to create music is a field where all of these seemingly disparate branches of knowledge come together to produce what is quite possibly the most elegant fusion of art and science currently known to humanity.
But making electronic music is not easy. Using a synthesizer is decidedly different from using a guitar, violin, or piano. Part of this stems from the fact that synthesizers, like most types of electronic equipment, vary widely in layout and features. If you know how to play a guitar, you can pick up pretty much any guitar in the world and start playing it quickly, but synthesizers tend to differ considerably from one another, and the operator should become familiar with a particular synthesizer before starting to make music with it. But beyond that, synthesizers are also not as intuitive as traditional instruments. A person can pick up a guitar and, with no musical background or training whatsoever, start to produce some reasonably good music just by trying different sounds for a while. Synthesizers are more difficult to get the hang of.
A bit of background in audio and music theory is certainly helpful when learning how to use a synth. To that end, this page also intends to be a brief introduction to several fields of theory which, hopefully, will be helpful to the budding electronic musician.
A sound is actually nothing more than a movement. When you tap your finger on a table, the vibration causes the table to shake a bit, and it also sends vibrations out through the air around the table. These vibrations shake your eardrums when they reach your ears, and the result is what we perceive as sound.
It becomes apparent, then, that sound cannot travel through a vacuum. Because sound is nothing but a series of movement waves propagating through some material, a sound needs a carrier, even if that carrier is simply air. This is why the seminal sci-fi movie Alien used the tagline "In space no one can hear you scream."
An electric speaker, not surprisingly, creates sound by vibrating. It does this by using an electromagnet; within the speaker is a coil of wire which functions just like any other coil electromagnet, except this coil is called a voice coil because it's used to produce sound. When you send electrical current through an electromagnet, the electromagnet creates a magnetic force. Since the voice coil in a speaker is suspended within the field of a permanent magnet, the magnetic forces exerted by the voice coil cause it to move. The voice coil is attached to the speaker's cone, so if the electric current varies rapidly, the coil and cone vibrate rapidly, producing sound waves.
It should become apparent by this description, then, that the only way to make sound with a speaker is to use varying electrical voltage. In other words, you cannot simply apply a constant voltage (i.e. DC, direct current) to the speaker and expect to hear anything. You must apply some sort of varying electrical level.
The volume of a sound is perceived by our ears as how loud the sound is. In terms of physics, the volume of a sound is how much energy the sound wave carries. Since a sound wave is just movement of matter, some amount of energy is required to create the sound wave. As the sound wave moves, it carries energy along with it. Bear in mind that the louder a sound is, the more energy the sound wave contains, and therefore, the more energy is required to actually generate the sound wave. (This is an example of the conservation of energy law, familiar to physicists.)
The only real difference between different musical notes is their frequency. A musical note is defined by how many oscillations its sound wave produces per second. Electrical engineers will quickly recognize the concept of "cycles per second" in a wave as being measured in hertz. One hertz equals one cycle per second.
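As a concrete illustration, here is a minimal sketch in Python (using numpy and the standard-library wave module) that generates a pure tone at a chosen frequency and writes it to a WAV file; the 440-hertz pitch, two-second length, and file name are just illustrative choices:

```python
import wave
import numpy as np

SAMPLE_RATE = 44100   # samples per second
FREQUENCY = 440.0     # cycles per second (hertz): the A above middle C
DURATION = 2.0        # seconds

# One sample per point in time; the sine wave completes FREQUENCY full cycles
# every second, which is exactly what "frequency in hertz" means.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
samples = 0.5 * np.sin(2 * np.pi * FREQUENCY * t)

# Convert to 16-bit integers and write a mono WAV file you can play back.
pcm = (samples * 32767).astype(np.int16)
with wave.open("tone.wav", "wb") as wav_file:
    wav_file.setnchannels(1)
    wav_file.setsampwidth(2)   # 2 bytes = 16 bits per sample
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(pcm.tobytes())
```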
Another frequency-related musical concept is octaves. An octave is a frequency gap that exists between two notes with the same letter. For example, consider a low B note and a high B note; these two sounds are different, yet they seem to be the same note. The high B just sounds like a higher version of the low B. What's interesting is that to raise a note one octave, you double the frequency of the note. For example, imagine that there is a particular note (let's call it Note 1) with a frequency of 100 hertz. To raise this note one octave, you'd have to increase the frequency to 200 hertz; suppose we call this 200-hertz tone Note 2. How would you create the next note if you went up another octave? You'd double Note 2's frequency, so Note 3 would have a frequency of 400 hertz. Notice that to go up one octave, you simply multiply a note's frequency by 2, but to go up two octaves, you actually multiply the note's frequency by 4. Octaves, therefore, are logarithmic. (Logarithms and logarithmic functions will be readily familiar to mathematicians.)
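Since this octave math comes up constantly when working with synths, here is a tiny Python sketch of the doubling rule just described (the 100-hertz example note is the same one used in the text above):

```python
import math

def raise_octaves(frequency_hz, octaves):
    """Going up one octave doubles the frequency, so n octaves multiply it by 2**n."""
    return frequency_hz * 2 ** octaves

def octaves_between(low_hz, high_hz):
    """The octave distance between two frequencies is the base-2 log of their ratio."""
    return math.log2(high_hz / low_hz)

print(raise_octaves(100, 1))      # 200.0 -- "Note 2" from the example above
print(raise_octaves(100, 2))      # 400.0 -- "Note 3"
print(octaves_between(100, 400))  # 2.0   -- two octaves apart
```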
So, what ARE the frequencies for notes? That's a matter of some debate. Many scientists who are not particularly musical take "middle C" (which is often thought to be the most fundamental note in music) to be 256 hertz. This is a convenient value, because it's a power of 2 (it should be readily recognizable to computer types who are used to working with powers of 2). You can therefore have all the C notes be convenient powers of 2: 512, 1024, etc. However, more musically-oriented people usually insist on calibrating notes in terms of the A above middle C, which they define as 440 hertz. If you go by this standard, then middle C is not 256 hertz, but approximately 262 hertz (actually between 261 and 262 hertz). Continuing this standard, the values for a full scale of notes beginning with middle C (with each value rounded off to the nearest whole number in hertz) are:
C: 262 D: 294 E: 330 F: 349 G: 392 A: 440 B: 494 C: 523
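These rounded values correspond to twelve-tone equal temperament, the tuning modern keyboards and synths generally use, in which each of the 12 semitones in an octave differs from the next by a factor of 2^(1/12). A short Python sketch that derives the table above from the 440-hertz A reference:

```python
A4 = 440.0  # the reference pitch: the A above middle C

# Semitone offsets from that A for a scale starting at middle C.
scale = {"C": -9, "D": -7, "E": -5, "F": -4, "G": -2,
         "A": 0, "B": 2, "C (next octave)": 3}

for name, semitones in scale.items():
    # Each semitone is a factor of 2**(1/12), so 12 semitones exactly double the frequency.
    frequency = A4 * 2 ** (semitones / 12)
    print(f"{name}: {frequency:.0f} Hz")
```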
The amplitude of a wave is how far the wave deviates from its neutral point. In a sound wave, the greater the amplitude, the louder the sound.
When you view a wave on an oscilloscope, volume is analogous to the height of the wave: the higher the wave, the more electrical energy is being exerted, and the more force the wave has. In electrical engineering, this height is what is called the wave's amplitude.
The pitch of a sound is relatively easy to understand; it's simply controlled by the frequency of the sound, which is how many cycles the sound wave makes per second. Similarly, the volume of a sound is also pretty easy to understand. But there's a fundamental facet to sound that's much more difficult to analyze and understand intuitively: Quite simply, what the sound actually sounds like. This can be called various names, including the "tone" of the sound (although "tone" is sometimes used as a synonym for "pitch"), or "quality" of the sound, but the most definite technical term is the timbre of the sound.
The timbre of a sound is what makes the difference between different musical instruments. You could play a middle C on a piano, then play it on a guitar, and it would be the exact same note; yet it would sound noticeably different. What makes the difference? Quite simply, the shape of the sound wave makes all the difference. If you could see the sound waves formed by middle C on a piano and compare them with the sound waves made by middle C on a guitar, you'd see that although the two waves have the same basic frequency, the shape of the waves is noticeably different. The shape of the sound wave is what constitutes the timbre of the sound. It's not difficult to understand that the shape of a sound wave creates different sounds, but it is not intuitive to look at a particular sound wave and guess, just by looking at a tracing of the wave, what the sound wave will sound like.
If you really want to analyze sound waveforms, you'll need to understand the concepts behind Fourier analysis. Although serious Fourier analysis is fairly heavy math (it's beyond basic calculus), the basic idea goes something like this: Any waveform which is not a pure sine wave can be produced by superimposing several sine waves on top of each other. If you produce a sine wave using a basic electronic oscillator, it's just one fundamental wave at one frequency. If you produce a note of the same frequency with a guitar string, the resulting sound is not a simple sine wave, but actually a more complex waveform. Nonetheless, this more complex wave shape can be reproduced by simply mixing together several sine waves of different frequencies.
If you think about it for a moment, you'll realize how important this idea is to producing real-sounding instruments with a synth. What this basically means is that if you program a synth to mix together the right combination of sine waves, a few simple electronic oscillators can produce sound waves that closely approximate real-world musical instruments. This concept is key to making music with a synth, because it means that you can make music that sounds like it was played with real instruments, even if it was entirely generated with a synth.
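To make the idea concrete, here is a rough Python sketch of this additive approach: it mixes sine waves at whole-number multiples of a fundamental frequency. The particular harmonic amplitudes chosen here are arbitrary, not those of any real instrument, but changing them changes the timbre while leaving the pitch alone:

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of sample times

def additive_tone(fundamental_hz, harmonic_amplitudes):
    """Mix sine waves at whole-number multiples of the fundamental frequency.

    harmonic_amplitudes[0] scales the fundamental itself, [1] scales the
    second harmonic (twice the fundamental), and so on.
    """
    tone = np.zeros_like(t)
    for n, amplitude in enumerate(harmonic_amplitudes, start=1):
        tone += amplitude * np.sin(2 * np.pi * fundamental_hz * n * t)
    return tone / np.max(np.abs(tone))     # normalize to the range -1..1

# A pure sine wave: just the fundamental, nothing else.
pure = additive_tone(262, [1.0])

# The same pitch with several harmonics mixed in: the wave's shape (and therefore
# its timbre) changes, but the fundamental frequency -- the pitch -- stays the same.
richer = additive_tone(262, [1.0, 0.5, 0.33, 0.25, 0.2])
```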
In electronics, an oscillator is simply a circuit that generates a repeating electrical wave, like a sine wave, square wave, or sawtooth wave. If you connect the resultant voltage to a speaker, you will hear a tone come out of the speaker.
On a music synth, an oscillator is pretty much the same thing. Typically, the oscillator is where the sound starts: the part that actually generates a sound wave which will later be shaped by additional stages of the synth circuitry. An oscillator on a synth usually allows you to vary the pitch, volume, and shape of the wave to make the tone sound different. The oscillator is sometimes called a VCO (Voltage-Controlled Oscillator).
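As a simple illustration of these basic waveform shapes, here is a short Python sketch (using numpy) that generates sine, square, and sawtooth waves at a chosen pitch; in a real synth these would of course come from hardware or much more carefully-written signal-processing code:

```python
import numpy as np

SAMPLE_RATE = 44100

def oscillator(shape, frequency_hz, duration_s=1.0):
    """Generate one of the basic repeating waveforms an oscillator can produce."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    phase = frequency_hz * t                         # how many cycles have elapsed
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "square":
        return np.sign(np.sin(2 * np.pi * phase))    # jumps between -1 and +1
    if shape == "sawtooth":
        return 2 * (phase - np.floor(phase + 0.5))   # ramps from -1 up to +1, then snaps back
    raise ValueError(f"unknown shape: {shape}")

sine = oscillator("sine", 262)
square = oscillator("square", 262)
saw = oscillator("sawtooth", 262)
```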
A filter is a processing element that does something with a sound, usually eliminating part of a sound. It might eliminate sounds of a certain pitch or volume while allowing other sounds to pass through, for example. In electrical engineering, a filter is more generally a circuit which blocks certain frequencies, while allowing other frequencies to pass; for example, a high-pass filter allows higher frequencies to pass through, while blocking lower frequencies. Similarly, low-pass filters block high frequencies but allow lower frequencies to pass through. Synths often have conceptually identical high-pass and low-pass filters, except that on a synth, the purpose is to filter sounds, not electrical signals. The filter on a synth is sometimes called a VCF (Voltage-Controlled Filter).
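Here is a rough Python sketch of the idea behind a low-pass filter, using the simplest possible design (a one-pole, RC-style filter); real synth filters are usually steeper and often resonant, so treat this purely as an illustration of "let low frequencies through while attenuating high ones":

```python
import numpy as np

def low_pass(signal, sample_rate, cutoff_hz):
    """One-pole low-pass filter: smooths out fast (high-frequency) changes.

    Frequencies well below cutoff_hz pass through mostly unchanged, while
    frequencies well above it are strongly attenuated.
    """
    rc = 1.0 / (2 * np.pi * cutoff_hz)   # time constant implied by the cutoff
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)               # smoothing coefficient between 0 and 1

    out = np.zeros_like(signal)
    out[0] = alpha * signal[0]
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

def high_pass(signal, sample_rate, cutoff_hz):
    """The complementary high-pass: the original signal minus its low-passed version."""
    return signal - low_pass(signal, sample_rate, cutoff_hz)
```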
An electronically-generated musical note does not have to have a constant level. It can waver in its pitch or volume, just as real musical notes often do. Within the field of electronic music, there is a special technique often used to generate synthesized notes in which the period of the note is divided into four stages: Attack, Decay, Sustain, and Release. This sequence is sometimes abbreviated ADSR. This sequence describes four different stages of a note's volume.
The first part of a note, the attack portion, is when the note is first played. During the attack period, the note usually quickly increases in volume. The note can turn on instantly (in which case there is no attack period), but you can also make the note "ramp up" to full volume over a short period, creating a somewhat softer and less jarring sound. The period when the note crescendos to full volume is the attack period.
The second part of a note, the decay period, is when the peak of the note starts to fall off. Usually, after the attack period is over, the note has reached its loudest point, after which it decreases in volume slightly. This period of slight decrease is the decay portion of the note's life.
The third portion of an electronic note is the sustain period, and this is usually the longest period of the note's life. This is where the main part of the note exists, after it has been first established by the attack and decay periods. Typically, notes will become slightly softer as the sustain period wears on, but this decrease in volume is usually slight.
The final period of the note's life is the release period, which begins when the note is released (for example, when the key is let go). The note can stop instantly (in which case there is no release period), or it can gradually fade away.
Collectively, the existence (or non-existence, as the case may be) and length of these four components of an electronic note comprise what is technically called the note's envelope.
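A small Python sketch of an ADSR envelope applied to a plain sine wave might look like the following (the stage lengths and the 0.7 sustain level are arbitrary example values, and the sustain stage is held perfectly constant here for simplicity):

```python
import numpy as np

SAMPLE_RATE = 44100

def adsr_envelope(attack_s, decay_s, sustain_level, sustain_s, release_s):
    """Build an ADSR volume envelope as an array of gain values between 0 and 1.

    attack_s:      time to ramp from silence up to full volume
    decay_s:       time to fall from full volume down to the sustain level
    sustain_level: the volume held while the note is held down
    sustain_s:     how long the note is held
    release_s:     time to fade from the sustain level back to silence
    """
    def ramp(start, end, seconds):
        return np.linspace(start, end, int(SAMPLE_RATE * seconds), endpoint=False)

    return np.concatenate([
        ramp(0.0, 1.0, attack_s),                        # attack
        ramp(1.0, sustain_level, decay_s),               # decay
        ramp(sustain_level, sustain_level, sustain_s),   # sustain
        ramp(sustain_level, 0.0, release_s),             # release
    ])

# Apply the envelope to a plain 262 Hz sine wave.
envelope = adsr_envelope(attack_s=0.05, decay_s=0.1, sustain_level=0.7,
                         sustain_s=1.0, release_s=0.3)
t = np.arange(len(envelope)) / SAMPLE_RATE
note = envelope * np.sin(2 * np.pi * 262 * t)
```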
Of course, when all the sound-wave generation is complete, you need to boost the electrical power somewhat to get any decent sound level coming out of a speaker. This is what amplifiers are for. Synths often have small amps built into them called VCAs (Voltage-Controlled Amplifiers).
A "patch" is a synthesizer concept that's often compared to an individual musical instrument. Basically, a patch is one particular style of sound; the difference between two patches can make the difference between sounding like a violin and sounding like a guitar. Several things affect what a patch sounds like, including the patch's oscillator settings, filter, amplifier level, and envelope.
Most synths have piano-style keyboards which allow you to play specific notes. Each key corresponds to a particular oscillator frequency, and the notes you play can then be processed through various patch settings. Synths with a piano-style keyboard are often rated by how many keys the keyboard has.
A synth can only produce so many different notes or sounds at once. The number of voices a given model can produce is often called its polyphony. (Technically, "polyphony" means the superimposition of one melody over another, but within the synth world, the word is often used as a spec to note how many simultaneous voices a synth can support.)
Many synths have options for auto-generated special effects which can affect the music in any number of ways. For example, one popular synth effect is echo, which makes a note resound several times after it has stopped playing; another is reverb, which is similar to echo but causes a note to linger in the background instead of repeating several distinct times.
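As an illustration of how simple some of these effects are at their core, here is a crude Python sketch of an echo: it just mixes in progressively quieter, progressively later copies of the sound. (Real reverb is considerably more involved, simulating many dense, overlapping reflections rather than a few discrete repeats.)

```python
import numpy as np

def echo(signal, sample_rate, delay_s=0.3, feedback=0.5, repeats=4):
    """Mix in progressively quieter, progressively later copies of the signal."""
    delay_samples = int(sample_rate * delay_s)
    out = np.zeros(len(signal) + delay_samples * repeats)
    out[:len(signal)] += signal
    for n in range(1, repeats + 1):
        start = delay_samples * n
        out[start:start + len(signal)] += signal * (feedback ** n)
    return out
```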
An equalizer is a tool which allows you to selectively amplify (make louder) or attenuate (make quieter) certain frequency ranges of sound. A typical equalizer has around 10 sliders on it. Each slider corresponds to a particular frequency band, for example, one slider might be for really low frequencies around 100 hertz, another might be for 500 hertz, another for a thousand hertz (1 kilohertz), etc. Moving the sliders up and down will allow you to emphasize (or deemphasize) certain parts of the audio more than others.
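Here is a very rough Python sketch of the per-band-gain idea behind a graphic equalizer. Hardware equalizers are built from banks of filters; this sketch instead scales frequency components directly using a Fourier transform, which is cruder but shows the concept. The ten band centers and gains below are arbitrary example values:

```python
import numpy as np

def graphic_eq(signal, sample_rate, band_centers_hz, band_gains_db):
    """Scale each frequency component by a gain interpolated between the "slider" bands."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Interpolate the per-slider gains (in decibels) across all frequencies,
    # then convert decibels to plain amplitude ratios.
    gains_db = np.interp(freqs, band_centers_hz, band_gains_db)
    spectrum *= 10 ** (gains_db / 20)
    return np.fft.irfft(spectrum, n=len(signal))

# Ten "sliders", roughly an octave apart: boost the low end, cut around 500-1000 Hz.
centers  = [31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
gains_db = [ 6,  4,   2,   0,  -2,   -4,    0,    0,    0,     0]
```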
In addition to the oscillator(s) used to generate the actual sound in a synth, most synths today also feature at least one low-frequency oscillator (universally abbreviated as LFO), whose purpose is to modulate the sound in some way. This is one of those concepts that might be hard to explain with words but which you'll understand instantly once you hear it demonstrated; imagine a note played on an electronic synthesizer that sounds "wavy", meaning that instead of being an absolutely constant waveform, the sound varies slightly over time, perhaps becoming louder, then softer, or more muffled, then clearer. This is a form of oscillation, but unlike the oscillations which produce sound (which, as we've seen, usually oscillate hundreds or thousands of times every second), the oscillation of this sound modulation may take several seconds to complete one cycle. This is a relatively low frequency, produced (appropriately enough) by the low-frequency oscillator. Your ear can perceive the oscillations an LFO produces, whereas you don't notice the oscillation of a note; a tone's wave oscillates too quickly for you to actually realize that it is in fact a wave. An LFO will of course allow you to change its rate of oscillation, but LFOs are typically not set above about 10 hertz, since 20 hertz is the lower limit of human hearing, and setting the LFO too high has the effect of adding another audible tone to the sound. Generally, LFOs are used at considerably less than 1 hertz, meaning it takes more than a second to complete a cycle. For example, at 0.2 hertz, the LFO completes one oscillation cycle in 5 seconds. Besides a frequency control, an LFO also lets you specify what it affects: the pitch of the sound, the cutoff of the low-pass or high-pass filter, the volume, or some other parameter.
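Here is a short Python sketch of an LFO routed to volume, producing the "wavy" tremolo-like effect described above; routing it to pitch or to a filter cutoff works on exactly the same principle. The 262-hertz tone and the 0.2-hertz LFO rate are just example values:

```python
import numpy as np

SAMPLE_RATE = 44100
DURATION = 5.0

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# The audible tone: a 262 Hz sine wave (it oscillates far too fast to hear as a "wave").
tone = np.sin(2 * np.pi * 262 * t)

# The LFO: a 0.2 Hz sine wave, completing one full cycle every 5 seconds.
lfo = np.sin(2 * np.pi * 0.2 * t)

# Route the LFO to volume, so the tone slowly swells and fades.
# Rescale the LFO from -1..1 to 0.25..1.0 so the sound never disappears entirely.
depth = 0.25 + 0.75 * (lfo + 1) / 2
wavy = depth * tone
```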
A synth needs to plug into something, whether that's an amplifier, a mixer, a computer audio interface, or a pair of headphones. Make sure that whatever synth you get has the right connection types (audio outputs, MIDI, and so on) for whatever you're going to connect it to.
No matter how much experience you have, art is always a process of exploration. When you're creating art, you're discovering new ways to arrange images, sounds, and ideas; when you perform art, you're discovering new ways to present those ideas; and when you observe or listen to art, you're discovering new ways to interpret and think about those ideas and how they relate to the world. For the electronic musician, there are countless software tools that you can use to try new and different things. For the beginner, listed below are some of the most important software programs pertaining to electronic music production, but there are many others that you'll doubtless discover and find useful.
ZynAddSubFX: A program which actually models a real synth, with a whole bunch of knobs and sliders for you to fiddle with and make different instrument sounds with. Possibly too complicated for beginners, but still a lot of fun to play with, and you can get some really amazing sounds out of it. Available for Windows and Linux. Homepage at zynaddsubfx.sourceforge.net.
amSynth claims to be a synthesizer in the style of classic real-world synths like the old Moogs that pioneered the industry. It seems like a fairly basic program, but it could be worth a look. Currently a Linux-only program. Homepage at amsynthe.sourceforge.net.
Another simulated synth, this one gives you all the knobs on one screen (instead of having several separate windows like ZynAddSubFX), yet it still packs a lot into that one window. I have yet to figure out how to get it to play any notes, though. Available from the homepage at oxesoft.com.
Audacity: A great, great open-source audio editing program. Available for Windows and *nix. Homepage at audacity.sourceforge.net.
Rosegarden: An excellent MIDI sequencer, similar to the commercial Cubase program for Windows and MacOS. Open-source and available for Linux. Homepage at www.rosegardenmusic.com.
Brahms: Another music sequencer, similar to Rosegarden. Available for Linux. Homepage at brahms.sourceforge.net.
Anvil Studio: Another good free music sequencer, this one available for Windows. Homepage at anvilstudio.com.
Jazz++: A terrific open-source MIDI sequencer for Windows and Linux. Homepage at jazzware.com.
SoundTracker: Music tracker for Unix/Linux, similar to DOS's FastTracker and the legendary Amiga program ProTracker. Homepage at www.soundtracker.org.