Keyboard programming is definitely on the top ten list of the world’s most misunderstood trades. Most theatergoers have no clue it even exists, yet it’s one of the most important aspects of modern musical theatre productions.

Over the years, keyboard programming has evolved from primitive hardware synthesizers to complex software counterparts with virtually unlimited routing capabilities. Recently, Apple MainStage has become the most popular software solution for keyboard programming.

In this series, we’ll be taking an in-depth look at all the possibilities that MainStage offers, and how to integrate these concepts into your own keyboard programming.

MainStage vs. Logic

Before MainStage was available as an affordable standalone app, it could only be obtained by purchasing Logic Studio. That package was quite expensive and simply out of the price range of many hobbyist musicians. A few years ago, Apple released MainStage as a standalone app. Since then, it’s become the leading piece of software for musical theatre keyboard programming.

It’s important to understand the difference between MainStage and Logic. While they do have some overlapping features, Logic is a DAW (digital audio workstation), and its intended use is music production in a controlled studio setting. With that said, Logic has seen occasional use in live productions, most notably in Matilda the Musical.

MainStage, on the other hand, is intended for live performance, and it happens to have an almost perfect feature set for musical theatre. Multiple sounds can be combined to create patches. Multiple patches can be ordered and grouped into sets. A footswitch can be assigned to change patches, and a pedal can be mapped to control volume. These are all the basic programming necessities for musical theatre.
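If it helps to picture that hierarchy, here’s a rough conceptual sketch in Python. To be clear, this is not MainStage’s actual data model or file format, just an illustration of how sounds, patches, sets, and controller mappings relate; all the names are made up.

```python
from dataclasses import dataclass, field

# Purely illustrative model of a show: sounds combine into patches,
# patches are ordered into sets, and controllers map to actions.
@dataclass
class Patch:
    name: str
    sounds: list[str]                      # e.g. ["Grand Piano", "String Pad"]

@dataclass
class Set:
    name: str                              # e.g. "Act 1, Scene 3"
    patches: list[Patch] = field(default_factory=list)

@dataclass
class Concert:
    sets: list[Set] = field(default_factory=list)
    mappings: dict[str, str] = field(default_factory=dict)

show = Concert(
    sets=[Set("Overture", [Patch("Opening Pad", ["String Pad", "Choir Aahs"])])],
    mappings={"footswitch": "next patch", "expression pedal": "patch volume"},
)
print(show.sets[0].patches[0].name)        # Opening Pad
```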

Installing MainStage

Apple MainStage can be purchased on the Mac App Store for $29. The app will automatically download into your Applications folder. When MainStage launches for the first time, it will start downloading some required content. I recommend not doing anything else on the computer while this downloads. When the download finishes, MainStage will prompt you for your password to complete the installation of the required content.

The next step is to download the additional content. Apple provides a great selection of sounds to all MainStage and Logic Pro users for free! To download the additional content, click on MainStage 3 in the menu bar and select Download Additional Content. I recommend downloading everything except the Multi-Output Drum Kits and Producer Patches under Drum Kit. If you don’t have enough space for all the content, feel free to pick and choose what you want. You can always download more in the future if necessary.

Audio vs. MIDI

Before we dive into learning how to program a show, let’s briefly discuss the differences between audio and MIDI. Audio is tangible in the sense that we can hear it, and it can be classified into two major categories: analog and digital. MIDI, on the other hand, is instructional data that lets different pieces of hardware and software talk to each other. In other words, MIDI is not something that we can hear.

Analog Audio

Analog audio is a near-perfect representation of an original sound source. An example of analog audio would be capturing sound straight to tape, without processing the data through a computer.

In this example, changes in air pressure generated by a sound source hit the diaphragm of a microphone. The microphone acts as a transducer and converts the instantaneous air pressure changes into a proportional voltage or current. The changes in voltage are then recorded onto tape by a tape machine. Since this process is performed completely in the analog domain, the resulting waveform is perfectly continuous. This means that between any two points along the waveform, there are an infinite number of data points that could be extracted.

Digital Audio

A digital waveform, on the other hand, is discrete. This means there is a finite number of data points, and the result is only a representation of the original analog waveform. Take the same scenario as above, but replace the tape machine with an audio interface hooked up to a computer.

Once the signal reaches the audio interface, it must be converted into digital data (0’s and 1’s) by an ADC (analog to digital converter). This chip lives inside the audio interface, and its sole purpose is to translate analog data into something the computer can understand and process. It does so by taking “samples” of the waveform at a constant rate called the “sample rate.”

Sample rate is measured in hertz, and it defines the number of times per second a sample is taken to reconstruct the original waveform. We live in a digital world, and it’s possible you’ve seen values like 44.1 kHz or 48 kHz. Once you account for the Hz-to-kHz conversion, it’s obvious what these numbers mean: 44.1 kHz is equal to 44,100 Hz. This simply means that in a digital environment operating at 44.1 kHz, 44,100 samples are taken every second to reconstruct a waveform.
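To make that arithmetic concrete, here’s a short Python sketch (using NumPy; the 440 Hz tone and two-second duration are just arbitrary example values) that builds a sine wave the way a digital system stores it: as a finite list of samples taken 44,100 times per second.

```python
import numpy as np

SAMPLE_RATE = 44_100      # samples per second, i.e. 44.1 kHz
DURATION = 2.0            # seconds
FREQUENCY = 440.0         # Hz (the A above middle C)

# One timestamp per sample: 44,100 of them for every second of audio.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
sine = np.sin(2 * np.pi * FREQUENCY * t)

print(len(sine))          # 88200 samples for 2 seconds at 44.1 kHz
```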

So now it’s clear why digital doesn’t have infinite resolution. In the world of high-resolution audio, 192 kHz is considered the standard for extreme audiophiles. 192,000 is nowhere close to infinite, but this doesn’t really matter. 44.1 kHz, which is the sample rate of CDs, is generally considered sufficient for human hearing: a system sampling at 44.1 kHz can capture frequencies up to 22.05 kHz (half the sample rate), which is already above the roughly 20 kHz upper limit of human hearing. Many people cannot notice improvements at higher sample rates, and 44.1 kHz serves as a good compromise between computer resource usage and sound quality in a live performance environment.

So now you have a good understanding of digital audio, and you’re probably wondering how all of this information on audio recording relates to keyboard programming. Well, it does and it doesn’t. More often than not, keyboard programming will not make use of an ADC chip because the sound source is a software instrument inside the computer. Unless you need to process a sound source external to the computer, the audio interface only serves as an output device for you to send an analog signal somewhere else.

There is also a chip in the audio interface that does the exact opposite of an ADC: the DAC (digital to analog converter). In our situation, the DAC simply takes the 0’s and 1’s generated by software and converts them into a voltage that eventually finds its way to speakers or headphones after proper amplification.
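Continuing the sketch from above, a few more lines illustrate that last hop. This example assumes the third-party sounddevice package, which is just one convenient way to hand samples to an audio interface from Python; it has nothing to do with MainStage itself.

```python
import numpy as np
import sounddevice as sd                       # assumed third-party package

SAMPLE_RATE = 44_100
t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE   # two seconds of timestamps
tone = 0.2 * np.sin(2 * np.pi * 440.0 * t)     # quiet 440 Hz sine, stored as samples

# The interface's DAC turns these samples into an analog voltage on its
# outputs, ready for amplification and speakers or headphones.
sd.play(tone, samplerate=SAMPLE_RATE)
sd.wait()                                      # block until playback finishes
```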

Simply put, analog has virtually infinite resolution, while digital resolution is bounded by practical limits like what’s audibly useful and what computer resources allow.

MIDI

MIDI stands for “musical instrument digital interface,” and it is a whole separate concept. It’s a protocol, or language, that allows different pieces of equipment to talk to each other. Remember when we said analog has infinite resolution and digital audio’s resolution can be determined by the user? MIDI also has resolution, and it consists of 128 discrete values from 0 to 127. There are no fractions in MIDI, just 128 whole numbers.
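To see where that 0 to 127 range comes from, here’s a tiny Python illustration of a raw note-on message: a status byte followed by two data bytes, where each data byte only has 7 usable bits. The specific note and velocity are arbitrary example values.

```python
# A note-on message is three bytes: status, note number, velocity.
NOTE_ON_CH1 = 0x90        # status byte: note-on, MIDI channel 1
MIDDLE_C = 60             # note number, must be 0-127
VELOCITY = 100            # how hard the key was struck, must be 0-127

message = bytes([NOTE_ON_CH1, MIDDLE_C, VELOCITY])
print(list(message))      # [144, 60, 100] -- instructions, not sound

# Data bytes are 7-bit, so anything outside 0-127 simply can't be expressed.
assert 0 <= MIDDLE_C <= 127 and 0 <= VELOCITY <= 127
```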

In the world of keyboard programming, MIDI is the language that the keyboard uses to communicate with a computer. What MIDI isn’t is audible data. You can’t hear MIDI, but you can certainly hear a sound generated by an instrument after it receives instructions via MIDI. The important concept to take away here is that you should never say “my MIDI sounds wrong,” because MIDI doesn’t sound like anything.

Monitoring MIDI data in MainStage is really easy. Just look at the little box centered at the top of the screen when playing a note or moving a pedal.
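If you ever want to watch the same traffic outside of MainStage, a few lines of Python will print incoming messages to the terminal. This sketch assumes the third-party mido package with its python-rtmidi backend and at least one connected MIDI input; again, none of this is required for MainStage.

```python
import mido   # assumed third-party package (with the python-rtmidi backend)

# Print every incoming message from the first available MIDI input,
# similar in spirit to watching MainStage's activity monitor.
port_name = mido.get_input_names()[0]
with mido.open_input(port_name) as port:
    for msg in port:
        if msg.type == "note_on":
            print(f"note {msg.note}, velocity {msg.velocity}")
        elif msg.type == "control_change":
            print(f"controller {msg.control} -> {msg.value}")
        else:
            print(msg)
```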

This post is part of the Apple MainStage Keyboard Programming series. The next post in this series is Choosing Hardware for Apple MainStage.