On the origin of “Sampler” and other words


AKSampler is an AudioKit node which implements a polyphonic sampler instrument, i.e., a simple type of synthesizer whose oscillators play samples from sound files.

“Sampler” is a slightly misleading word, because it doesn’t mean what it once meant. The first samplers could record sounds from a microphone or line input, then play them back in a pitch-shifted fashion from a keyboard. Musicians and producers quickly found that this was little more than a gimmick, and that preparing sound files for playback was a highly demanding, specialized process. Today, most “samplers” (including AKSampler) are playback-only, and a substantial sample-creation industry has arisen, using the internet for distribution.

The whole of digital audio is based on the principle of sampling, which is the notion that sound–continuous variation of air pressure over time–can be represented as a series of discrete measurements (“samples”) of sound pressure. Provided these samples are captured sufficiently often, the gaps between them are so small as to be inaudible. Capturing and recording digital samples came to be called “sampling”, so a device for doing this was naturally called a “sampler”.
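As a toy illustration (plain Python, not AudioKit code), we can “sample” a mathematical sine wave the same way an ADC samples a voltage, capturing discrete measurements at regular intervals; the frequency 441 Hz is chosen only because it divides the CD rate evenly:

```python
import math

SAMPLE_RATE = 44_100   # samples per second (the CD rate)
FREQ = 441.0           # tone frequency in Hz (divides 44,100 evenly)

# One second of a sine tone, captured as discrete measurements ("samples").
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

# One cycle of a 441 Hz tone spans exactly 100 samples at 44.1 kHz,
# so the waveform repeats every 100 entries in the list.
samples_per_cycle = SAMPLE_RATE / FREQ   # 100.0
```

The gaps between successive samples here are about 23 microseconds, far too short to be audible.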

Of course, we can’t measure and digitize sound pressure directly, so instead we use a microphone, which transforms time-varying air pressure near its diaphragm into a time-varying voltage, with sufficient fidelity (accuracy) that an amplifier and loudspeaker can perform the reverse transformation, reproducing the original sound. The signal voltage is said to be an analog of the sound pressure, and we refer to all such electronic systems–where voltage varies continuously, not by steps–as “analog”.

Microphones and loudspeakers, which convert between sound pressure and voltage as its electrical analog, are two types of transducers. The standard definition of transducer, “a device that converts one type of energy to another”, is a bit unsatisfying because it doesn’t mention the all-important notion of analogy. A boiler, turbine and generator convert one form of energy (heat) to another (electricity), but these are not transducers. The “magic” of transducers lies in the ability to convert sound/vibration to electrical form and then back again.

This analog “magic” powered the entire history of electronics in music up until the 1980s. Different types of transducers used instead of microphones–primarily magnetic and piezoelectric “pickups”–gave rise to many new “electric” (now called electroacoustic) instruments such as the electric guitar and various piano-like electric keyboards. Attaching transducers to springs, plates and moving magnetic tapes gave us the first echo and reverberation effects. Replacing input transducers with various kinds of electronic signal-generator circuits led to the first analog synthesizers. (Earlier than you might think–check out the Hammond Novachord, introduced in 1939!) The poor fidelity of early vacuum-tube-based amplifiers (especially at high volume) introduced musicians to harmonic distortion, which drove hi-fi engineers and teenage music fans crazy (for opposite reasons).

OK, so back to sampling. An electronic circuit called an analog-to-digital converter (ADC) converts an analog voltage to a binary number which can be saved in a computer’s memory. A corresponding digital-to-analog converter (DAC) circuit performs the reverse conversion. Because ADCs don’t work instantaneously, a so-called sample-and-hold (S/H) circuit was developed to “freeze” the voltage at the input long enough to create a stable binary value. (S/H circuits also came to be used in synthesizers, usually to create the “bleeping computer” sound used in countless older movies.)
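The hold behavior is easy to mimic in software. Here is a minimal sketch (the function name is my own, not an AudioKit API): capture every `hold_steps`-th input value and hold it until the next capture, producing the characteristic “staircase” output of an S/H circuit:

```python
def sample_and_hold(signal, hold_steps):
    """Capture every hold_steps-th value of signal and repeat it
    until the next capture, like a hardware sample-and-hold stage."""
    out = []
    held = 0.0
    for i, x in enumerate(signal):
        if i % hold_steps == 0:
            held = x          # capture a fresh value
        out.append(held)      # otherwise keep holding the last one
    return out

# A smooth ramp becomes a staircase:
staircase = sample_and_hold([0, 1, 2, 3, 4, 5], 2)   # [0, 0, 2, 2, 4, 4]
```

Feeding a random signal into such a stage, clocked slowly, is exactly how synthesizers produced that stepped “bleeping computer” sound.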

To digitize a time-varying voltage (such as an analog audio signal) well enough that the DAC will reproduce it without distortion, it’s necessary to sample at a rate, or frequency, at least twice as high as the highest frequency present in the input signal. This is called the Nyquist rate, and for audio it’s about 40,000 times per second (40 kHz). Prior to the late 1970s, it wasn’t practical or economical to make ADCs that could work that fast, and computer memories were too small to hold any significant amount of audio digitized at such a high rate. Once these technical barriers were overcome, the first Compact Disc audio systems appeared in 1982, using a sampling rate of 44,100 Hz, which remains the most commonly used rate for digital audio, 36 years later.
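The arithmetic here is simple enough to state in a couple of lines (a sketch for illustration, not AudioKit code):

```python
def nyquist_rate(highest_freq_hz):
    """Minimum sampling rate needed to capture a signal
    band-limited to highest_freq_hz without aliasing."""
    return 2 * highest_freq_hz

# Human hearing tops out near 20 kHz, so audio needs roughly 40 kHz,
# and the CD rate of 44,100 Hz comfortably exceeds that minimum.
min_rate = nyquist_rate(20_000)   # 40,000 Hz
```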

Sampling analog signals at less than the Nyquist rate leads to a type of distortion called aliasing, in which frequencies higher than half the sampling rate (the Nyquist frequency) are misinterpreted as lower frequencies. Because these “aliased” lower frequencies are not harmonically related to the original frequencies, they can be quite jarring to the ear.
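We can verify the fold-down numerically (the rates here are made-up round numbers chosen for clarity): sampled at 1,000 Hz, a 900 Hz cosine produces exactly the same samples as a 100 Hz cosine, so once sampled, the two are indistinguishable:

```python
import math

FS = 1_000   # sampling rate in Hz
N = 50       # number of samples to compare

def sampled_cosine(freq_hz, fs, n):
    """Discrete samples of a cosine at freq_hz, taken fs times per second."""
    return [math.cos(2 * math.pi * freq_hz * k / fs) for k in range(n)]

# 900 Hz is above FS/2 = 500 Hz, so it aliases down to |900 - 1000| = 100 Hz:
high  = sampled_cosine(900, FS, N)
alias = sampled_cosine(100, FS, N)
worst_difference = max(abs(a - b) for a, b in zip(high, alias))   # ~0
```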

Aliasing is not just an ADC-related phenomenon; it can also arise in sample playback. A chunk of digital audio sampled at, say, 44.1 kHz is by definition band-limited to contain no frequencies higher than 22.05 kHz, but if we want to play it back an octave higher, we do so by sending every second sample to the DAC, effectively re-sampling it at only 22.05 kHz. Any frequencies greater than 11.025 kHz (the Nyquist limit for this reduced sampling frequency) will then be aliased to nasty-sounding lower frequencies in the DAC output. The problem becomes worse as we go up two or more octaves. How to get around it will be the subject of another blog post.

This has been a quick romp through some of the common vocabulary of the audio world. I hope you find it useful.

Authorized Fairlight CMI photo, courtesy of Peter Wielk, horizontalproductions.com.au
