Sharing Emotion Through Technology

December 3, 2021

This month’s podcast episode explores the inner workings of what some of us would say is the heart of the Olivier Music Barn: the recording studio, which sits high above the audience at the concert hall’s stern. From there, our audio engineers attend every performance, attuned to every note as they work to capture the music that is made beneath them. In the process, they transform beautiful performances into similarly beautiful high-resolution recordings, which are available to everyone, everywhere to stream and to download for free.

Music Examples:
Claude Debussy L’isle joyeuse performed by Pedja Muzijevic
W.A. Mozart, arr. Carl Czerny, Piano Concerto No. 20 in D Minor, Allegro assai performed by Anne-Marie McDermott, Emma Resmini, Xavier Foley, Aaron Boyd, and the Calidore String Quartet
Frédéric Chopin Nocturne No. 8 performed by Ingrid Fliter
Heatly Brothers Happy Upbeat Game Music
Sofia Gubaidulina Rejoice! I. Your Joy No Man Can Taketh from You performed by Vadim Gluzman and Johannes Moser
Béla Bartók 44 Duos for Two Violins, Sz. 98, No. 22: Mosquito Dance performed by Jennifer Frautschi and Ben Beilman
Franz Liszt Transcendental Étude No. 5 “Feux Follets” performed by Jenny Chen

Sound Effects:
“Vinyl Record, On-Off, A.wav” by InspectorJ (www.jshaw.co.uk) of Freesound.org

Engineered by Jim Ruberto
Produced and narrated by Zachary Patten
All photos by Erik Petersen, including the image from within the larch-lined walls of the Olivier Music Barn at Tippet Rise, above


ZACHARY: The evolution of music doesn’t happen by accident. To move from one era to the next is to find common values, use them as stepping stones, and organically develop them into something new. Claude Debussy helped music evolve into the 20th century. In “The Joyous Island,” performed by Pedja Muzijevic, Debussy guides the listener from the well-known diatonic scale to the more contemporary whole-tone scale. The diatonic scale contains a collection of intervals that give a familiar sense of tension and resolution, but the whole tone scale is ambiguous, unresolved, and, at the time, more unusual to a public audience. Musicologist Jim Samson says, “Debussy achieves this transition by employing the Lydian mode.” It’s not by chance this ancient Greek scale has a pivotal common tone, a link between the diatonic and the whole tone. Debussy knew this and it was an inspired solution.

Pedja Muzijevic performs in the Olivier Music Barn at Tippet Rise

Similarly predicated on stepping stones is the evolution of music recordings; gradual technological advances with the occasional new beginning arising from a fundamental breakthrough. Classical music engineers have always been at the forefront of audio technology, using the latest tools to capture the warmth of an instrument’s tone, the brilliance of a concert hall’s acoustics, and the depth of a composer’s emotion.

At Tippet Rise, there’s a recording studio where we hope to continue to transform music performances into high-resolution audio recordings. The mission to capture great instruments in a great space is timeless, but the recording techniques and technology are new and state of the art. High-resolution recordings from the summer concerts and sessions are freely available on our website for your enjoyment. We’d like to talk about how they’re made, how technology changes the way we share music, how it changes us and gives us more immersive ways of experiencing this music, in this episode of the Tippet Rise Podcast.

The tools and techniques we use to record music continue to evolve. Listening to recorded music from the 1920s through the 1950s meant selecting a record, placing it on a turntable, and carefully guiding the stylus into the groove. As the stylus moves through the record’s serrated cuts, it vibrates. These vibrations move the attached magnet through a wound wire coil, and whenever a wire coil is subjected to a change in a magnetic field, it produces a voltage, an electrical signal.

The physical phenomenon at the core of analog audio is magnetism: the attraction and repulsion of electrical charge. In the 1960s and 70s, these principles carried over from vinyl to cassette tapes.

Cassettes store sound on magnetic tape. The tape material is a type of plastic, originally coated in ferric oxide. The coating is important because it’s ferromagnetic, which means that when it’s placed in a magnetic field, the small magnetic particles on the tape can be reoriented. As the tape moves across the cassette head, electrical signals organize the particles and imprint patterns onto the tape. And when someone plays a cassette tape, the tape’s patterns generate a small signal in the cassette head’s coil. That current gets amplified, sent to speakers, and becomes sound waves in the air.

These are two examples of analog recording, essential to how we captured and shared music, but there was a big problem.

MONTE NICKLES: Analog recording is noisy from the get-go. And it was just not well suited for classical music at all, which is extremely dynamic music; it gets very quiet and it gets extremely loud.

ZACHARY: That’s Monte Nickles, the Audio Engineer, Video and Technical Systems Manager at Tippet Rise.

MONTE: Noise was the enemy in early recordings. That’s why if you listen to anything, even in the 70s, stuff was still noisy, they were just really good at hiding it and getting rid of it. But, you know, anything from the 60s and 50s, and good lord, everything earlier than that is just noisy.

ZACHARY: Most of the music recorded at Tippet Rise comes from the lineage of Western classical music. Historically, this music was presented in reverberant spaces by musicians playing an array of instruments: winds, brass, percussion, and strings, each one a novel invention and a feat of physics.

In the early 1900s, symphonies and operas were performed in exquisite spaces, but if you wanted to hear this music, you had to be there in person. For most, those experiences were not accessible. Thanks to advances in recording technology, coupled with the ambition to translate these performances into recordings, orchestras in Philadelphia and Boston worked with engineers from the Victor Talking Machine Company to make some of the first orchestral recordings. You’re probably familiar with Victor’s logo, a fox terrier staring into the bell of a gramophone. But you’re probably even more familiar with the noise of that recording era.

MONTE: There was no way around it. You just had to live with it, that was just how it was. Noise was always there but it drove people nuts. They wanted to get rid of the noise and they wanted beautiful stuff. They wanted to hear what they were hearing in the hall but you couldn’t ever actually do that because of the noise of devices.

ZACHARY: Imagine the challenge of recording music that had been refined for fifty, one hundred, or more years by musicians who had performed it for decades, not to mention the sheer size of an orchestra. Capturing all of that with new and fledgling technology was, for some, exciting, but many didn’t see the need to record music when you could simply hear it in person.

Music was immensely popular in the home at that time. Printed sheet music, like this Chopin Nocturne No. 8, performed by Ingrid Fliter, sold millions of copies, and many companies produced home pianos of all shapes and sizes. To think about a family bonding through listening and playing chamber music together seems idyllic today, and maybe recording technology, even though it makes music more accessible, had something to do with that change in family custom. But, things do change. How technology changes and how it changes our lives is always a compromise.

MONTE: You know these analog recording systems are extremely limited in certain ways. They’re extremely good, but they are, essentially, a device that is nothing but compromise. They spent seventy-plus years on that, getting all the compromises between tape, bias, and head-alignment to get the noise of tape down as low as possible; all these noise suppression systems that came out for tape machines and various types of add-ons to tape machines as well. Analog electronics have lots of restrictions compared to what you can manipulate in a digital realm. But when we get to that point of digital audio being a real thing that could actually happen, definitely classical recording engineers jumped on that really quickly because they love the quiet.

ZACHARY: Those seventy-plus years were filled with small but steady improvements that helped audio engineers capture more signal and less noise. But in the 1990s, there was a fundamental shift from analog to digital. In the way that magnetism is at the core of analog audio, digital audio is concerned with resolution: the measurement of detail in a copy of something, in this example, a sine wave.

MONTE: The easiest way to talk about it is a pure sine wave. You take a one-kilohertz tone, which means it’s one thousand cycles per second. So, if you start at zero and you go all the way up to the amplitude, we’ll call it ten - we’ll say zero to ten, and then back to zero. Then, down to negative ten, and then back up to zero, that’s one cycle. So if you do one cycle per second, that’s one hertz. If you’re doing one thousand cycles per second, that’s a one-kilohertz pitch or tone.

Sample rate is a measurement of a continuous signal, audio, in this case, many times a second. So, a CD is at 44,100 samples per second. A 1K tone is only happening one thousand cycles per second, or a thousand times up and down. If you cut that up into 44,100 different little pieces, just sliced it up and put it all together, you would come out with a perfect picture. Now, it’s dangerous to talk about, and this has been tricky, because a lot of people talk about sample rate and they’ll say it’s cutting up the sound like little dice. But that’s not necessarily true. What it’s doing is taking a measurement: at this point, this amplitude is happening at this moment in time. And then at the next sample, this amplitude is happening in the signal, and it keeps going like that. So, think about these little dots on a graph. If you graphed out every little measurement you took in those 44,100 samples, you would basically end up with a perfect 1K tone.
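Monte’s dot-on-a-graph description can be sketched in a few lines of code. This is purely an illustration (a Python sketch, not any software used at Tippet Rise): it measures the amplitude of a one-kilohertz sine wave 44,100 times in one second, exactly the kind of measurement he describes.

```python
import math

SAMPLE_RATE = 44_100  # CD quality: 44,100 measurements per second
FREQUENCY = 1_000     # a 1kHz test tone: 1,000 cycles per second
AMPLITUDE = 10        # Monte's zero-to-ten amplitude scale

# Each sample is one measurement: "at this moment in time,
# this amplitude is happening."
samples = [
    AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(SAMPLE_RATE)  # one second of audio
]

print(len(samples))         # 44100 measurements in one second
print(round(max(samples)))  # 10, the top of the zero-to-ten cycle
```

Plot those 44,100 dots on a graph and, as Monte says, you end up with a picture of a perfect 1K tone.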

ZACHARY: Whereas analog transforms sound waves into electrical signals that are imprinted on vinyl or tape, digital receives the electrical signal from a microphone, a converter transforms the signal into ones and zeroes, and a copy is made, all via the sampling method Monte just described. If samples are the number of measurements taken per second, we also need to know the accuracy of each measurement. So if quantity is the Sample Rate, quality is the Bit Depth.

MONTE: Bit Depth is associated with the sample. When you take a measurement, you’re associating amplitude with that sample. The Bit Depth determines the amplitude values that you can record for each sample. Simple Bit Depth stuff is 8 bits. Everybody knows what 8-bit audio sounds like, all you have to think about is super classic video games.

Those kinds of sounds are all 8-bit audio. CDs use a bit depth of 16, which means you have 16 bits of amplitude; at roughly 6dB per bit, that gives you around 96dB of headroom, and it allows you to assign 65,536 values of amplitude to one sample.

Sunset’s rose and golden glow over the Absaroka-Beartooth Wilderness south of the art center

ZACHARY: One way to think about this is to imagine a color that you want to copy exactly. 8 bits is 2 to the 8th power, which gives us 256 options to match the color. Like Monte said, a CD is 2 to the 16th power, or 65,536 values. The more values we have, the more accurately we can match the original. And bit resolution increases exponentially, so each added bit quickly yields a much higher resolution.
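The arithmetic behind the color analogy is easy to verify. Here is a minimal sketch (illustrative Python, using the rough rule of thumb of about 6dB per bit that Monte mentions):

```python
# Each added bit doubles the number of amplitude values (2 ** bits)
# and adds roughly 6dB of dynamic range.
for bits in (8, 16, 24):
    values = 2 ** bits
    approx_db = bits * 6  # rule of thumb: ~6dB per bit
    print(f"{bits:>2}-bit: {values:>10,} values, ~{approx_db}dB")

# Prints:
#  8-bit:        256 values, ~48dB
# 16-bit:     65,536 values, ~96dB
# 24-bit: 16,777,216 values, ~144dB
```

The 16-bit row matches the CD figures above; each additional 8 bits multiplies the number of values by 256.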

MONTE: 24-bit is what is considered high resolution. DVDs, for instance, and Blu-rays, they’re all 24-bit, 48K sample rate. 24-bit gives you 144dB of signal-to-noise. That’s a huge amount of amplitude range. A jet engine’s SPL is close to 144dB, so it’s a lot of headroom, and it’s 16,777,216 values.

ZACHARY: Having a decibel range that wide means that we can record, undistorted, some of the loudest sounds on earth all the way down to the quietest discernible sounds. We can also limit any introduced noise, and we have almost 17 million possibilities to more closely match the amplitude. The higher the sample rate and bit depth, the more accurate our copy and the higher the resolution, but the term “high-resolution” is a moving target.

MONTE: When we went from analog to digital, I think most people would’ve called that version of digital, whatever it was, high resolution because it was the highest digital thing that you could do. The more samples you have, the more you can do things to it in post. So, when we’re denoising somebody who coughed in the middle of a concert or dropped their program book, the more samples you have, the better you can actually take that noise out, opposed to something at a lower resolution. That’s kind of a byproduct. The main reason that we’re trying to do that during the recording process is, essentially, we’re trying to capture as much detail and, you could even think about it as a search for capturing that extra bit of human emotion.

ZACHARY: Sharing emotion is at the heart of all of this; it’s perhaps the reason to make the instruments, invent the recording equipment, and build the acoustic spaces.

The voice was one of the first ways composers shared emotion. As time progressed, we made tools that could replicate the emotional resonance of the voice, but could also go beyond its reaches.

Vadim Gluzman and Johannes Moser perform beneath the Domo at Tippet Rise

In 16th-century northern Italy, a few families of luthiers had been crafting instruments for generations. Like today’s recording equipment, the violin evolved as a tool to capture the inner reaches of emotion and convert them into sound. The violin channels that emotional resonance in the music of Sofia Gubaidulina.

“Rejoice!,” performed by Vadim Gluzman and Johannes Moser, emanates from the inner reaches of the far-out. Sofia says, “It’s a metaphor for the transition into an ‘other’ reality through the juxtaposition of normal sound and harmonics. The possibility for string instruments to derive pitches of various heights at once and at the same place on the string can be experienced as the transition to another plane of existence, and that is Joy.”

So, whether it’s pressing down or lightly touching the string, the way a musician alters their touch changes the physical and metaphorical essence of the note. If the intention of our touch changes its meaning, how does that extrapolate to our other senses? And if we use our senses to understand the world, how does our intention change our worldview?

This kind of transfigurative thought is imbued in Sofia’s music, and you can certainly enjoy her music without knowing it, but her program note invites us deeper into her world. It illuminates the piece and the way she thinks about music, and it advocates for the deeper bonds between us.

Performers and audio engineers strive to translate this emotional resonance and musical metaphor. But to produce an experience where we can really listen and think about its meaning, to make a recording capable of casting a spell over you, is a challenge. And the audio engineer’s mission is to make an extremely detailed reproduction of an analog experience.

MONTE: (We are) very much trying to reproduce an analog experience. If you look at a sound wave and the way it’s just literally the particles in the air fluctuating at a specific frequency, that vibrates your eardrum, and then your brain interprets that into an electric pulse, that’s how you hear sound. That’s basically what we’re doing with a microphone. The microphone picks up the vibrations in the air and then it gets amplified by a mic-preamp, and then a computer converts that into ones and zeroes.

ZACHARY: While the digital ones and zeros don’t themselves contain emotion, there is a relationship between their quantity, quality, and how precisely they replicate the performance.

MONTE: From a technical standpoint, digital is far superior for multiple reasons, but one of the main reasons is that it is an exact replication of whatever you’re putting into it, that’s what you’re getting out. You’re not getting any coloration from the kind of tape you’re using or the preamp cards in the tape machine, or the output cards of the tape machine, nothing. It’s just what’s going into the computer, it’s getting translated into digital information that is exactly what you put in.

ZACHARY: This brings up a point of deviation in terms of recording aesthetics, goals, and how the tools are used.

MONTE: Being very broad here, but there are two schools of recording in this regard. One, is you want this colored sound. That’s why you pick a tube microphone and it goes into a Neve preamp, and you’re using a tube compressor to lightly compress it, adding all this harmonic distortion going through it - and then you’re recording that. You know, that’s not how that person really sounds. You’re enhancing how that person sounds in a very specific way. That’s opposed to using a transformerless microphone, which has very low distortion, into a transformerless mic-preamp, which is extremely clean, straight into a converter. That’s a really clean recording setup because you’re trying to actually capture what is happening on the other side of that microphone exactly as it sounds. You either go extremely honest, where you’re trying to capture what it is, or you’re trying to capture something and make it sound even better through different means.

ZACHARY: With analog, each piece of gear imparts a certain character on the signal. How an engineer orders the gear and blends their characteristics is part of the limitless potential of analog recording. The way an engineer keeps track of the signal as it flows through the gear is called signal flow. And, once an engineer establishes the order of each piece of gear, that’s called the signal chain.

MONTE: If you think about what a signal chain was in the 70s, or even the 60s and 50s, you had a tube microphone or a ribbon microphone going into a tube preamp, probably, or a class A or class AB style mic preamp. Which would then, probably, go through a giant analog console which had a bunch more transformers, op-amps, and circuitry in it. And then, it would go to a tape machine, which itself had its own analog circuitry, and then onto tape, which was an analog medium. Then, you would play it back through all of this analog gear again. So, you had this huge chain of basically analog voltage being transferred between all of these different devices, and every little thing changes the sound.

ZACHARY: Over decades, the character of analog recording equipment had become just as much a part of the music as the instruments themselves. When the earliest digital systems came out, they were perhaps not the kind of advancement people had hoped for: the magic of analog circuitry had been removed, which made the sound a little too honest.

MONTE: It was so honest that people didn’t like it because we became so accustomed to this “warmth” as people like to describe analog gear, compared to digital which was suddenly raw and true. Because converters, the device that takes the analog signal from a microphone and a microphone preamplifier and converts it into literal ones and zeroes, into digital, those units have a sound. And, early converters didn’t really sound that great. Because, again, real-world electronics have limitations.

ZACHARY: Necessity is the mother of invention. To think about how many people across the globe were inventing new solutions for more immersive audio experiences is staggering, and this is happening all the time, especially right now. We shouldn’t take for granted, though, how recording technology changes, the people who create that change, or the way it changes us as well.

MONTE: With time, people adjust, the equipment gets better, and the converters got way better. It’s almost taken for granted at this point and people just expect it to work, and do its job really well. It’s kind of amazing when you think about it. You build this recording chain and then you know that the device that is recording it, which is digital, is capturing that recording chain’s sound that you’re hoping sounds great.

ZACHARY: For years the trend had been to improve the quality of recorded audio, and get closer to that in-person transcendent experience. And then, there was a pivotal public shift of values, ignited by one question: what if one thousand songs, your entire music library, could fit in your pocket? The answer to this question not only changed how much music we could listen to, but also how we listened.

MONTE: And we kind of went backward there when mp3s came out. Those work on basically the principle of masking, where the encoder looks for common frequencies that overlap, extracts the things that you can’t hear, and keeps the loudest thing that you can hear. That’s why mp3 files are so small: it’s literally removing a lot of data. And people loved that, because small files are great, and back in the day storage was really expensive.

ZACHARY: We know that digital makes a copy of the analog signal, turning it into ones and zeroes, but those copies now need to be stored somewhere, and so the issue becomes how and where to store all that information.

MONTE: Storage was still a huge thing in early digital systems. To have a hard drive that could keep up with multiple tracks of digital, back in the day, was insanely expensive. Again, technology advanced, hard drives got cheaper and more reliable. The cheaper storage gets, the more it makes sense to have larger file types. People started doing things in high resolution more often. With a higher resolution recording, you basically expand the frequency range that you are capturing.
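Monte’s point about storage can be made concrete with a little arithmetic. This sketch (an illustration in Python; the figures assume uncompressed stereo PCM) compares one minute of CD-quality audio with one minute at a 384kHz sample rate and 32-bit depth:

```python
def bytes_per_second(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> int:
    # Uncompressed PCM stores sample_rate measurements per second,
    # each bit_depth bits wide, for every channel (8 bits per byte).
    return sample_rate_hz * bit_depth * channels // 8

cd_minute = bytes_per_second(44_100, 16) * 60      # CD: 44.1kHz / 16-bit
hires_minute = bytes_per_second(384_000, 32) * 60  # 384kHz / 32-bit

print(cd_minute / 1e6)     # ~10.6 MB per stereo minute
print(hires_minute / 1e6)  # ~184.3 MB per stereo minute
```

At roughly seventeen times the data per minute, it’s easy to see why higher resolutions had to wait for cheaper, faster storage.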

ZACHARY: In other episodes, we’ve talked about overtones, the vertical stack of frequencies that are part of any sound or note. Some overtones we can hear, like these showcased by Jennifer Frautschi and Ben Beilman in Bartók’s violin duo.

Ben Beilman and Jennifer Frautschi

The almost metallic sound comes from attaching a mute to the bridge, which decreases the amplitude of the fundamental and reveals those higher overtones. But there are many, even higher, overtones our ears can’t hear. Remember, a CD is 44,100 samples per second. It’s not by chance that 44.1kHz is directly linked to human hearing. This relationship was quantified in the mid-1900s and is called the Nyquist Theorem.

MONTE: The Nyquist theorem states that in order to capture whatever frequency you want, you have to sample at twice that frequency, which is how you ended up with 44.1kHz. The human ear hears between 20Hz and 20kHz. If you look at a 96kHz sample rate, that means that you should be able to capture and reproduce frequencies up to 48kHz.
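The Nyquist relationship Monte describes is a simple factor of two, sketched here as an illustration (Python):

```python
# Nyquist: the highest frequency a digital system can represent
# is half of its sample rate.
def nyquist_limit_hz(sample_rate_hz: int) -> float:
    return sample_rate_hz / 2

for rate in (44_100, 96_000, 384_000):
    print(f"{rate:,}Hz sampling captures up to {nyquist_limit_hz(rate):,.0f}Hz")

# Prints:
# 44,100Hz sampling captures up to 22,050Hz
# 96,000Hz sampling captures up to 48,000Hz
# 384,000Hz sampling captures up to 192,000Hz
```

Note how 44.1kHz lands just above twice the 20kHz ceiling of human hearing, while a 384kHz rate reaches all the way up to 192kHz.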

ZACHARY: We know we can hear up to 20kHz, and yet Monte just said a 96k sample rate can capture frequencies up to 48kHz, 28,000 hertz higher than our absolute maximum. And there are many overtones above that. A good question right now is: what is the point of capturing sound that high?

MONTE: Now, a lot of people will argue there’s no point because people can’t hear 20kHz anyway, so why does filtering those upper frequencies really matter? But that’s one of our arguments: yes, you can’t necessarily hear 20kHz directly, or even 40kHz, but you can tell something is different.

We’re recording at 384kHz, which means the captured sound is all the way up to 192kHz. This is insanely high and no human being on planet earth can hear that high, but that’s not why we’re doing it, it’s about capturing something else up there because you can experience that or feel that. That’s really why we’re trying to do that.

ZACHARY: All tools have limitations, and yet they enhance our senses and allow us to understand our environment more profoundly. At every moment, all around us, are frequencies we will never know are there and will never consciously experience. The sound we capture up to 192kHz reaches into the ultrasound range that a bat experiences, and a moth eludes the bat because it can sense frequencies roughly 100kHz higher than that. A pigeon can sense one half of one hertz, which alerts it to thunderstorms far off in the distance. Even though we can’t hear them, these infra- and ultrasound frequencies help to define our environment.

Hearing is a gift, but consider how much we miss because of our own limitations. Music, and recording music, isn’t just about the notes; it’s also about capturing the unheard overtones within the notes and their interaction with the natural environment. This broader theme carries across all parts of Tippet Rise, whether it’s our impact on the ecology, a sculpture’s placement on the land, or the recording of audio: being consciously aware of our place in our environment, and of how we interact with it, is the point. Whether it’s our environment or a recording, we are part of an ensemble greater than one, or, as Monte says, there’s no such thing as a soloist.

MONTE: Because you’ve got whatever you’re recording plus the room. Those are the two big characters to me all the time. And then you peel back that next layer which is artist, instrument, space, microphone. It’s kind of like the holy grail of recording. You need a great artist, you need a great instrument, you need a good space, and then you need a good device to capture your three main characters as best as possible.

ZACHARY: We spoke about how analog gear changes the sound and also adds aesthetic character in some music, but with this music, the artist, instrument, and space are the three primary colors from which an entire color palette is created. By recording it in this transparent way, we want to make this music accessible while at the same time not imparting anything that obscures the intention.

MONTE: We want our signal chain to be as invisible as possible so that the performer, the instrument, the room, and the interaction between those three things are the sound and the color that we are capturing. Not my microphone, not my converters, nothing in between them. We’re trying to be as pure as possible to what the interaction between those three characters is.

ZACHARY: How incredible is it that the synergy between those three characters becomes the transcendent Joy in Gubaidulina’s Rejoice or the bliss in Debussy’s Joyous Island. High-resolution audio offers a more immersive replication of this synergy, and you can still fit it in your pocket, listen to it anywhere, and have your own unique experience.

The progression of emotion takes on many forms, from a composer’s thought, a performer’s touch, a hall’s resonance, and an engineer’s signal. The way we record and share these emotions continues to improve. But even though digital sampling is happening 384,000 times per second, and even though a 32-bit depth offers over 4 billion amplitude values for each sample, recreating the perfect resolution may always elude us because, at the end of the day, digital is quantizing nature’s infinity.

MONTE: It’s amazing what we can do with technology to try to recreate music and sound. But there is no “perfect” in electronics. The gear is there, but it’s still about trying to capture the human essence of the recording. If you are in the Olivier Concert Hall and you are standing twenty feet away from one of the grand pianos somebody is playing, and then you go up to the studio, even with immersive playback, it’s still a very different experience, because microphones are not our ears.

ZACHARY: Engineers have imperfect and manufactured electronics to chase what very well may be an unattainable goal. But, as history has shown, the tools and technology will surely continue to evolve.

Pianist Jenny Chen