Empowering the Next Generation of Women in Audio

Join Us

The Psychoacoustics of Modulation

Modulation is still an impactful tool in Pop music, even though it has been around for centuries; there are well-known key changes in many successful Pop songs of recent decades. Modulation, like much of tonal harmony, involves tension and resolution: we take a few uneasy steps toward the new key and then settle into it. I find that 21st-century modulation serves more as a production technique than the compositional technique it was in early Western European art music (a conversation for another day…).

 Example of modulation where the same chord exists in both keys with different functions.

 

Nowadays, it often occurs at the start of the final chorus of a song, supporting a Fibonacci-sequence-style placement of the climax and marking a dynamic transformation in the story of the song. Although more recent key changes can feel like a gimmick, they remain effective. Instead of exploring modern modulation from the perspective of music theory, however, I want to look into two specific concepts in psychoacoustics, critical bands and auditory scene analysis, and how they function in two songs with memorable key changes: “Livin’ On A Prayer” by Bon Jovi and “Golden Lady” by Stevie Wonder.

Consonant and dissonant relationships in music are represented mathematically as integer ratios; however, we also experience consonance and dissonance as neurological sensations. To summarize: when a sound enters our inner ear, a mechanism called the basilar membrane responds by oscillating at different locations along the membrane. This mapping, called tonotopicity, is preserved in the auditory nerve bundle and essentially helps us identify frequency information. The frequency information derived by the inner ear is organized through auditory filtering that works like a series of band-pass filters, forming critical bands that distinguish the relationships between simultaneous frequencies. To review: two frequencies within the same critical band are experienced as “sensory dissonant,” while two frequencies in separate critical bands are experienced as “sensory consonant.” This is a very generalized version of the theory, but it essentially describes how the nearby partials of intervals like minor seconds and tritones interfere with each other within the same critical band, causing frequency masking and roughness.

 

Depiction of two frequencies in the same critical bandwidth.

 

Let’s take a quick look at some important critical bands during the modulation in “Livin’ On A Prayer.” The song is in the key of G (392 Hz at G4) but changes at the final chorus to the key of Bb (466 Hz at Bb4). There are a few things to note in the lead sheet here. The key change is a difference of three semitones, and the tonic notes of the two keys sit in different critical bands: G4 in band 4 (300–400 Hz) and Bb4 in band 5 (400–510 Hz). Additionally, the chord leading into the key change is D major (294 Hz at D4), with D4 in band 3 (200–300 Hz). Musically, D major’s strongest relationship to the key of Bb is as the dominant chord of G, the minor sixth in the key of Bb; its placement makes sense because the earlier choruses start on the minor sixth in the key of G, which is E minor. Even though D major has a weaker relationship to Bb major, which kicks off the last chorus, D4 and Bb4 are in different critical bands, and played together they would function as a major third and create sensory consonance. Other notes in those chords share a critical band: F4 is 349 Hz and F#4 is 370 Hz, placing both frequencies in band 4; played together they would function as a minor second and cause sensory roughness. There are a lot of perceptual changes in this modulation, and while breaking down critical bands doesn’t necessarily reveal what makes this key change so memorable, it does provide an interesting perspective.
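The frequency bookkeeping above can be sketched in a few lines of Python. This is a rough illustration rather than a perceptual model: the band edges are the classic Zwicker approximations of the critical bands, and the note frequencies come from the standard equal-temperament formula.

```python
import bisect

# Approximate critical-band (Bark) edges in Hz, after Zwicker; the numbering
# matches the article (band 4 = 300-400 Hz, band 5 = 400-510 Hz).
BARK_EDGES = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270,
              1480, 1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300,
              6400, 7700, 9500, 12000, 15500]

def note_freq(midi_note, a4=440.0):
    """Equal-temperament frequency of a MIDI note number (A4 = 69)."""
    return a4 * 2 ** ((midi_note - 69) / 12)

def critical_band(freq_hz):
    """1-indexed Bark band containing freq_hz."""
    return bisect.bisect_right(BARK_EDGES, freq_hz)

G4, Bb4, D4, F4, Fs4 = 67, 70, 62, 65, 66  # MIDI note numbers

# The two tonics land in different bands (sensory consonance)...
print(critical_band(note_freq(G4)), critical_band(note_freq(Bb4)))  # 4 5
# ...while F4 and F#4 share band 4 (sensory roughness).
print(critical_band(note_freq(F4)), critical_band(note_freq(Fs4)))  # 4 4
```

Running this confirms the band assignments quoted in the paragraph above: G4 (392 Hz) falls in band 4, Bb4 (466 Hz) in band 5, D4 (294 Hz) in band 3, and F4/F#4 collide in band 4.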

A key change is more than just consonant and dissonant relationships, though, and the context around the modulation tells us a lot about what to expect. This relates to another psychoacoustic concept called auditory scene analysis, which describes how we perceive auditory changes in our environment. Auditory scene analysis has many elements, including attention feedback, localization of sound sources, and grouping by frequency proximity, all of which contribute to how we respond to and understand acoustic cues. I’m focusing on the grouping aspect because it offers information on how we follow harmonic changes over time. Gestalt principles like proximity and good continuation help us group frequencies that are similar in tone, near each other, or that match the expectations set by what has already happened. For example, when a stream of alternating high and low notes is played at a fast tempo, their proximity in time is prioritized, and we hear a single stream of tones. As the stream slows down, however, the grouping priority shifts from closeness in timing to closeness in pitch, and we hear two separate streams of high and low pitches.

 Demonstration of “fission” of two streams of notes based on pitch and tempo.

 

Let’s look at these principles through the lens of “Golden Lady,” which modulates repeatedly at the end of the song. As the song refrains about every eight measures, the key changes upward by a half step (semitone) to the next adjacent key. This happens several times, and each time the last chord in the old key before the modulation is the parallel major seventh of the upcoming minor key. While the modulation moves upward by half steps, however, the melody generally moves downward by half steps, opposing the direction of the key changes. Even with all of these competing motions happening at this point in the song, we’re able to follow along because we have eight measures to settle into each new key. The grouping priority is on the frequency proximity within the melody rather than the timing of the key changes, which makes it easier to follow. Furthermore, because there are multiple key changes, the principle of good continuation lets us anticipate the next modulation from the context of the song and the experience of the previous ones. Again, auditory scene analysis doesn’t directly explain everything about how modulation works in this song, but it gives us additional insight into how we absorb the harmonic changes in the music.
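For reference, the arithmetic behind those upward half-step modulations is simple: in equal temperament, each semitone multiplies every frequency by the twelfth root of two, about 1.0595, so each key change nudges the whole pitch world up roughly 5.9%. A minimal sketch (note values here are illustrative, not taken from the actual chart):

```python
SEMITONE = 2 ** (1 / 12)  # twelfth root of two, ~1.0595

def modulate(freq_hz, semitones):
    """Frequency after shifting by a (signed) number of semitones."""
    return freq_hz * SEMITONE ** semitones

# Each eight-measure refrain lifts the tonic one semitone while the melody
# drifts down by semitones; twelve upward shifts would reach the octave.
tonic = 440.0
for _ in range(12):
    tonic = modulate(tonic, 1)
print(round(tonic, 3))  # 880.0
```

The same function with a negative argument models the melody’s descending half steps, which is why the two motions exactly mirror each other in ratio terms.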

Building a Library From Home

A COLLABORATIVE POST BY TESS FOURNIER – SUPERVISING SOUND EDITOR, BOOM BOX POST

As a sound editor, having a well-rounded library is very important. Some of you might be lucky enough to have a library provided by your company, while others might be wondering where the heck to even start. There are plenty of great libraries on the web that you can purchase or download for free with no recording effort at all, but there will also be things that you need to record yourself. A good place to start is by recording small handheld props. We recently found ourselves needing these types of recordings for a new series. Check out some helpful tips below!

For this blog, I’d like to specifically use the veggies/fruits I recorded as our focus. The main benefit of recording food is being able to snack while you’re doing it. Am I right?

IMG_0211.jpg

You can’t tell me that picture doesn’t make you a little hungry! Alright, let’s break these recordings down into four primary points:

Recording Quality

There are many things to take into consideration when recording sound effects, such as your meters, how far you are from the microphone, the noises happening in the room where you are recording, etc. For an in-depth look at some recording tips, please check out this blog Jacob did! In my case, it was really hot the day I recorded these veggies, but I turned my A/C off for the sake of the recording! I also unplugged my refrigerator because it was making too much noise. Maybe sweating while recording will make you feel like you are working harder too!

Variety

Make sure you have a good amount of variety with whatever you’re recording. You don’t want to reach the editing stage of the recordings, wish you had more variations, and have to go back to step one and record again. For these veggie recordings, not only did I gather a lot of different foods to prep, I also made sure to vary how I cut them: short, long, sawing motions, ripping, etc.

IMG_0205.jpg

Save the Extras

I didn’t intend on recording and labeling knife-downs and carrots dropping, but they were a by-product of my main recording. Rather than just deleting them, I cleaned and labeled those as well because you never know when they’ll come in handy!

Clean and Properly Label

Lucky for you, Jacob loves to write blogs about recording sound effects! So, check out this blog he did to see how to edit these files. If you are too lazy to click the link, I will give you a short rundown:

Clean up the recordings as best you can by removing background noise. Cut out any excess you don’t need (like me picking up a new potato or coughing between takes). Group like sounds together (sawing cuts in one file, ripping in another, etc.). Label properly: for these, I labeled them with the type of food and how it was handled/cut (cut, rip, saw, etc.).

Recording sound effects can be both fun and stressful at times. My best advice is to make sure you are having a good time and come into it prepared. Start brainstorming ideas, make a list of things to record, and then get at them all at once! I’d say it is better to record too much than not enough. Make sure to record PLENTY of food… so you won’t be hungry for the other recordings.

You can read the original blog and more from Boom Box Post Here

 

How to Push your Sound Design to the Max

While Not Stepping on your Mixer’s Toes

We get a lot of questions about how much you should do in your sound design pass versus how much to leave to your mixer. So, although I’ve written a few posts on this topic (such as Whose Job Is It? When Plug-in Effects Are Sound Design vs. Mix Choices and Five Things I’ve Learned About Editing from Mixing), I thought it was time for another brush-up.

As some of you may know, I’m a long-time sound designer and supervising sound editor, but I just started mixing a few years ago. While attending mixes as a supervisor definitely gave me a window into best practices for sound design success (aka how to make sure your work actually gets played…audibly), I got a whole new vantage point for what to do (and not do) once I started having to dig through sound design sessions myself! So, while I am a fledgling mixer and you should always speak directly to the mixer working on your project before making decisions or altering your workflow, I feel that I am qualified to share my personal preferences and experiences. Take this as the starting point for a conversation—a window into one mixer’s mind, and hopefully, it will spark great communication with your own mixer.

Below, I’m sharing a few key concepts that seem to cause confusion in the “who does what” debate. I’ve personally come across these questions and situations, and I’m hoping to spare you the headache of redoing work due to a lack of communication. Here they are!


EQ

What Not to Do

I was recently the supervisor and mixer on an episode that took place almost entirely underwater. My sound effects editor had EQ’ed every single water movement, splash, drip, etc. that occurred underwater with a very aggressive low-pass filter. While this made total sense from a realism standpoint, it completely demolished any clarity we might have had and muddied up the entire episode. It was very hard to locate the sound effects in the space and even harder to get them to cut through the dialogue, let alone the music! Unfortunately, this was done destructively with AudioSuite on every single file (and there were probably thousands of them). Every single one had to be recut by hand from the library, an insanely arduous task.

What to Do Instead

I’m going to say this once, and then please just assume that this is step one for everything below (I’ll spare you the boredom of reading it over and over): STEP ONE IS ALWAYS ASK YOUR MIXER BEFORE YOU START APPLYING ANY EQ.

I think you can safely assume that there’s, at best, an 80% chance that your mixer does not want you to EQ anything. Ever. So always ask before you destructively alter your work. With EQ’ing it’s especially important that the right amount is added given what else is happening in the scene, and clients often have opinions about how much is too much for their sense of clarity in the mix.

The better way to approach EQ is to ask your mixer (again, asking because this may require a change to their mix template which requires their approval) if it would work to place any FX that you think should be EQ’ed on a separate food group with no other FX mixed in. Having all underwater movements on one set of tracks clearly labeled UNDERWATER FX gives your mixer the ability to quickly EQ all of them with just a few keystrokes and knob turns. And then he or she can also very easily change that EQ to mesh well with the music and dialogue or to satisfy a client note. It also means that he or she can put all of those lovely water effects on one VCA and ride that if the clients ask for any global changes to the volume of water FX. Win-win!

The same is true for any batch EQ’ing of FX. I like the “split onto a separate food group of clearly labeled tracks” method for other things, too, like: action happening on the other side of a door or wall, sound effects coming from a TV or radio, or any other time that you would imagine EQ should be applied to a large selection of files. So yes, split it out to make it easy and obvious for your mixer, but no, don’t do it yourself.


Reverb

What Not to Do

Don’t add any environmental reverb. Just don’t do it. Keep in mind that your sound design doesn’t exist in a vacuum. It’s layered on top of dialogue, music, BGs, ambiances, and probably more! What sounds right as a reverb setting to you while working only on your FX definitely won’t be the right choice once everything else has been placed in the mix.

What to Do Instead

Let your mixer decide. If you do it as an effect for one singular moment (I’m thinking something like a hawk screech to establish distance), only process individual files and also provide a clearly marked clean version in the track below. That way, your mixer has the option to use your version, or take it as an indication of what the clients like and redo it with the clean one. But before you go ahead and use reverb as an effect in your sound design, always check in with your supervisor first. He or she will be able to draw on all of their experience on the mix stage, and will be able to let you know if it’s a good idea or not. From my experience, the answer is that it’s almost always NOT a good idea.


Trippy FX

What Not to Do

Say you’re designing the sound for a super trippy sequence, like the POV shot of a drugged-up character. You may be tempted to add a phaser, some crazy modulation, or another trippy overall effect to the whole sequence. Don’t do it! That takes all of the fun out of your mixer’s job and, furthermore, really ties his or her hands. They need the ability to adjust any effects to achieve mix clarity once the music and dialogue are added. So it’s always best to let them choose any overall effects!

What to Do Instead

Go for it with weird ambiences, off-the-wall sound choices, and totally different BGs to make it feel like you’re really inside the character’s head. Feel free to process individual files if you think it really adds something—just be sure to also supply the original muted below and named something obvious like “unprocessed.”


Panning

What Not to Do

Don’t spend hours panning all of your work without first speaking to your mixer. Your understanding of panning may be wildly different from what he or she can actually use in the mix. I’ve seen a lot of editors pan things 100% off-screen to the right or left, and I just have to redo all of it. Panning isn’t too difficult or complicated, but it’s really best to be on the same page as your mixer before you start.

What to Do Instead

Some mixers love it if you help out with panning, especially if they’re really under the gun time-wise. Others prefer you leave it to them—so always ask first. If you want to be sure that your spaceship chase sequence zooms in and around your clients during your FX preview, just make sure to ask your mixer first about his/her panning preferences. How far to the L/R do they prefer that you pan things? What about how much into the rears? Do they mind if you do it with the panning bars, or will they only keep it if you use the 5.1 panner/stereo pot?


LFE Tracks

What Not to Do

Don’t cut your LFE tracks while listening on headphones. You may not realize that what you’re putting in the LFE should actually go in your SFX tracks because it is low in pitch but not in that rumble-only range. It’s nearly impossible to cut an LFE track without a subwoofer, since true LFE sweeteners in your library will look like they have a standard-sized waveform but will sound like almost nothing in headphones!

What to Do Instead

Keep in mind that any files that live on the LFE tracks are going to be bused directly to the low-frequency effects channel, which reproduces roughly 3–120 Hz. That is super low! So only cut sound effects whose content lives in that frequency range, or whose low end is the only part you care to hear. Any other mid-range “meat” in the sound will be lost in the mix.

 

Whose Job is It? When Plug-in Effects are Sound Design vs. Mix Choices.

We’ve reached out to our blog readership several times to ask for blog post suggestions.  And surprisingly, this blog suggestion has come up every single time. It seems that there’s a lot of confusion about who should be processing what.  So, I’m going to attempt to break it down for you.  Keep in mind that these are my thoughts on the subject as someone with 12 years of experience as a sound effects editor and supervising sound editor.  In writing this, I’m hoping to clarify the general thought process behind making the distinction between who should process what.  However, if you ever have a specific question on this topic, I would highly encourage you to reach out to your mixer.

Before we get into the specifics of who should process what, I think the first step to understanding this issue is understanding the role of mixer versus sound designer.

UNDERSTANDING THE ROLES

THE MIXER

If we overly simplify the role of the re-recording mixer, I would say that they have three main objectives when it comes to mixing sound effects.  First, they must balance all of the elements together so that everything is clear and the narrative is dynamic.  Second, they must place everything into the stereo or surround space by panning the elements appropriately.  Third, they must place everything into the acoustic space shown on screen by adding reverb, delay, and EQ.

Obviously, there are many other things accomplished in a mix, but these are the absolute bullet points and the most important for you to understand in this particular scenario.

THE SOUND DESIGNER

The sound designer’s job is to create, edit, and sync sound effects to the picture.


BREAKING IT DOWN

EQ

It is the mixer’s job to EQ effects if they are coming from behind a door, are on a television screen, etc.  Basically, anything where all elements should be futzed for any reason.  If this is the case, do your mixer a favor and ask ahead of time if he/she would like you to split those FX out onto “Futz FX” tracks. You’ll totally win brownie points just for asking.  It is important not to do the actual processing in the SFX editorial, as the mixer may want to alter the amount of “futz” that is applied to achieve maximum clarity, depending on what is happening in the rest of the mix.

It is the sound designer’s job to EQ SFX if any particular elements have too much/too little of any frequency to be appropriate for what’s happening on screen.  Do not ever assume that your mixer is going to listen to every single element you cut in a build, and then individually EQ them to make them sound better.  That’s your job!  Or, better yet, don’t choose crappy SFX in the first place!

REVERB/DELAY

It is the mixer’s job to add reverb or delay to all sound effects when appropriate in order to help them to sit within the physical space shown on screen.  For example, he or she may add a bit of reverb to all sound effects which occur while the characters on screen are walking through an underground cave.  Or, he or she may add a bit of reverb and delay to all sound effects when we’re in a narrow but tall canyon.  The mixer would probably choose not to add reverb or delay to any sound effects that occur while a scene plays out in a small closet.

As a sound designer, you should be extremely wary of adding reverb to almost any sound effect.  If you are doing so to help sell that it is occurring in the physical space, check with your mixer first.  Chances are, he or she would rather have full control by adding the reverb themselves.

Sound designers should also use delay fairly sparingly.  This is only a good choice if it is truly a design choice, not a spatial one.  For example, if you are designing a futuristic laser gun blast, you may want to add a very short delay to the sound you’re designing purely for design purposes.

When deciding whether or not to add reverb or delay, always ask yourself whether it is a design choice or a spatial choice.  As long as the reverb/delay has absolutely nothing to do with where the sound effect is occurring, you’re probably in the clear.  But, you may still want to supply a muted version without the effect in the track below, just in case, your mixer finds that the affected one does not play well in the mix.

COMPRESSORS/LIMITERS

Adding compressors or limiters should be the mixer’s job 99% of the time.

The only instance in which I have ever used dynamics processing in my editorial was when a client asked to trigger a pulsing sound effect whenever a particular character spoke (there was a visual pulsing to match).  I used a side chain and gate to do this, but first I had an extensive conversation with my mixer about if he would rather I did this and gave him the tracks, or if he would prefer to set it up himself.  If you are gating any sound effects purely to clean them up, then my recommendation would be to just find a better sound.

PITCH SHIFTING

A mixer does not often pitch shift sound effects unless a client specifically asks that he or she do so.

Thus, pitch shifting almost always falls on the shoulders of the sound designer.  This is because when it comes to sound effects, changing the pitch is almost always a design choice rather than a balance/spatial choice.

MODULATION

A mixer will sometimes use modulation effects when processing dialogue, but it is very uncommon for them to dig into sound effects with this type of processing.

Most often this type of processing is done purely for design purposes, and thus lands in the wheelhouse of the sound designer.  You should never design something with unprocessed elements, assuming that your mixer will go in and process everything so that it sounds cooler.  It’s the designer’s job to make all of the elements as appropriate as possible to what is on the screen.  So, go ahead and modulate away!

NOISE REDUCTION

Mixers will often employ noise reduction plugins to clean up noisy sounds.  But, this should never be the case with sound effects, since you should be cutting pristine SFX in the first place.

In short, neither of you should be using noise reduction plugins.  If you find yourself reaching for RX while editing sound effects, you should instead reach for a better sound! If you’re dead set on using something that, say, you recorded yourself and is just too perfect to pass up but incredibly noisy, then by all means process it with noise reduction software.  Never assume that your mixer will do this for you.  There’s a much better chance that the offending sound effect will simply be muted in the mix.


ADDITIONAL NOTES

INSERTS VS AUDIOSUITE

I have one final note about inserts versus AudioSuite plug-in use.  Summed up, it’s this: don’t use inserts as an FX editor/sound designer.  Always assume that your mixer is going to grab all of the regions from your tracks and drag them into his or her own tracks within the mix template.  There’s a great chance that your mixer will never even notice that you added an insert.  If you want an effect to play in the mix, then make sure that it’s been printed to your sound files.

AUTOMATION AS EFFECTS

In the same vein, it’s a risky business to create audio effects with automation, such as zany panning or square-wave volume automation.  These may sound really cool, but always give your mixer a heads up ahead of time if you plan to do something like this.  Some mixers automatically delete all of your automation so that they can start fresh.  If there’s any automation that you believe is crucial to the design of a sound, then make sure to mention it before your work gets dragged into the mix template.

Glossary of Sound Effects (Part 1)

One of the major hurdles of becoming a sound effects editor is learning your library.  This means knowing what keywords to search in a given situation as well as building up a mental catalog of “go-to” sounds.

While it is always a good idea to start by looking at the picture and then thinking of descriptive words to search, it helps if you know which words will yield the best results.  This is where onomatopoeia enters the scene.  Onomatopoeia is defined as the formation of a word from a sound associated with what is named (e.g., cuckoo, sizzle).  Following is a beginner’s guide to onomatopoeic sound effects search words.  Some of these terms can be found in any dictionary, and some are unique to sound effect library naming conventions.


crackle – a sound made up of a rapid succession of slight cracking sounds. Also, look up: sizzle, fizz, hiss, crack, snap, fuse, fuze, burn, fire

crash – a sudden loud noise as of something breaking or hitting another object. Also, look up: bang, smash, crack, bump, thud, clatter, clunk, clang, hit

body fall – a sound made by a body falling onto a hard surface. Also, look up: body hit, land

boing – the noise representing the sound of a compressed spring suddenly released. Also look up: bounce, bouncing, bonk, jaw harp

boom – a loud, deep, resonant sound. Also look up: explosion, slam, crash, drum, taiko, rumble

buzz – a humming or murmuring sound made by or similar to that made by an insect. Also look up: hum, drone, insect, neon, fluorescent

chomp – munch or chew vigorously and noisily.  Also, look up: munch, crunch, chew, bite

click – a short, sharp sound as of a switch being operated or of two hard objects coming quickly into contact. Also, look up: clack, snap, pop, tick, clink, switch, button

creak – a harsh scraping or squeaking sound. Also look up: squeak, grate

flutter – the sound of flying unsteadily or hovering by flapping the wings quickly and lightly.  Also look up: beat, flap, quiver, wing

glug – the sound of drinking or pouring (liquid) with a hollow gurgling sound. Also look up: pour, drain

groan – a low creaking or moaning sound when pressure or weight is applied to an object OR an inarticulate sound in response to pain or despair. Also, look up: creak, squeak; moan, cry, whimper

honk – the cry of a wild goose. Also look up: gander, goose

ahoogah – the sound of a particular type of horn.  Also look up: model a, model t, antique horn, bulb horn

jingle – a light ringing sound such as that made by metal objects being shaken together.  Also, look up: clink, chink, tinkle, jangle, chime, sleigh bells

neigh – a characteristic high-pitched sound uttered by a horse. Also, look up: whinny, bray, nicker

poof – used to convey the suddenness with which someone or something disappears.  Also look up: puff

pop – a light explosive sound. Also look up: bubble, cork, jug, thunk

puff – a short, explosive burst of breath or wind.  Also, look up: poof, gust, blast, waft, breeze, breath

rattle – a rapid succession of short, sharp, hard sounds.  Also look up: clatter, clank, clink, clang

ribbit – the characteristic croaking sound of a frog. Also look up: frog, toad, croak

quack – the characteristic harsh sound made by a duck. Also look up: duck, mallard

rustle – a soft, muffled crackling sound like that made by the movement of dry leaves, paper, cloth, or similar material.  Also look up: swish, whisper, movement, mvmt

rumble – a continuous deep, resonant sound.  Also look up: boom, sub, earthquake

scream – a long, loud, piercing cry expressing extreme emotion or pain.  Also, look up: shriek, screech, yell, howl, shout, bellow, bawl, cry, yelp, squeal, wail, squawk

screech – a loud, harsh, piercing cry.  Also, look up: shriek, scream, squeal

skid – an act of skidding or sliding.  Also look up: slide, drag

slurp – a loud sucking sound made while eating or drinking.  Also, look up: suck, drink, straw, lick

splash – a sound made by something striking or falling into liquid.  Also, look up: spatter, bespatter, splatter, bodyfall water

splat – a sound made by a wet object hitting a hard surface.  Also, look up: squish

splatter – splash with a sticky or viscous liquid.  Also, look up: splash, squish, splat, spray

squawk – a loud, harsh, or discordant noise made by a bird or a person.  Also, look up: screech, squeal, shriek, scream, croak, crow, caw, cluck, cackle, hoot, cry, call

squeak – a short, high-pitched sound or cry.   Also, look up: peep, cheep, pipe, squeal, tweet, yelp, whimper, creak

squish – a soft squelching sound. Also look up: splat, splatter

swish – a light sound of an object moving through the air.  Also look up: whoosh, swoosh

swoosh – the sound produced by a sudden rush of air. Also look up: swish, whoosh

thunk – the sound of a cork being pulled out of or placed into a bottle or jug.  Also, look up: pop, cork, jug

twang – a strong ringing sound such as that made by the plucked string of a musical instrument, a released bowstring, or a ruler held steady on one end and plucked from the other.  Also, look up: ruler twang, boing twang, ripple, pluck, violin, guitar

whip crack – the loud and sudden sound of a whip moving faster than the speed of sound, creating a small sonic boom. Also look up: bullwhip, whip, swish, whoosh, swoosh

whoosh – a heavy sound of an object moving through the air. Also look up: swish, swoosh

woof – the sound made by a barking dog.  Also look up: bark, howl, yelp, whimper, dog

yelp – a short sharp cry, especially of pain or alarm.  Also, look up: squeal, shriek, howl, yowl, yell, cry, shout

zap – a sudden burst of energy or sound.  Also look up: laser, beam, synth, sci-fi
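The “also look up” cross-references above lend themselves to a quick scratch script when you’re mining a library. Here is a toy sketch; the `ALSO_LOOK_UP` map and `expand_query` helper are made up for illustration, seeded with a few entries from this glossary:

```python
# Hypothetical synonym map seeded from the glossary above.
ALSO_LOOK_UP = {
    "crackle": ["sizzle", "fizz", "hiss", "crack", "snap", "fuse", "burn", "fire"],
    "boing": ["bounce", "bouncing", "bonk", "jaw harp"],
    "whoosh": ["swish", "swoosh"],
}

def expand_query(word):
    """Return the search word plus its glossary cross-references, if any."""
    word = word.lower()
    return [word] + ALSO_LOOK_UP.get(word, [])

print(expand_query("whoosh"))  # ['whoosh', 'swish', 'swoosh']
```

Feeding each expanded term to your library’s search in turn surfaces the files a single literal keyword would miss.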

See the original post here.

 

Five Things I’ve Learned About Editing from Mixing

I have been a sound effects editor and supervising sound editor for a long time now.  But I have recently begun mixing a television series here at Boom Box Post.  I am enjoying how much I learn every time I sit down at the board, and I am by no means ready to start spouting mixing advice to anyone.  But I can say that I’ve come to appreciate certain editorial practices (and absolutely abhor others!) from my new vantage point as a mixer.  Things that I thought of as a nice way to make your mixer happy have turned out to be practices that are essential to starting my mixing day right.  Seriously, these five things can mean hours added to or saved from my predub day.  So, here are five editorial practices that I’ve realized are absolutely essential to a smooth mix.

#1: Stick to the template.

In short, don’t add tracks!  Adding tracks to an established template causes numerous headaches for your mixer during setup.  Every time a track is added, your mixer needs to adjust his or her inserts, sends, groups, VCAs, markers, and more.  That is a ton of extra work, and if any one of those hasn’t been checked and adjusted before the mix begins, issues can crop up along the way without him or her realizing it.

Adding more tracks to your session to squeeze in those 18 dirt debris sound effects that you added to a car peel out is a huge no-no.  I am especially annoyed when I see that tracks were added just to put one or two sound files on them in the entire session.  Your mixer or supervising sound editor has thoroughly thought through the needs of the project before creating your template.  So, if you feel that you need more space to spread out, you probably need to re-think the way you’re approaching your builds (see number four below…).  But, if having more tracks seems absolutely essential to you, make sure that you reach out to your mixer ahead of time and clear the change with him or her.

#2: Cut foley in perspective.

Foley is often one of the things that makes a project really come to life.  It truly helps the action to feel more real.  But, it’s also something that is often mixed so that we feel it instead of truly recognizing it with our ears.  Your mixer probably won’t be using the footsteps to make a sonic statement during a big monologue or music montage.  But, it does often make sense to feature them when characters are moving in or out of a scene.  It helps the audience to track where they are located in the story and aids the flow between shots.

In these instances, the panning is often at least as important as the volume.  And in order to pan people walking, for instance, off screen-right and then immediately into the next shot from screen-left, the foley needs to have been cut for perspective!  I’ve had numerous foley editors say that they’re uncomfortable cutting in perspective because they want to give the mixer options.  But, you’re truly not giving your mixer options.  Instead, you’re tying his or her hands (or, rather, making them need to scoot over to the computer and recut it themselves when they’d rather focus on mixing)!

But perspective cutting for foley can be a bit confusing.  So, let me break it down for you: you should cut your foley in perspective if there is a drastic change in volume necessary, or if characters need to be panned in or out of a shot.  Panning within a shot does not require perspective changes (e.g., a character walks around a room during the same shot).  Zooming in does not require a perspective change (this can be done with a fader move and is not a change between shots).  Here are some examples that would require perspective changes:

  • Perspective change for volume: We start on a long shot of a character dancing on stage, shot from deep in the audience.  Then, we cut into an extreme close-up on his feet.  Bam!  Perspective change!
  • Perspective change for panning: Two characters are standing around talking, and they realize they’re late for an important meeting.  They run off screen-right.  Then, we immediately cut to them running into a different room from screen-left.  Give that sucker a perspective change!

#3: Color code your builds.

This is not by any means an industry standard, but I seriously appreciate it when I see it!  Want your mixer to love you now and forever?  Then color-code your builds!  I would recommend color-coding the regions that make up each BG location the same color each time that location is used, as well as color-coding the regions within each FX build.

For BGs this is helpful to your mixer because he or she can easily copy and paste the volume automation onto each instance of the same location in just minutes!  This is such a great time-saver for getting to a reasonable starting point on BG balance.

For FX, make sure to color-code your regions according to what the build is covering on-screen rather than the kind of elements they are.  That way, it’s easy for your mixer to identify what to adjust by just glancing at your session (without necessarily soloing every single file).  For example, when cutting a door open, you may have a handle turn, a wood door open, and a long creak.  Color-code all three of those suckers brown!  Extra points for choosing a color that makes sense for the thing you’re covering (blue for water, brown for a wooden door, yellow for a yellow remote-control truck, etc.).  And make sure that each time that same door opens, you color-code it the same way.  By doing that, your mixer can easily find a balance he or she likes and then paste it onto every instance.  That makes adjusting it to work in a specific scene so much easier.

#4: Choose fewer, better FX.

Let me say this: more is not better.  Not by a long shot.  Yes, in a lot of cases, you should cut more than one layer to get a textured and full sound without tying the hands of your mixer.  But, you also don’t want to veer too far in the opposite direction and cut way too many elements.  Sound effects editorial is an art form, and like any true art, it takes forethought and vision to do it well.  That means deciding which layers you want before you start digging through your library, and then editing yourself to create the most robust but clear and simple build possible.  I never start pulling sounds without a game plan, no matter how simple the build might seem.

In general, I like to stick with a rule of three: choose three files max that cover three frequency ranges (low, medium, and high) and also three different sonic textures.  For example, when cutting a steady forest fire, I would choose a low-end rumble element to give it size, a mid-range thick whooshy element (maybe with a little phase for motion) for fullness, and a high-frequency steady crackle to give it motion, life, and to help it poke through the mix without needing to turn the volume way up.  Without a game plan, I might be left throwing in a dozen elements because they seem like good choices.  But with a little forethought, I can easily cut down the number of elements I use and make each one count.  Honestly, it also makes things sound a lot better.

Sticking with the rule of three also helps your mixer!  After all, he or she can easily grab up to four faders (three is even easier!) and adjust the volume without needing to create a group and then disable it after making the adjustment.  So, there’s basically no reason not to cut like this.  It helps you and your mixer to work better, smarter, and faster!

#5: Use clip gain instead of volume automation to balance FX builds.

So, you’ve toiled over creating the perfect balance between your elements in a single build.  And mixers love it when you do some of the work for them!  They’ll definitely want to adjust that balance to make it work within the mix, but having a solid starting point is key.  The problem with adjusting your balance during editorial with volume automation is that as soon as your mixer grabs the faders, that balance is completely erased and replaced with whatever his or her fingers do.  So, do yourself and your mixer a favor and balance within builds using clip gain.  That lets your mixer have all faders sitting at zero (and not popping up and down all over the place during playback), and thus each adjustment he or she makes is on top of what you’ve already accomplished.

A few caveats on this:

  • Make sure to use volume automation rather than clip gain when adjusting volume for perspective changes.  Always first balance your build with clip gain, then cut it in perspective and make any volume changes for perspective with the volume bars.
  • Do not ever clip gain a sound down to the point of being inaudible.  That makes it impossible for your mixer to turn up the volume with a fader without seriously compromising the signal-to-noise ratio.  Furthermore, if you find yourself turning anything down that much, just delete it!  You obviously don’t actually like it, and you need the space so you can follow #1 and #4!  Take the opportunity to edit yourself!
  • Do not clip gain BGs.  Use volume bars instead to adjust the balance.  This is a good practice for two reasons: First, BGs often need to be super low in volume, and if you use clip gain, your mixer won’t be able to turn them up enough with the fader.  Second, since these are long, steady elements, it’s nice to see where the volumes are on the faders rather than having them all at zero.  But mostly, this is a signal-to-noise ratio issue.
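The arithmetic behind this tip can be sketched in a few lines. This is a hypothetical illustration, not how any particular DAW is implemented internally: the point is simply that clip gain is baked into the clip, the fader multiplies on top, so a fader move by the mixer scales the whole build while preserving the editor's internal balance.

```python
def db_to_linear(db):
    """Convert a gain in decibels to a linear multiplier."""
    return 10 ** (db / 20.0)

def render_clip(samples, clip_gain_db, fader_db):
    """Clip gain is baked into the clip; the fader multiplies on top."""
    g = db_to_linear(clip_gain_db) * db_to_linear(fader_db)
    return [s * g for s in samples]

# Two elements of one build, balanced 6 dB apart via clip gain,
# with faders sitting at zero (unity).
low_rumble = render_clip([1.0, 1.0], clip_gain_db=0.0, fader_db=0.0)
high_crackle = render_clip([1.0, 1.0], clip_gain_db=-6.0, fader_db=0.0)

# The mixer rides the whole build up 3 dB on the faders:
# the editor's 6 dB internal offset is preserved.
low_rumble_up = render_clip([1.0, 1.0], clip_gain_db=0.0, fader_db=3.0)
high_crackle_up = render_clip([1.0, 1.0], clip_gain_db=-6.0, fader_db=3.0)

ratio_before = low_rumble[0] / high_crackle[0]
ratio_after = low_rumble_up[0] / high_crackle_up[0]
print(round(ratio_before, 6) == round(ratio_after, 6))  # → True
```

Had the same balance been drawn as fader automation instead, the mixer's first fader grab would overwrite it, which is exactly the problem described above.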
     

Film Score Mixing with a Team

I was recently at the Banff Centre for Arts and Creativity in Canada to supervise the film score mix of a three-part documentary series (by filmmaker Niobe Thompson and music by composer Darren Fung). We needed to mix over 100 minutes of music – nearly 200 tracks of audio – in about a week. Luckily, we had a large crew available (over ten people and three mix rooms), so we decided to work in an unusual fashion: mixing all three episodes at the same time.

Normally you have one mixer doing the whole score working in the same mix room. Even if he/she mixes on different days (or has assistants doing some of the work), chances are the sound will be pretty similar. It’s a challenge when you have ten mixers with different tastes and ears working in different rooms with different monitors, consoles, control surfaces, etc. What we decided to do was work together for part of the mix to get our general sound then let each group finish independently.

The tracks included orchestra, choir, organ, Taiko drums, percussion, miscellaneous overdubbed instruments and electronic/synth elements. It was recorded/overdubbed the week prior at the Winspear Centre in Edmonton, Alberta. The Pro Tools session came to us mostly edited, so the best performances were already selected, and wrong notes/unwanted noises were edited out (as much as possible). Our first task was to take the edited session and prepare it to be a film score mix session.

When mixing a film score, the final music mix is delivered to a mix stage with tracks summed into groups (called “stems”). For this project, we had stems for orchestra, choir, organ, taiko, percussion, and a couple of others. Each stem needs its own auxes/routing, reverb (isolated from other stems), and record tracks (to sum each of the stems to a new file). I talk about working with stems more in this blog: Why We Don’t Use Buss Compression.
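The stem idea above can be sketched as a simple summing map. Everything here is illustrative (the track names, the stem grouping, and the plain-list "samples" are assumptions, not the session's actual routing); it only shows the structural point that each stem is an isolated sum of its own member tracks.

```python
from collections import defaultdict

# Hypothetical track-to-stem routing for illustration.
STEM_ROUTING = {
    "violins": "orchestra",
    "brass": "orchestra",
    "soprano": "choir",
    "taiko_low": "taiko",
}

def sum_to_stems(tracks):
    """tracks: {track_name: [samples]} -> {stem_name: [summed samples]}."""
    stems = defaultdict(list)
    for name, samples in tracks.items():
        stem = STEM_ROUTING[name]
        if not stems[stem]:
            stems[stem] = list(samples)  # first member initializes the stem
        else:
            stems[stem] = [a + b for a, b in zip(stems[stem], samples)]
    return dict(stems)

tracks = {
    "violins": [0.25, 0.5],
    "brass": [0.5, 0.25],
    "soprano": [0.2, 0.2],
}
print(sum_to_stems(tracks)["orchestra"])  # → [0.75, 0.75]
```

In the real session, each stem additionally carries its own auxes, reverb, and record tracks, so effects on one stem never print into another.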

Once the routing and tech were set, we worked on the basic mix. We balanced each of the mics (tackling a group at a time – orchestra, choir, organ, etc.), set pans and reverbs, and set sends to the subwoofer (since it’s a 5.1 mix for film). In film score mixing, it’s important to keep the center channel as clear as possible. Some TV networks don’t want the center channel used for music at all (if you’re not sure, ask the re-recording mixer who’s doing the final mix). From there, our strategy was to polish a couple of cues that could be used as a reference for mixing the rest. Once our composer gave notes and approved those cues, we made multiple copies of the session file – one for each team to focus on their assigned portion of the music.

Every project has its unique challenges even if it’s recorded really well. When you’re on a tight time schedule, it helps to identify early on what will take extra time or what problems need to be solved. Some parts needed more editing to tighten up against the orchestra (which is very normal when you have overdubs). When the brass played, it bled into most of the orchestra mics (a very common occurrence with orchestral recording). There are usually some spot mics that are problematic – placed too close or too far, picking up unwanted instrument noise, or catching too much bleed from neighboring instruments. Most of the time you can work around it (masking it with other mics), but it may take more time to mix if you need to feature that mic at some point.

What really makes a film score mix effective is bringing out important musical lines. So, the bulk of the mix work is focused on balance. I think of it like giving an instrument a chance to be the soloist, then go back to blending with the ensemble when the solo line is done. Sometimes it’s as easy as bringing a spot mic up a few dB (like a solo part within the orchestra). Sometimes it takes panning the instrument closer to the center or adding a bit of reverb (to make it feel like a soloist in front of the orchestra). Mix choices are more exaggerated in a film score mix because ultimately the score isn’t going to be played alone. There’s dialog, sound fx, Foley, and voice-over all competing in the final mix. On top of everything else, it has to work with the picture.

Film score mixing is sort of like mixing an instrumental of a song. The dialog is the equivalent of a lead vocal. I encourage listening in context because what sounds balanced when listening to the score alone may be different when you listen to your mixdown 10 dB lower and with dialog. Some instruments are going to stick out too much or conflict with dialog. Other instruments disappear underneath sound fx. Sometimes the re-recording mixer can send you a temp mix to work with, but often all you have is a guide track with rough mics or temp voice-over. Even with that, you can get a general idea of how your mix is going to sound and can adjust accordingly.

One unique part of this project was the mix crew was composed of 50% women! Our composer, Darren Fung, put it well when he said, “This is amazing – but it should just be normal.”

Equus: Story of the Horse will debut in Canada in September 2018 on CBC TV “The Nature of Things.” In the US, Equus will air on PBS “Nature” and “Nova” in February 2019. It will also air worldwide in early 2019.

Score Mixers: Matthew Manifould, Alex Bohn, Joaquin Gomez, Esther Gadd, Kseniya Degtyareva, Mariana Hutten, Luisa Pinzon, Jonathan Kaspy, Aleksandra Landsmann, Lilita Dunska

Supervising mixers: James Clemens-Seely and April Tucker

Karol Urban – Sound and Storytelling

Finishing the Mix

Karol Urban CAS MPSE (Grey’s Anatomy, New Girl, Station 19, Band Aid, Breaking 2, #Realityhigh) re-recording mixer, has built a diverse list of mix credits spanning work on feature films, TV series (scripted and unscripted), TV movies, and documentaries over the last 18 years. Describing herself as “part tech geek and creative film nerd” she enjoys using her language skills to work in both English and Spanish.

Karol holds a BS from James Madison University in Audio Post Production from the School of Media Arts and Design, is on the Board of Directors for the Cinema Audio Society (CAS), is co-editor of the CAS Quarterly Magazine, and serves on the Governor’s Peer Group for Audio Mixing for the Television Academy.

While she is incredibly passionate about telling stories through sound, technology, and the art of the craft, her favorite aspect of her position is “the team sport of filmmaking and television production.”

Her enthusiasm and energy for the job help her retain a high work ethic. She is known for being a hard worker in and out of the studio.

What was your path getting into sound?

I was sight impaired as a child and benefited greatly from surgery. I still, however, have problems with depth perception and naturally gravitate toward sound as my primary sense of distance and spatial location.

I studied dance, piano, and voice as a child and went to the Governor’s School for the Performing Arts for high school. It is a public, county-supported, audition-based high school with a focused curriculum on the arts.  I was fortunate to compose and record in my first recording studio there for the first time at the age of 13. I have been hanging out at one studio or another ever since.

Truthfully, I never wanted to perform. But sound and storytelling always fascinated me and held my attention steadfast.  And I have always obsessed over the movies and loved narrative television. When I discovered you could work in sound, not necessarily music, and in sound for picture, I knew what I was going to do with my life.  Every big move in my life I have made since has been to earn the next opportunity to tell a story through sound for picture.

I graduated high school a year early and went on to Virginia Tech at 17 where I took a lot of audio engineering classes. I transferred to James Madison University and majored in the School of Media Arts and Design with an audio concentration and minored in the music industry.  I left school with the clear goal of becoming a re-recording mixer.

If you had to pick your favorite type of content, role or project what would it be and why?


The collaborative aspect of what we do is, to me, the most precious; as a result, I love to be a part of larger teams as the dialog and music re-recording mixer. While it can be fun to do a single-person mix, especially if you have a very creative and collaborative producer or director, I am truly in heaven when I have a creative team behind me.  Bring in the party. I love to craft the story as a collective.

I don’t really have a favorite genre. I love action and sci-fi, and I adore thrillers. Police procedurals are fun. But comedy and drama can be amazing too. I really enjoy the diversity of genres. It widens my toolset. Basically whatever genre I haven’t mixed in a while is my current favorite. I really do love it all.

The creative problem solving and technical aspect of cleaning and repairing dialogue is enjoyable, but I also love the subtle use of dynamics, reflections, and frequency details in dialogue mixing which can help you feel as if you are eavesdropping on a secret or hearing someone lose their composure. It is sneaky in that good dialogue mixing is rarely noticed while it is being most effective.

I also studied classical piano, voice, and composition for many years. I love music. Being able to craft the music into the final mix is a real honor and joy.

That’s why the dialogue/music re-recording chair feels like home.

A lot of people in post-production sound specialize in a single role (like dialog mixer, sound designer, etc.). How has it helped your career to not focus on one particular niche? Or, do you think there is an expectation now to be versatile?

I began my career in the mid-Atlantic region of the East Coast.  People who work in post sound there are often asked to perform all the roles (Foley recording, narration/ADR recording, Foley/ADR cueing, dialogue editorial, sfx editorial, and re-recording mix).  Even if you were not working on a project as a single person, you and your team would often change roles to suit the schedule or client preferences. It is a different market for sure.

But, when I first got to Los Angeles, folks would advise that being a jack of all trades does not make you qualified to be a master of any one. When I looked inside myself, I found that I was truly a dialogue-centered individual and macro thinker. I am an extrovert.  I also love the subjective discussions and explorations that occur on the dub stage. All these aspects helped me excel as a dialogue and music re-recording mixer.

But, over the last eight years, I’ve noticed that the ability to diversify is becoming more valued in LA. In this way, I may have chosen the perfect time to come to LA: with a clear, specific goal for where to center my focus, but enough diverse experience and knowledge in multiple fields of post sound to be usefully skilled. I gladly switch roles when needed; a change is often good for perspective.

Can you talk about transitioning from working in DC to Los Angeles? Since you didn’t have a job lined up in LA, how did you decide it was time to move?

I am a true believer in the concept that knowledge is power. I had reached a point in DC where I was feeling a little stagnant.  I wasn’t learning as much, wasn’t experimenting as much, and wasn’t challenged enough. I was struggling to find opportunities where I could make myself wonderfully uncomfortable with a challenge. I was searching for mentors.

I found a short, small contract in LA and left a job of 10 years with crazy benefits, paid vacation, and a very decent salary to seek out the challenge. Finding a gig, even one as short as a 3-month contract, while on the other side of the country seemed like a sign.

At the time I was frightened that I wouldn’t be capable of competing in such a large and complex market.  But I knew I would never stop wondering “what if.” Once a few months passed, and I took a couple of professional punches to the face, I recognized I had learned a ton and began noticing a difference in my work. I got excited. There was no option other than success. Moving to LA has proven to be the most wonderful adventure I have ever had in my life. I love it here. I love the market, the challenges, and the ever-changing, seemingly endless possibilities. There is so much to learn and grow from here. I am grateful.

Can you walk us through an average work week for you? How many hours are you working, spending outside the studio on other work-related demands, etc.?

The amount I actually mix depends on the projects I am on. Sometimes it is 16-hour days and six-day weeks; other times it might be two days a week for 9 hours a day. Production schedules move erratically, and the day is not over when it is scheduled to end or when you are done… it is over when the client feels whole, and they are done.  My life is a continual game of scheduling Jenga. The terrain is insane. It is awesome and exhausting.

When I am not in the chair mixing, I am still working. Mixing is only part of the job.  I try to be a resource for others as much as possible. I give back to my community through volunteer service in the MPSE, CAS, & TV Academy, edit the CAS Quarterly publication, meet with industry folks new to town, and of course, establish new relationships in the community.  It is a rare day off when I don’t meet up with someone, watch a tutorial on new technology, or volunteer on a project. I keep an ear open for any industry positions available and try to recommend people in my network that I know can tackle the duties and forward their careers.  It is all-encompassing, but I love what I do and I simply never tire of the hustle. Don’t get me wrong, there are days or weeks where I am truly exhausted, but I never dream of doing anything else. I want to be the best I can be, and I feel like I have incredible joy ahead of me in that I have much more growing to do. I am not even close to done.

What are the differences between mixing documentary/reality and scripted?

Depends on your project and your client.

There are certainly workflow and logistical differences, and there also tends to be a larger expectation of detail and a desire for the school of perspective mixing in scripted media. But the core of what I do is really only made different by the client’s desires and the needs of the film/project.

I certainly will repair, clean, and fit the spec. But the true value in having a re-recording mixer is that you have a professional who is a life-listener and skilled craftsperson. We study and develop sound as a storytelling tool that can steer the minds of the viewers. I certainly have had projects of all genres that demand and expect narrative storytelling in their mix.  I have also had many projects of all genres that look to me for technical audio triage and to emulate their temp track. It is less genre-specific than project-specific.

Can you explain how a 2-person mix works?

 

Karol and Steve Urban on the movie BFFs

There are many ways to work. It depends on the team, the technology, and the project’s scheduled mix time. In the end, however, the goal is to make sound decisions and become four hands and two minds working with the singular focus of intensifying the story through sound. It really is a wonderful way to work.

On a 2-person mix, what are the challenges of working with a mixer you haven’t mixed with before?

Sometimes you don’t know the perspective or tastes of your partner when you are newly paired or the tempo at which they need to work. You have to learn the sensibility of your partner as soon as possible.  Luckily most folks who mix in multiple-seat dub stages are very collaborative and have the ability to morph to the style that works with the team and serves the director or producer. I have certainly been made aware of other ways of looking at things that ended up being the right choice for the project and client at the end of the day.  This difference of perspective can be a complexity and/or a gift.

You’ve mixed over 100 episodes of Grey’s Anatomy. What are the challenges?

We suffer from a lot of set noise, as there are a lot of busy scenes with lots of background action…IV stands, gurneys, and of course, paper medical gowns.

You have a reputation for having an incredible work ethic, drive, and energy level. How do you maintain that level of focus? How do you not burn out?

Wow.  That’s a crazy question.  It blows me away that I have a reputation at all. I just keep swimming.

I have had a very specific and pointed goal for a very long time to be a re-recording mixer. It started as soon as I knew the job existed. I knew it was what I was supposed to do. I never took a lot of electives in school or tried a lot of different things professionally because this goal was what I knew I wanted specifically.  I knew it was competitive and I knew I wouldn’t generally look like or come from the same places that a lot of my peers would. I grew up in a town that simply doesn’t have a substantial market for this craft. I knew it was a different world and I was going to have to break in.

Practically every extracurricular activity, club, or group I have participated in has been focused on trying to be in this world. Sound makes sense to me and communicating by putting people in sound spaces is pretty amazing and evocative. I am always trying to make myself worthy and valuable to the opportunity in front of me.

What skills are necessary to do your job?

You have to be at least mildly obsessed with detail, technology, and storytelling. Our jobs are not sprints; they are marathons. You will watch a reel or episode over and over and over again for days, sometimes weeks.  You must remain present and have the ability to fall back into the perspective of a first-time viewer but also switch quickly to the mind of a mixer. You have to be able to see (and feel and hear) the effect of what you are doing while also seeing possibilities.

It is also very helpful to like people and have no ego. It can be hard sometimes because you have to emotionally experience something in order to create and having that emotional response rejected can feel personal. But in the end, you are completing the vision of your director or producer and creating their film/show. You should understand that a person may be inspired by your suggestion at times but may also feel something completely different. Notes are not criticism. They are opportunities.

What are your favorite plugins?

I am loving the FabFilter Pro-Q 2 and de-esser right now.  I also love my McDSP SA-2 and NF575. I am still a sucker for Audio Ease’s Speakerphone, and PhoenixVerb is pretty amazing.

What technology are you excited about right now?

I love the new immersive formats. I really feel a naturalness when I hear an environment in Atmos.  And I love the panning precision and full-range reproduction.

What have been the challenges for you as a woman in the field?

It is getting so much better. I have definitely found myself in moments of overt creepiness and absolute inappropriateness. But as the years have gone on whether it be because we as a society are becoming more progressive, other women have paved the way, or because I have become more established, it has gotten much, much better. I just keep forging ahead. I don’t give that crazy a lot of focus. The best thing I can do for equality is to be successful as a woman and be a force for equality by treating everyone around me the way I would want to be treated.  I try to lift others up who share the love of what we do, and I take no mind in their gender, race, or creed.

I still have to discuss my gender as an anomaly from time to time, almost always on a new job, and have to occasionally educate people on my knowledge and fandom of a diverse range of genres such as action, horror, and sci-fi, because as a woman I am often thought of as strictly a romantic comedy or drama person.

But I do have to take care to go out of my way to get to know my co-workers and let them know they can be comfortable around me and that they can be confident that I am an assertive individual. People don’t walk on eggshells around me because I will let someone know if I am uncomfortable or disagree. I hold no grudges and pull no punches. I have been set straight once or twice in my life when I have said something I thought was harmless that had no presumptions behind it that accidentally affected someone in a negative way. We all need to be open to learning from one another without fear or pride. I do believe most people are intrinsically good.

It is paramount to respect your coworkers (male and female alike). While I am aware of situations through the years where I have not been hired because I am a woman or where criticism has been very blatantly gender-biased, I know I am also here in my dream job because of all the wonderful folks, the majority of whom are male, who have given me a shot, had confidence in my abilities and welcomed me into the fold.

It is a weird landscape, ladies.

What advice would you give women in our field?

Be assertive, persistent, and consistent. Respect the contributions of everyone around you from the valet service to reception to account management to your engineer. Show respect and act respectfully. Expect the same in return.  Be humble but also speak and act with confidence and kindness. Some folks really do not recognize what they are saying. Some are uncomfortable or culturally insensitive without knowledge of their actions. Ignorance does still exist. Some folks lack perspective and understanding without intending ill will.  Many people who are considered notoriously challenging that I have worked with were not an issue with me at all because if I had an issue, I stated the issue, explained my issue, asked for a change in behavior, and then dropped it from my memory and became a friend and advocate to them. And while I am not so ignorant or smug as to say it doesn’t matter what others think or do (There is real malice in the world.), I do believe social transformation happens individual by individual. We can be seeds of change by keeping our decisions untarnished by the poor actions of a few and giving each new individual in our world the opportunity to be wonderful.

I believe in equality.  I can’t wait to work in a world where we don’t have to support each other as minorities but we can just support competent, talented artists and craft people and diversity will naturally take place.

If you were to guide someone trying to get into post-production today what advice would you give? What would you advise to find work and build a career?

Don’t wait for someone to give you permission to do what you want to do.  Even if it is for little or no money, get in there. Until you have a professional-level skill to offer, you need to be doing what you can to acquire it. Participate in your community, seek mentors, seek other folks coming up, collaborate, create, rise, and lift up others. Remain open to life lessons. The universe has a lot more opportunities to reward you with when you put yourself out there and participate.


How to Communicate About Audio With Non-Audio People

The language we use to do our jobs spans a lot of areas (audio, acoustics, electronics, technology, psychoacoustics, music, film, and more). Our clients, on the other hand, may not have much language to convey what they want. The mix notes I get may be as broad as, "I don't like that" or "something doesn't feel right." My job is to fix it and deliver a product they are happy with, but how do you do that without a shared language?

Everyone has preferences for sound even if they don't have the language to convey them. It takes time to uncover these preferences, and that's part of our job. It's like a painting where you can see the outline of what to paint but don't know what color palette to use. Some people like bright colors and others prefer pastels. Some people know their favorite colors right away, and others want to watch you paint a bit, then have you change it (and maybe change it again). It takes some trial and error to discover their "color palette," but once you know it, you can make choices that will likely be in their taste (or at least close enough to have a discussion about it).

In music, this is knowing that the drummer will want less vocal in the monitor before they ask. It's knowing that the lead singer wants a slap delay on her voice on the album. In post, it's knowing that a producer wants to hear every footstep, or doesn't like a particular cymbal the composer used in the score. Having this knowledge of someone's taste builds trust because it lets them know that you understand what they want. It's what gets you re-hired and, over time, establishes you as "the" engineer or mixer for that person (or group).

Finding these preferences takes investigation. Our job, in that sense, is like a doctor's with a patient. A doctor asks a lot of questions about symptoms, recent health, and so on, because the patient doesn't have the same expertise. The approach is the same here with a mix note: "Are the guitars loud enough for you? Is it something about the dialog bothering you? When you say it doesn't feel right, is it a balance issue or a timing issue?"

Sometimes the message gets conveyed without the proper terminology. For example, non-audio people may use the word "echo" to mean reverb. One common note is that something is "too loud" or "too soft," but the real problem might be something else (is it perceived as too loud because it's bright? Is it too exposed rather than too loud?). With notes, you have to ask yourself: does this need to be taken literally, or is it an observation that might be pointing to another issue?

For example, a producer I work with likes to give me a sound design note: “Play around with it.” Does that mean he likes what’s already there but wants more? Or, does he want something totally different? I’ve learned that’s his way of saying “I don’t know what I want or like yet.” I sometimes do more than one version including one out of his normal taste (a different “color palette,” so to speak). I find that helps him define what he likes (or doesn’t) by hearing two contrasting ideas.

You can adapt to a client's strengths, too. A filmmaker I work with doesn't know audio well, but he's very good at conveying moods. We talk about the moods of the film and of specific scenes, and I interpret that into audio. He might give a note like, "I want to feel the car wreck." I know that what that means, in audio terms, is that he wants a lot of detail in the sound design, with the car crash at the forefront of the mix.

Talking about moods is a useful technique with musicians, too. Should it be intimate, polished, rough around the edges, massive? Do they like clean studio recordings, or do those make them uncomfortable? Should it feel like a private living-room performance, a rowdy bar, or a stadium? You obviously won't be adding bar patrons to a music mix, but knowing the answer will influence the approach to the mix, from vocal treatments to EQs, balances, reverbs, and effects.

Where this gets tricky is when people use words that aren't audio words at all. Sometimes we can translate or offer other words: "When you say it sounds too 'shiny,' do you mean it sounds bright or shrill? Or too clean, and you want it more gritty?" If someone is struggling to convey what they want, they might be able to think of an example from somewhere else (an album, a movie, or a YouTube video). It might be totally unrelated, but it can help you figure out what they're asking for.

The most effective way to be a good communicator with clients is to have a diverse audio vocabulary yourself. It's a great skill to be able to talk about audio using words not related to audio. You can make an exercise of this by asking, "What words could I use to explain what I hear?" Walking on leaves could be crispy, crunchy, and noisy, but it also could be like Pop Rocks, crinkling paper, or eating cereal. So, the next time a client asks you about the "gaggle of Girl Scouts"** in the mix, you'll have a better idea what they're talking about.

(**This was a real note I got from a client. The sound was actually a pan flute.)
