How to Make Tech Easier: Be Prepared

 

In my last blog, I talked about what goes into mixing a Broadway-style musical, and there’s a lot to do. For almost every production you work on, you’ll be expected to mix the show mostly line-by-line, with some dynamics and (hopefully) few mistakes, from day one. Having a smart layout for your DCAs and a clear script can be the difference between an incredibly stressful and a delightfully smooth tech process.

Once you have the script, first things first: read it. The entire way through. If you don’t have a good idea of what’s going on from the beginning, the rest of the process is going to be guesswork at best. Next, go through the script again, this time with an eye out for where scenes might go; either where a natural scene change happens in the script, or where there are more actors talking than you have faders. (The number of DCAs you’ll have is usually 8 or 12, determined by the console you’re using. DCAs are faders in a programmable bank that can change per scene so you only have the mics you need or can consolidate a group, like a chorus, down to one or two faders.)

There are two common ways of programming DCAs. The first is a “typewriter” style where you move down the faders in order for each line, and if you run out of faders, you take a cue and go back to the first fader, then repeat (i.e. 1, 2, 3, 4, 5, CUE, 1, 2, 3, etc.). This is very useful in larger scenes where characters have shorter one-off lines and you quickly move from one character to the next. The second approach is where each principal actress and actor is assigned to a constant fader (Dorothy is always on 1, Scarecrow on 2, Tin Man on 3, Lion on 4, etc.), and will always be on that fader when they have dialogue. In shows where you mostly deal with a handful of recurring characters, this is friendlier to your brain, as muscle memory brings you back to the same place for the same person each time.
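
If it helps to see the logic written out, here’s a quick sketch of the two approaches in Python. The fader count, characters, and line order are made-up examples for illustration, not from a real show:

```python
# A sketch of the two DCA approaches described above. The fader count,
# characters, and line order are invented examples, not a real show.

NUM_FADERS = 8

def typewriter(line_order, num_faders=NUM_FADERS):
    """Assign faders in order of appearance; when we run out, take a
    cue and start again from fader 1 (1, 2, 3, ... CUE, 1, 2, ...)."""
    assignments, current, next_fader, cue = [], {}, 1, 1
    for character in line_order:
        if character not in current:
            if next_fader > num_faders:   # out of faders: take a cue
                cue += 1
                current, next_fader = {}, 1
            current[character] = next_fader
            next_fader += 1
        assignments.append((cue, current[character], character))
    return assignments

# Designated-fader style: principals live on the same fader all show.
DESIGNATED = {"Dorothy": 1, "Scarecrow": 2, "Tin Man": 3, "Lion": 4}

for cue, fader, character in typewriter(
        ["Dorothy", "Scarecrow", "Tin Man", "Lion", "Dorothy"]):
    print(f"cue {cue}: DCA {fader} = {character}")
```

The trade-off is exactly what the prose says: typewriter never wastes a fader on someone who isn’t speaking, while the designated map trades efficiency for muscle memory.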

As an example, let’s say we have 8 faders for dialogue and take a look at “The Attack on Rue Plumet” from Les Mis (if you want to listen along, it’s the dialogue from the 2010 Cast album for the 25th Anniversary production):

 

A typewriter approach to mixing would assign DCAs in increasing order each time a new character speaks (first lines are highlighted):

By the time we get to Marius, we’re almost out of faders, and there’s a natural change in the scene when Thenardier’s gang runs off and Valjean enters, so it works to take a cue between those two lines and start over with the DCAs.

But Les Mis is an ensemble show that’s centered around a core group of principals, so assigning characters to designated fader numbers is another option. If we’re mapping out the entire show, we find that Valjean, as the protagonist, ends up on (1), Marius, the main love interest, on (2), and Cosette and Eponine can alternate on (3), as they interact with Marius most frequently but usually aren’t in scenes together. Thenardier could go a couple of places: he leads in scenes like “Master of the House” and “Dog Eats Dog,” but in scenes with the other principals, he typically takes a secondary role, so we’ll put him on (4) in this scene. The chorus parts, Montparnasse, Claquesous, Brujon, and Babet (first lines are still highlighted below), are easiest to put in typewriter style after Thenardier; since they only appear once or twice in the show, they don’t have a designated fader number.

The mix script for this approach would look like this:

 

Here, Thenardier (4) is still right next to his cronies (5), (6), (7), and (8), but is also right next to Eponine (3) for their bits of back-and-forth. The scene change still ends up after Marius’s line, as it’s a natural place to take it, and Cosette replaces Eponine on (3), getting ready for the next scene “One Day More,” where Marius (2) and Cosette (3) will be singing a duet, with Eponine (4) separated, singing her own part.

With this particular scene, neither approach is perfect, as all the characters have multiple lines (and not in the same order every time), but either one would be a legitimate way to set it up.

Typically, you’ll use a combination of both approaches over the course of a show, with one that you default to for scenes that could go either way, like the example. Personally, I like to use a spreadsheet where I can see the entire show and get an overview of what the mix will look like. This makes it easier to spot patterns or adjust potentially awkward changes in assignments. (The colors for major characters in the examples are just visual aids that I added for this blog.)
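
If you want to rough out that overview before building the real spreadsheet, here’s a small sketch (Python, with invented scene data standing in for a show) that prints a scene-by-DCA grid you can scan for awkward jumps in assignments:

```python
# A sketch of the spreadsheet overview described above: one row per
# scene, one column per DCA. The scene data is invented for illustration.
scenes = [
    ("Rue Plumet",   {1: "Valjean", 2: "Marius", 3: "Eponine", 4: "Thenardier"}),
    ("One Day More", {1: "Valjean", 2: "Marius", 3: "Cosette", 4: "Eponine"}),
]

faders = sorted({f for _, row in scenes for f in row})
print("Scene".ljust(14) + "".join(f"DCA {f}".ljust(12) for f in faders))
for name, row in scenes:
    print(name.ljust(14) + "".join(row.get(f, "-").ljust(12) for f in faders))
```

Reading down a column shows you instantly when a character hops faders between connected scenes, which is the kind of thing you want to catch before tech, not during it.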

For example, here’s a layout that’s mostly typewriter. Characters may stay on the same fader for connected scenes, but overall the assignments go in order of lines in a scene:

 

In another example, a core group of four actors is in almost the entire show, along with a couple of recurring supporting roles, so using a designated fader for those characters works much better. There are times when the pattern breaks for a scene or two to switch to typewriter, but largely everyone stays in the same place:

 

Once you have the DCAs planned out, you can start to format a mixing script. The first example from Les Misérables gives a basic version of that: numbers next to lines for the DCA assignments and notes for where cues will go. Eventually, you’ll also add in band moves, effect levels, and other notes.

Personally, I like the majority of my information to be in the left margin, and if I have enough time I’ll retype the script into my own format so I can mess with it as much as I want. My scripts look like this (I thoroughly enjoy color coding!):

 

Each show might have slight differences, but the broad strokes are always the same: cues are in lavender boxes with a blue border (for cues taken off a cue light, the colors are inverted, so a blue box with a lavender border), band moves are in purple, vocal verb is green, red is for mic notes as well as DCA numbers, and yellow is anything that I need to pay attention to or should check.

Here’s another example and an explanation from Allison Ebling from her script for The Bodyguard tour (she’s currently the Head Audio on the 1st National Tour of Anastasia):

 

“One is the top of show sequence which had to be verbally called and on Qlite due to the fact that it was a bit jarring for audiences. (LOUD gunshots and all the lights went off without warning, our preshow announce was played at the scheduled start and downbeat was 5 [minutes] after.) 

The other is a sequence in the second act where I took one cue with the SM, and the rest were on visual. It also has my favorite Q name ever… ‘Jesus Loves a Gunshot.’

I also like reading my script left to right, so I usually end up reformatting them that way.”

And another example and explanation from Mackenzie Ellis (currently the Head Audio on the 1st National Tour of Dear Evan Hansen):

“Here are some from my DEH tour script [Left], and some from the Something Rotten [Right] first national tour, both of which I am/was the A1 for. Both scripts were adapted from the Broadway versions, created by Jarrett Krauss and Cassy Givens, respectively. 

Notes on my formatting:

 

As you can see, there are different styles and endless ways to customize a mixing script. How you arrange or put notations in your script is purely a personal preference, and will constantly evolve as you continue to work on shows. As a note: not only should you be able to read your script, but to be truly functional, it should be clear enough that an emergency cover can execute a passable show in a pinch.

At this point, you have your script ready and a solid plan for how the show will run. If there’s still time before tech, you can start practicing. Practice boards are becoming more and more popular and are incredibly helpful to work out the choreography of a mix. Casecraft makes one that is modeled after the DiGiCo SD7 fader bank. Scott Kuker (most recently the mixer for Be More Chill on Broadway) made a custom, travel-size board for me a couple of years ago that I absolutely love. It immediately became an integral part of learning the mix for both me and my assistants!

I highly recommend getting one if your career plans involve mixing theatrical shows, but if you don’t have one, there’s the tried and true option of setting up coins to push as makeshift faders (pennies tend to be a good size, but some prefer quarters). Whatever method you use, the point is to start getting a sense of muscle memory and timing as you work through the show. It also gives you an opportunity to work through complicated or quick scenes, so you get a feel for the choreography or can even look at adjusting the DCA programming to make it easier.

After prepping a script and getting in some practice, walking up to the console in tech doesn’t seem as daunting. If you’re well prepared, you’re able to keep up and adapt to changes faster. Plus, if you’re self-sufficient at the board, your designers can trust you to mix the show and take more time to focus on their job of getting the system and the show the way they want it, which will help you in the long run.

 

Women in Sound Design

An Interview With The Only Two Women Ever Nominated for a Tony Award in Sound Design

This year, Jessica Paz made history by becoming the first woman to receive a Tony Award nomination for Sound Design of a Musical, for Hadestown. Jessica won that Tony, which also gave her the distinction of becoming the first woman EVER to win a sound design Tony in either category. Prior to this, only one other woman, Cricket S. Myers, had received a nomination, for the Sound Design of the play Bengal Tiger at the Baghdad Zoo in 2011.

The Sound Design category has existed since 2007, and in 12 years, only these two women have received nominations. As we have seen from The EQL Directory, there is no shortage of talented, professional women Sound Designers in this industry, so why is it so difficult for them to get work in Broadway theatre, and more importantly, what can we do to change that dynamic?  In an effort to gain a little more clarity on this subject, and to learn a little more about Jessica and Cricket and their work processes, I decided to go straight to the sources.

Here’s what they had to say:

Elisabeth: How long have you been a sound designer, and what first drew you to this industry?

Jessica: I have been a Sound Designer for about 15-20 years. I first started as a mixer for community theater.

Cricket: I’ve been a Sound Designer for 20 years now. I have always loved theater; I fell in love with the crazy energy backstage and knew that the theater life was for me. I didn’t find sound until much later in life, but it fits me well. I started as a physics major before switching to theater, so acoustics and all the technical side of sound come very naturally to me. But it’s the creative side of sound design that fuels me from day to day.

E:  What were some of the stepping stones that helped you move from regional theatre to Broadway/Off-Broadway?

J: I answered a job posting for a sound operator for an Off-Broadway play with LAByrinth Theater Company, which is how I met one of my mentors, who then hired me as an assistant on shows going forward.

C: I went to grad school at CalArts and graduated in 2003, and started assisting people around town. In 2004, I assisted on a show at SCR that transferred to Manhattan Theatre Club, which gave me my very first Broadway credit. I got my first BIG design (at the Mark Taper Forum) in 2006; there were four sound cues and I was SO excited and so nervous! But I developed an amazing relationship with Center Theatre Group, which led to me sending a very bold email in 2008. I saw Bengal Tiger on the list of announced shows at the Kirk Douglas Theatre, which is operated by CTG. Moises Kaufman was directing, and I emailed the production manager to let her know that if they didn’t already have a sound designer, I was REALLY interested. Sure enough, they set up a meeting with Moises, who is amazing, and I was in. The next year, the show was remounted at the Mark Taper Forum, and then in 2011, it transferred to Broadway with Robin Williams playing our Tiger.

E:  What has been your favorite design, and why?

J:  Hadestown because I feel it’s the show that helped define my style.

C: I think one of my favorite designs was for a production of Bent at the Mark Taper Forum. Moises Kaufman directed, and the focus was always on storytelling. The whole second act takes place in a concentration camp, with a giant electric fence dominating the stage. I had a series of buzzes and hums that shifted and came and went throughout the act, subtly changing the tension in the room as the characters’ stories developed. There was a lot of collaboration between the lighting designer, Justin Townsend, the scenic designer, Beowulf Boritt, and myself as we created this world and as it changed throughout the show.

E:  What’s the one piece of gear or software you can’t live without?

J: Apple MainStage

C: Well, it’s hard not to say QLab. It changed the way I design. I no longer have to mix things down in headphones, burn them onto a CD, and then hope it all sounds the same in the theater. I can leave sounds as individual files and mix them in the actual theater.

E:  What are your thoughts on the phrase “In show business, it’s all about who you know”?

J:  I think who you know is helpful, but it isn’t the only road to success.

C:  I have found that to be very true! Almost all of my work comes from word of mouth. Someone recommends me, whether it is a director, or a production manager, or another designer. Networking is a huge part of the job. Now, this goes both ways. If you are a pain to work with, or treat someone poorly or do an awful job, that gets “known” very quickly too. Be careful not to burn bridges as you go!

E:  Who are your role models?

J:  Nevin Steinberg, Abe Jacob, Mark Menard

C:  I have been so lucky to have a lot of great mentors and role models! Jon Gottlieb and Drew Dalzell were so incredibly supportive and instrumental in starting my career. I got to assist some amazing designers while working at the Mark Taper Forum, gentlemen such as Darron West, Mark Bennett, Obadiah Eaves, and Paul Prendergast. All of them became role models and were very supportive of me as I found my way in theater.

E:  Cricket, you made theatre history when you became the first woman to be nominated for a Tony for Sound Design.  Jessica, you made theatre history when you became the first woman to win a Tony for Sound Design. Can you describe the feeling you had when you found out the news?

J: I was elated, not only to have been nominated and to win, but because I hope it will inspire other women in the field.

C: I never thought it would happen! I had a friend, Brian, ask if I was going to wake up at 5:30 am to watch the announcement online, and I laughed and said there was absolutely no reason to do that. Brian stated that he would get up and watch for me. I smirked and said, “Well, call me if you hear my name.” Sure enough, at 5:38 am, my phone rings and it’s Brian! I admit I continued to check the website ALL DAY long because I was pretty sure the Tony committee would come to their senses and take the nomination away. I was designing an outdoor concert for a middle school that day, and the drama teacher spent the entire day grabbing folks and proudly declaring that she had a TONY nominee running her concert! It was surreal and exciting and amazing.

E:  In your opinion, what else needs to happen in the industry to give women in audio an even ground to stand on?

J:  I’m not sure I know the answer to that other than to keep encouraging women to show up and take a seat at the table.

C:  Producers need to take a chance. Hire someone they might not know. Directors have to ask for women designers. Designers need to hire more women as assistants and on the sound crew. And when a designer can’t do a show and is sending recommendations to the producers, include women on that list.

E:  What advice do you have for the next generation of women in audio?

J:  Show up, bring your whole self to the table, do your best work each day, keep learning. Be generous.

C:  Keep kicking ass every day. There’s absolutely no reason why those jobs shouldn’t be yours and don’t let anyone tell you anything different. Treat the folks around you with respect, and they will treat you the same way.

I also asked Cricket why she thought that in the history of the Sound Design Tony Award category, only one other woman besides her had been nominated.  She said, “Because there have only been seven women who have even designed on Broadway, designing less than 20 shows over the past 17 years. With 35-50 shows opening each year on Broadway, women Sound Designers make up a TINY percentage of the hired designers. If women aren’t given the opportunities to design the shows, then how can they get the recognition for them? Producers need to start recognizing the extraordinary talent and experience that women Sound Designers can bring to a show.”

For the past five years, Porsche McGovern has been doing a study on gender parity in theatre. To quote her, “we have been getting closer to gender parity in design in LORT theatres, albeit very slowly and with a good chunk of caveats…. The percentage of she designers in sound design positions only went up by 0.3 percent (in 2019).”

The rest of the statistics from Porsche’s study are equally alarming, and you can read all about the all too slow rise in gender parity in theatre here: Who Designs and Directs in LORT Theatres by Pronoun: 2019

If you are a Producer of theatre, I challenge you to reach outside the box for the 2020-2021 season.  Make it a personal goal to hire at least 50% of your designers and directors OUTSIDE of the “Straight white male” category, and I’m speaking to Producers of ALL theatre, not just Broadway.  Designers, follow Cricket’s advice, and staff your shows with women! Recommend each other, speak up for each other, fight for each other, because we all know that change is not going to happen without our strong voices on the forefront.  And since I can’t share it enough, here’s The EQL Directory to get you started.

 

More Than Line-by-Line

 

Going Beyond the Basics of Mixing

When I started mixing shows in high school—and I use the term “mixing” loosely—I had no idea what I was doing. Which is normal for anyone’s first foray into a new subject, but the problem was that no one else knew either. My training was our TD basically saying, “here’s the board, plug this cable in here, and that’s the mute button,” before he had to rush off to put out another fire somewhere else.

Back then, there were no YouTube videos showing how other people mixed. No articles describing what a mixer’s job entailed. (Even if there were, I wouldn’t have known what terms to put in a Google search to find them!) So I muddled through show by show, and they sounded good enough that I kept going. From high school to a theme park, college shows to local community theatres, and finally eight years on tour, I’ve picked up a new tip or trick or philosophy every step along the way. After over a decade of trial and error, I’m hoping this post can be a jump start for someone else staring down the faders of a console wondering, “okay, now what?”

Every sound design and system has a general set of goals for a musical: all the lines and music are clear, and the level is high enough to be audible without being painfully loud. Meeting these parameters makes a basic mix.

For Broadway-style musicals, we do what’s called “line-by-line” mixing. This means when someone is talking, her fader comes up and, when she’s done, her fader goes back down, effectively muting her. For example: if actresses A and B are talking, A’s fader is up for her line, then just before B is about to begin her line, B’s fader comes up and A’s fader goes down (once the first line is finished). So the mixer is constantly working throughout the show, bringing faders up and taking them out as actors start and stop talking. Each of these is called a “pickup” and there will be several hundred of them in most shows. Having only the mics open that are necessary for the immediate dialogue helps to eliminate excess noise from the system and prevent audio waves from multiple mics combining (creating phase cancellation or comb filtering which impairs clarity).
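
For anyone who thinks better in pseudocode, here’s a tiny sketch (Python, with placeholder speakers) of that pickup pattern: the incoming fader beats the line, and the outgoing fader waits for the previous line to finish:

```python
# A sketch of the pickup pattern described above: the incoming fader
# comes up just before the new line, and the old one goes out once the
# previous line has finished. Speakers here are just placeholders.
dialogue = ["A", "B", "A", "C"]

previous = None
for speaker in dialogue:
    print(f"{speaker} up")          # pickup, just before the line starts
    if previous and previous != speaker:
        print(f"{previous} down")   # previous line is finished
    previous = speaker
print(f"{previous} down")           # last line ends, mic comes out
```

Multiply that by several hundred pickups per performance and you have the mechanical core of a line-by-line mix.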

You may have noticed that I’ve only talked about using faders so far, and not mute buttons. Using faders allows you to have more control over the mix because the practice of “mixing” with mute buttons assumes that the actors will say every one of their lines in the entire show at the same level, which is not realistic. From belting to whispering and everything in between, actors have a dynamic vocal range, and faders are far more conducive than mute buttons to making detailed adjustments in the moment. However, when mixing with faders, you have to make sure that your movements are clean and concise. Constantly doing a slow slide into pickups sounds sloppy and may lose the first part of a line, so faders should be brought up and down quickly. (Unless a slow push is an effect or there is a specific reason for it; yes, there are always exceptions.)

So, throughout the show, the mixer is bringing faders up and down for lines, making small adjustments within lines to make sure that the sound of the show is consistent with the design. Yet, that’s only one part of a musical. The other is, obviously, the music. Here the same rules apply. Usually, the band or orchestra is assigned to VCAs or grouped so it’s controlled by one or two faders. When they’re not playing, the faders should be down, and when they are, the mixer is making adjustments with the faders to make sure they stay at the correct level.

The thing to remember at this point is that all these things are happening at the same time. You’re mixing line by line, balancing actor levels with the music, making sure everything stays in an audible, but not eardrum-ripping range. This is the point where you’ve achieved the basic mechanics and can produce an adequate mix. When put into action, it looks something like this:

 

 

A clip from a mix training video for the 2019 National Touring Company of Miss Saigon.

 

But we want more than just an adequate mix, and with a solid foundation under your belt, you can start to focus on the details and subtleties that will continue to improve those skills. Now, full disclosure, I was a complete nerd when I was young (I say that like I’m not now…) and I spent the better part of my childhood reading any book I could get my hands on. As an adult, that has translated into one of my greatest strengths as a mixer: I get stories. Understanding the narrative and emotions of a scene is what helps me make intelligent choices about how to manipulate the sound of a show to best convey the story.

Sometimes it’s leaving an actress’s mic up for an ad-lib that has become a routine, or conversely, taking a mic out quicker because that ad-lib pulls your attention from more important information. It could be fading in or out a mic so that an entrance or exit sounds more natural or giving a punchline just a bit of a push to make sure that the audience hears it clearly.

Throughout the entire show, you are using your judgment to shape the sound. Paying attention to what’s going on and the choices the actors are making will help you match the emotion of a scene. Ominous fury and unadulterated rage are both anger. A low chuckle and an earsplitting cackle are both laughs. However, each one sounds completely different. As the mixer, you can give the orchestra an extra push as they swell into an emotional moment, or support an actress enough so that her whisper is audible through the entire house but doesn’t lose its intimacy.

Currently, I’m touring with Mean Girls, and towards the end of the show, Ms. Norbury (the Tina Fey character for those familiar with the movie) gets to cut loose and belt out a solo. Usually, this gets some appreciative cheers from the audience because it’s Norbury’s first time singing and she gets to just GO for it. As the mixer, I help her along by giving her an extra nudge on the fader, but I also give some assistance beforehand. The main character, Cady, sings right before her in a softer, contemplative moment and I keep her mic back just a bit. You can still hear her clearly, but she’s on the quieter side, which gives Norbury an additional edge when she comes in, contrasting Cady’s lyrics with a powerful belt.

Another of my favorite mixing moments is from the Les Mis tour I was on a couple of years ago. During “Empty Chairs at Empty Tables,” Marius is surrounded by the ghosts of his friends who toast him with flickering candles while he mourns their seemingly pointless deaths. The song’s climax comes on the line “Oh my friends, my friends, don’t ask me—” where three things happen at once: the orchestra hits the crest of their crescendo, Marius bites out the sibilant “sk” of “don’t aSK me,” and the student revolutionaries blow out their candles, turning to leave him for good. It’s a stunning visual on its own, but with a little help from the mixer to push into both the orchestral and vocal build, it’s a powerful aural moment as well.

The final and most important part of any mix is: listening. It’s ironic—but maybe unsurprising—that we constantly have to remind ourselves to do the most basic aspect of our job amidst the chaos of all the mechanics. A mix can be technically perfect and still lack heart. It can catch every detail and, in doing so, lose the original story in a sea of noise. It’s a fine line to walk and everyone (and I mean everyone) has an opinion about sound. So, as you hit every pickup, balance everything together, and facilitate the emotions of a scene, make sure you listen to how everything comes together. Pull back the trumpet that decided to go too loud and proud today and is sticking out of the mix. Give the actress who’s getting buried a little push to get her out over the orchestra. When the song reaches its last note and there’s nothing you need to do to help it along, step back and let it resolve.

Combining all these elements should give you a head start on a mix that not only achieves the basic goals of sound design but goes above and beyond to help tell the story. Trust your ears, listen to your designer, and have fun mixing!

Sound Design in Another Medium

Sound Design is creating a world or character purely out of auditory vibrations.  We morph mood and meaning through music and sound effects. As showcased through pieces like Peter and the Wolf by Sergei Prokofiev, the aural medium can tell a story on its own.  More often than not, however, sound design is not a monolith and must integrate with visual mediums.  This opens the door for visual style elements to influence sound design.

When I took Sound Design as a course in college, our main textbook was Understanding Comics: The Invisible Art by Scott McCloud.  McCloud boils comics down to their essence, choosing to focus on the assembly of narrative and representation rather than technique.  The philosophy behind choosing what to include and what to leave out is similar between the visual and the aural. Elements of design (rhythm, focus, contrast, form, movement) are also shared.  Using McCloud as a guide, take a look in the graphic novel section of the library as research for your next project. But why stop at the visual? Where else can we find inspiration?

The human body was gifted with several senses, and all of them can be used to evoke emotional responses.  Taste is an experience that occurs over time but is remembered as a static moment, much like a song. A particular meal has a temperature, different flavors competing and complementing, and an overall texture.  A song has dynamics, different instruments with melodies and harmonies, and an overall mood. Maybe the character in that particular film has a favorite meal that defines them. How should the accompanying theme add to the character development?  Think of Pippin singing to Denethor in Return of the King from the Lord of the Rings trilogy: a greasy meal, tomatoes popping and gristle squishing, contrasted with Pippin’s haunting ode to his comrades in arms.

Another sense, smell, also shares similarities with sound.  While perfume is just as manufactured as a pop tune, it has the opportunity to provide insight into character design.  Imagine a femme fatale in a power suit; her chosen scent is bound to be as bold as she is. Like sound, it also transforms through time.  When she first enters the scene, her interactions with those around her, and what happens in the wake of her absence correspond to the “top,” “middle,” and “base” portions of the bouquet.  Film cannot capture scent (yet), but the sound design can pick up on the “notes” of her cologne.

I recently had the opportunity to try my hand at mixing mediums.  In August, I gave birth to a new little SoundGirl, and I wanted to share with her one of my favorite stories:  Roverandom by J.R.R. Tolkien.  I wanted her to be able to follow along with me, but also to have the story available when she’s babysat by her grandparents.  In my copy, the publishers thoughtfully included prints of Tolkien’s illustrations, and I used those as a guide for a fabric book and a radio play.  The mood and style permeate the scene designs done in felt, while the narrative and characterizations are explored through sound effects, voice, and music.  Together, the confluence is grander than the sum of its parts and makes me a better sound designer.

 

SoundGirls México on sound: check Xpo 2019

SoundGirls is a non-profit organization that seeks to build a professional network to support, primarily, women, since statistically, women represent only 5 percent of those working in the professional music and production industry.

This year, SoundGirls in Mexico broke paradigms and prejudices, thanks to the union of people who chose to break mental boundaries and bet on the path of art, creation, and technology. We wish to thank our sponsors DiGiCo, KLANG, Meyer Sound, Dolby, and sound:check Xpo.

Every year since 2015, SoundGirls has been given a space within the most important industry event currently in Latin America: sound:check Xpo. Thanks to the general director, Jorge Urbano, we have been able to host creators and different experiences for members of the organization and the general public, without distinction of gender.

SoundGirls Mexico started inside sound:check Xpo with a very small space, enough to start the call in CDMX. Each year we have taken on the task of creating innovative and unique spaces, pioneering the implementation of technology and art and proposing a different theme for each of our appearances.

Four years later, we are a much stronger structure, and with the help of a team of professionals within the industry and the support of professional companies as sponsors, we managed to provide an experience with the theme “Immersive Sound.”

In November 2018, an unconventional idea took shape: to present new technology in Mexico, coupled with the implementation of protocols not yet used or explored in Latin America.

The first challenge was to gain the support of companies that could import the necessary equipment for such an ambitious project. Little by little, the general idea took shape: to show immersive sound formats (360-degree, 3D, and Atmos), applied mostly to live sound.

Since we wanted to focus on live sound, the immersive monitor world was handled by KLANG, providing personalized binaural 3D monitoring to the musicians. When I started to think about the mix format for FOH, I faced the biggest challenge of this project: unfortunately, none of the recognized brands in the live immersive sound market wanted to participate. But this was not an obstacle, and I continued with the original idea without losing sight of my main objective: to mix a live show using immersive sound for the first time in Mexico.

In mid-February, the Dolby Labs Brazil team, headed by Daniel Martins along with Daniel Castillo, joined the project, allowing us to work with a special and unique team. Marina Bello (sound engineer) confirmed her participation as a guide and took charge of monitors, and as she became more involved in the project, she connected me with Ianina Canalis, an Argentinean sound engineer who has programmed and designed software (ISSP) for mixing FOH in an immersive format applied to live sound.

I immediately contacted Ianina to tell her about SoundGirls, and I was surprised to learn that she had already been a member of the organization for several years. After Mexico-London videoconferences, it was decided that Ianina would travel to Mexico to present her software (ISSP) and be part of a unique event: mixing live sound with an immersive system for the first time in Mexico and Latin America. Ianina joined the team and was included in the lectures. Check out ISSP here

Shortly afterward, a call for volunteers inside the booth was launched through the SoundGirls platform, without even saying what would take place inside it. The response was wonderful: many women responded to support the event, and for the first time, women from the interior of the Mexican Republic and other Latin American countries traveled to Mexico to collaborate.

The team of professionals began to take shape; we held collaborative meetings and divided up areas of work, with each professional acting as a guide (a type of mentor) for the volunteers.

We spent the whole month of March in the 3BH studio doing tests and pre-mixes and talking with the adventurous musicians who would play and be mixed in immersive sound.  We decided that, to maintain a sweet spot (CLA) with greater coverage and greater definition, all instruments should be digital, except for the voice and, for some musicians, the bass. The idea was to avoid direct sound coming from the stage (noise pollution), so we could mix as many channels as possible immersively in a 360-degree format.

Alongside this sound system, specialized laser designs were showcased, as well as lights and projection, to create 3D dimensions for the different senses.

The results obtained in all the workshops and seminars at the SoundGirls venue were thanks to the combined knowledge of every one of the people who made this great experience possible, which provided new knowledge and technology and a new way to listen to and mix live sound.

Specialists, engineers, students, technicians, artists, and speakers contributed in a great way to boost the industry, looking for new forms of art and challenges.

To each and every one of the participants, thank you!

I want to share with you the fundamental stage that made sure the entire system would work together. The main challenge for all of us was to unify, as much as possible, the different immersive sound reproduction systems.

We started with the design of speakers and standards for the different systems:

The standards used in Atmos (broadcast and cinema) formats are specific and detailed. We must follow a special equalization (depending on the volume of the room) and, depending on the format (5.1, 7.1 & Atmos), we must respect the specified sound pressure levels.

Live Sound – The Immersive System

To get greater coverage, speaker arrays are placed at the same height and distance, preferably 5 to 7 systems across the front (an odd number), with sound reinforcement on the sides and at the back of the venue, covering an area of 360 degrees. Taking this into account, the first thing was to design a system with Dolby.

The sound design for the venue was created with Dolby. Using the Dolby Audio Room Design Tool (DARDT) software, the speaker arrangement was laid out in 7.1.4. A total of twelve loudspeakers with discrete outputs (independent signals) were used, and Meyer Sound MAPP software was used to model the system. Basically, the difference between one format and the other was the center speaker: for Dolby, it must be at listening height (1.20 meters), while for live sound, we used a system suspended in front of the stage, together with the other PA points.
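
As a quick sanity check on that count, here’s the 7.1.4 layout written out. The channel names are the usual Dolby shorthand; the grouping is my own summary, not the exact venue drawing:

```python
# 7.1.4 written out: 7 listener-level speakers + 1 LFE + 4 overheads.
# Channel names are standard Dolby shorthand; the grouping is my
# summary of the article's rig, not an exact venue drawing.
layout_7_1_4 = {
    "listener level": ["L", "C", "R", "Lss", "Rss", "Lrs", "Rrs"],  # 7
    "low frequency":  ["LFE"],                                      # 1
    "overhead":       ["Ltf", "Rtf", "Ltr", "Rtr"],                 # 4
}

total = sum(len(group) for group in layout_7_1_4.values())
print(f"{total} discrete outputs")  # 12, matching the twelve loudspeakers
```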

For the processing of the signal, we used two Meyer Sound Galileo processors, in which snapshots were programmed to recall memories for the different calibrated formats, following the corresponding standards mentioned above: in this case, 7.1, Atmos, and 360-degree immersive sound for the live mix.

Another challenge was to do away with the analog snake and replace it with CAT6 Ethernet cabling, using Focusrite RedNet interfaces with a digital splitter as preamplifiers, so that our main transmission protocol was Audinate’s Dante, thus avoiding multiple AD/DA conversions.

All the systems were interconnected through a Cisco switch, creating a network where we used all the available resources: 64 input channels and 54 digital output channels. To synchronize all the systems, the FOH console’s clock (Clock Master) was distributed via Dante.

Pro Tools sessions were played back in 7.1 and through the Dolby Atmos Renderer immersive sound software. To show the home entertainment area, an Integra AV receiver was used to play Atmos content from Blu-ray, USB, and Apple TV.

In the world of monitors, eight stereo mixes were made with Shure PSM900 IEM systems using Ultimate Ears in-ear monitors, along with the binaural 3D KLANG system. It is important to mention that no floor monitors were used, and the audience had access to an immersive mix through Focusrite AM2 interfaces.

Finally, a multitrack recording of all the input channels was made, along with a Rode NT-SF1 Ambisonic microphone, through UB MADI, using DiGiCo SD12 consoles with Dante DMI cards and a D2-Rack. The wireless microphone system was Shure Axient Digital.

Diagram of the signal flow.
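
For readers without the image, here’s a rough reconstruction of the chain as described in this article. This is my summary of the text above, not the original diagram:

```python
# A rough reconstruction of the signal chain described in this article,
# for readers without the diagram. A summary, not the original drawing.
signal_flow = [
    "Shure Axient Digital wireless mics",
    "Focusrite RedNet preamps / digital split (no analog snake)",
    "Dante over CAT6 through a Cisco switch (64 in / 54 out)",
    "DiGiCo SD12 consoles (FOH console as Dante clock master)",
    "Meyer Sound Galileo processors (7.1 / Atmos / 360 snapshots)",
    "12 discrete loudspeaker outputs + KLANG binaural IEM mixes",
    "Multitrack recording via UB MADI (plus Rode NT-SF1 Ambisonic mic)",
]
print("\n -> ".join(signal_flow))
```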

 

Radio Mic Placement in Musicals

Introduction

This month, I was asked to give a talk about radio mic placement and vocal reinforcement at the Association of Sound Designers Winter School. This blog is that presentation.

I’ve been working in Theatre sound for over 20 years. First, in musical theatre as a no. 3 and an operator, then at the National Theatre where I was the sound manager for the Lyttelton. Now, I work primarily as a Sound Designer, designing productions for musicals and plays.

I’m here to talk about radio mic placement and how that will affect what you can achieve with the sound of your show. I’m going to talk about different productions I’ve worked on and how I’ve dealt with mic positions in different situations.

I have some pictures of mic placements from shows that I’ve designed, and we’ll talk about the situation for each one as we go.

In the last 40 years, sound technology has been quickly evolving. I think it all started with:

The Sony Walkman

I think we are in a different era for sound design, and it isn’t just because of the new tech that we use; it has to do with the Sony Walkman, invented in 1979. It changed the way we listen. Sound was now delivered to you. Sound had gone from mostly being listened to as ‘something over there’ to something that is very much up close and personal.


Noise and volume

Another factor in the changes in sound design is the noise from equipment fans in the auditorium. Most of the theatres we work in were designed for unamplified voices, but theatre lights, projectors, and air conditioners all make noise, so the background noise we have to compete with has increased.

We are in a noisier world than we used to be, in general. Birds now sing louder to cope with being in a city.

Casts are used to wearing radio mics − they wear them at drama school. I don’t think actors project as much as they used to.

Grease

I started my West End career on Grease, at the Dominion Theatre, in 1993. That wasn’t my design, obviously; the Sound Designer was Bobby Aitken. It was my first exposure to West End sound design, and I stayed backstage on that show for about two years. I learnt the importance of mic placement and how a good operator can hear if a mic has moved. I also learnt that you don’t provide vocal foldback for lavalier mics.

You couldn’t see our radio mics. We were a little obsessive about that, considering we were at the Dominion. The stage is huge and most of the audience is quite far away − it does seem a little crazy now. But, we were serious about it, and the lovely wig people put in curls on foreheads so the mics were hidden underneath.

It was a big thing then, not to have the mics visible. We would go around and look at the posters of other shows, pointing out mics to each other if we could see them. We would judge the backstage staff on that.

There was a lot of pride attached to the mics being in a good position for audio, as well as you not being able to see them.

We had a couple of handheld microphones for Greased Lightning and for the mega-mix at the end. It does seem an odd concept not to give vocal foldback to the vocalist, but what they need to get through the number isn’t the same thing as what the audience needs to enjoy a good show.  You often have to have a difficult conversation with the vocalist, but it is a good idea.

 

Why can’t you use Lavaliers in Foldback?

It’s because all the lavs that we use are omni-directional: whatever the singer is hearing, the mic is hearing too. It’s easy to see how that can lead to feedback.
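
The arithmetic behind that feedback is simple: once the gain around the loop (mic, through the desk, out the foldback wedge, and back into the lav) reaches unity, the system rings. A toy check with invented numbers:

```python
# A toy loop-gain check (invented numbers): the system rings once the
# gain around the loop (lav capsule, desk, foldback wedge, back into
# the omni lav) reaches unity, i.e. 0 dB.
mic_chain_gain_db = 30.0   # lav capsule -> desk -> foldback output
wedge_to_lav_db = -36.0    # acoustic loss from wedge back into the lav

loop_gain_db = mic_chain_gain_db + wedge_to_lav_db
print(f"loop gain {loop_gain_db:+.1f} dB:",
      "stable" if loop_gain_db < 0 else "feedback!")
```

A directional handheld effectively makes the second number much more negative for sound arriving from the wedge, which is why handhelds can live in the foldback and omni lavs can’t.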

On Grease, we had lavs in the hairline. This gave us a consistent distance between the mouth and the microphone, keeping incoming sound levels consistent. We didn’t have any hats that I remember, so we had no trouble there on this production.

Because lavs are omni-directional, putting them in the foldback causes all sorts of problems. In addition, sweat and hair products can get into the mics, causing issues, and they can move.

Loud numbers

There were some loud numbers in the show − Greased Lightning, and the mega-mix at the end − and they were done on handhelds. We had a handheld hidden in the Greased Lightning car, and that would be whipped out at the appropriate moment. Then, at the end of the show, there were a couple of handhelds hidden behind the counter in the milk bar, which would be whipped out and appear magically in the hands of the performers that needed to use them. We were told we could get away with that because Greased Lightning was a song within the story of the show.

Handhelds aren’t omni, so that meant we could use them in the foldback. We could turn the volume up for those numbers and get a bigger impact from them. There was also a scene at the prom where we used a Shure 55SH on a stand, plugged into a radio mic transmitter. Because it isn’t an omni-directional mic, it could also go into the foldback and be treated like one of the handhelds.

Rent

Often, by the time we get to tech, we have had the band call and then we don’t have the band again until the dress rehearsal. The producers don’t want to pay for all that musician time so we get stuck with keys and, if we’re lucky, a drum kit.

We tech-ed without the full band, but we did have keys and tracks, so there was plenty of time to get to work on the vocals.

I usually start with a quick line-check for level with each cast member and then start the technical rehearsal. I enjoy this part of tech; finding out how hard you can push the mics, working with EQ, setting the compressors. It is a chance to get the vocal system set and working before the band turns up for the dress rehearsal.

And then the band arrives

The band was on stage, at the back, and, although there were some drapes, there wasn’t a great deal of separation between the band and the cast. It was a problem. We started tech and we weren’t getting enough level out of the mics on the cast. There wasn’t the option to hire a load of boom mics − this was a low-budget production at the University of Surrey, and a lot of the mics belonged to the University. So, what could I do? Well, we had to pull the mics down the forehead. You can see in the next photo that the mics are not in the hairline. What seems like a small movement in position made a huge difference to the amount of level we could get from the mics. It didn’t look great but if we had used booms then they would have been very visible as well.

Rent is a rock musical; there are some delicate moments in it, but it chugs along quite loudly at times. Moving the mics down an inch from the hairline helped to make the show work.
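
The gain from that move follows the inverse-square law: every halving of the mouth-to-mic distance buys about 6 dB. With illustrative distances (guesses for the sake of the example, not measurements from this production):

```python
from math import log10

# Inverse-square estimate of what moving the mic down the forehead buys.
# The distances are illustrative guesses, not measurements from Rent.
hairline_cm = 12.0   # rough mouth-to-capsule path at the hairline
lowered_cm = 9.0     # roughly an inch further down the forehead

gain_db = 20 * log10(hairline_cm / lowered_cm)
print(f"about {gain_db:.1f} dB more level at the capsule")  # ~2.5 dB
```

A couple of dB sounds small on paper, but when you’re fighting for gain before feedback against a band on the same stage, it’s the difference between a show that works and one that doesn’t.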

Next month, I will share other types of mics and mic positions, and how I have used them to solve problems.
