
Beth O’Leary – Baking a Cake on a Moving Tour Bus

Beth O’Leary is a freelance monitor engineer and PA tech based in the U.K. She has been working in the industry for 11 years and is currently working as a stage and PA tech on the Whitney Houston Hologram Tour. She has toured as a system tech with Arcade Fire, J Cole, the Piano Guys, and Paul Weller, and on a tour featuring Roy Orbison as a hologram. She recently filled in as the monitor engineer for Kylie Minogue and just finished a short run for an AV company in Dubai.

Live sound was not her first career choice: Beth originally attended university for zoology, although she has always been passionate about music. She remembers the first festival she attended: “I remember the first festival I went to (Ozzfest 2002 – the only time they came to Ireland), and the subs moving all the air in my lungs with every kick drum beat. I thought that was such a cool thing to be able to control. When I heard about the student crew in Sheffield it made sense to join.” Join she did, and it was there she learned “everything about sound, lights, lasers, and pyro in exchange for working for free and letting my studies suffer because I was having too much fun with them.”

Her studies did not suffer too much, as she graduated with a Master’s in Zoology, and she went on to work as a stagehand at local venues, eventually taking sound roles at those venues as well as at a couple of audio hire companies. Even though she had no formal training, she attended as many product training courses as she could – most for live sound, plus a few focused on studio work. She says that at the time “real-life experience was more important than exam results when I started, I think it’s changing a bit now. But, it’s still essential to supplement your studies with getting out there and getting your hands dirty.”

By her mid-twenties, she wanted to expand her skills and start working for bigger audio companies. After a lot of silence or “join the queue” replies to the emails she sent various companies asking for work experience, she met some of the people from SSE at a trade show. They told her they get really busy over the festival season and that she was welcome to come and gain experience interning in the warehouse. She remembers arranging to intern for three weeks: “I put myself up in a hostel and did some long days putting cables away and generally helping out. A week in, they offered me a place as stage tech on some festivals. I’m pretty sure it’s because one of their regulars had just broken his leg and they needed someone fast! I then spent most summers doing festivals for SSE. After a few years I progressed to doing some touring for them. I now also freelance for Capital Sound (which became part of the SSE group soon after I started working with them!) and Eclipse Staging Services in Dubai, amongst others.”

Can you share with us a gig or show or tour you are proud of?  

I baked a cake on a moving tour bus once, I’m very proud of that…

Apart from that, I used to run radio mics for an awards show for a major corporate client. Each presenter was only on stage for a couple of minutes, but the production manager didn’t like the look of lectern mics or handhelds, so everyone had to wear headsets. Of course, we didn’t have the budget or RF spectrum space to give everyone a mic that they could wear all night, we needed to reuse each one three or four times. I put a lot of work into assessing the script and assigning mics in a way that would minimise changes and give the most time between changes. I then ran around all night, sometimes only getting the mics fitted with seconds to go. I always made sure to take the time to talk to the presenters through what I was doing (and warned them about my cold hands!) and make sure they were comfortable. I did the same show for about five years and was proud that the clients, most of whom were the top executives for a very large corporation, were always happy to see me, and asked where I was by name when I couldn’t make it. Knowing that the clients appreciate you is a great feeling.

Can you share a gig that you feel you failed at, and what you learned from it?

I was doing FoH on a different corporate job, the first (and last) gig for a new company. I had terrible ringing and feedback on the lav mics. It was one of those rooms that will still ring even if you take the offending frequencies out wherever you can. I worked on it all through the rehearsal day, staying late and coming in early on the show day, trying to fix it. I did most of the ringing out while the client wasn’t in the room, so as not to disturb them. I asked the engineers in the other rooms for advice, and probably followed my in-house guy’s lead a bit too much. I figured he knew the room best of anyone, but in hindsight, he wasn’t great. The show happened, and the client was smiling and pleasant, but it definitely could have been better.

Afterward, I got an email from the company saying the client had complained to them about my attitude. I was devastated. I had worked as hard as I could, and I pride myself on always being as polite as possible! I realised too late that from the client’s point of view, they saw an issue that didn’t get fixed for a long time, and they didn’t see most of the work I put in or know what was going on. I learned that it is so important to take a couple of minutes to keep your client in the loop and let them know you’re doing your best to fix the issue, without going overboard with excuses. It can be hard to prioritise when you’re so focused on troubleshooting and you don’t have much time. I still have to work on it sometimes, but it can mean the difference between keeping and losing a gig.

What do you like best about touring?

The sense of achievement when you get into a good flow. So few people realise how much work is involved. For arena shows, we arrive in the morning to a completely empty room; we bring absolutely everything except the seats. We build a show, hopefully give the audience a great time, then put it all back in the trucks and do it all again the next day.

What do you like least?

When the show doesn’t go as well as it could. There’s no second take; if something goes wrong, that’s it, and you can’t go back and change it. It’s quite difficult not to dwell on it. All you can do is make sure it’s better next time.

What is your favorite day off activity? 

I love exploring the cities we’re in. My perfect day off would be a relaxed brunch with good coffee, then a walk around a botanical garden, a bath and an early night. Rock and roll!

What are your long-term goals?

I need variety, so I’d like to stay busy while mixing it up: touring and festivals, music and corporate shows, working with different artists and techs. I’d also like to get to a position where I can recommend promising people more and help them up the ladder.

What, if any, obstacles or barriers have you faced?

I think one of the major barriers in the industry is people denying any barriers exist. I was told I needed a thicker skin, to toughen up, everyone has it rough. Then after years of keeping my head down and working hard, I saw how my male colleagues reacted to words or behaviour that didn’t even register as unusual to me anymore. Their indignation at what I saw all the time really underscored how differently they get treated.

Thankfully I have done plenty of jobs with no sexism at all, but it can be frustrating to get told I don’t understand my own life. Just because you don’t see what you consider to be discrimination, doesn’t mean it never happens. It can be particularly disappointing when young women are outspoken about how sexism isn’t a problem, ignoring the groundwork set by the tough women who came before them.

I have also struggled a lot with a lack of self-confidence, which can really put you at a disadvantage when you’re a freelancer. You need to be able to sell yourself and reassure your client they’re in safe hands, so I’m sure the self-deprecation that comes naturally to me has held me back.

How have you dealt with them?

I try to give people the benefit of the doubt as much as possible. Whether I misunderstood their intentions or they’re honestly mistaken, or they genuinely don’t want to work with a woman, all I can do is remain professional and courteous and do my job to the best of my ability. A lot of the time we get past it and have a good gig, and if we don’t I know I did all I could. I take people’s denial of sexism as a good sign, in a way. It shows it is becoming less pervasive and I hope the young women who are so adamant it doesn’t happen are never proven wrong.

I’m still working on my self-confidence. I try to remember that the client needs to trust me to relax and have a good gig themselves. I aim to keep a realistic assessment of my skill level. I used to turn jobs down if I wasn’t 100% sure I knew everything about every bit of equipment, for the good of the gig. I then realised that a lot of the time the client wouldn’t find someone better, they’d just find someone more cocksure who was happy to give it a go. Now I’m experienced enough to know whether I can take a job on and make it work even if it means learning some new skills, or whether I should leave it to someone more suitable.

What advice do you have for other women and young women who wish to enter the field?

Be specific when looking for help. If you want to tour, please don’t ask people “to go on tour”. Pick a specialism, work at it, get really good, then you might go on tour doing that job. When I see posts online looking for “opportunities in sound”, I ignore them. What area? Live music? Theatre? Studio? Film? Game audio? What country, even? Saying “I don’t mind” will make people switch off. People looking to tour when they don’t even know which department they want to work in makes me think they just want a paid holiday hanging out with a band.

Most jobs in this field are given by word of mouth and personal recommendations. Networking is an essential skill, but it doesn’t have to mean being fake and obsequious. The best way to network is to be genuinely happy to see your colleagues, and interested in them as people. And always remember you’re only as good as your last gig. You never know where each one will lead, so make the effort every time.

People who run hire companies are incredibly busy, constantly dealing with disorganised clients, and are often very disorganised themselves. Don’t be disheartened if they don’t reply when you contact them. Keep trying, or get a friend who already knows them to introduce you so you stand out from the dozens of CVs they get sent every week. Make it easy for employers. You are not a project they want to work on. Training takes time and money. They don’t want to hear that you’re inexperienced but eager to learn. Show them how you can already do the basic jobs, and that you have the right attitude to progress on your own.

Must have skills?

Number one is a good work ethic. You can learn everything else as you go along, but if you aren’t motivated to constantly pester employers until they give you a chance, turn up, work hard and help the other techs, all the academic knowledge in the world won’t help you.

Being easy to get on with is also essential. We can spend 24 hours a day with our colleagues, often on little sleep, working to tight schedules and people can get grumpy. Someone who can remember all the Dante IP addresses by heart but is arrogant and rude won’t go as far as someone who can admit they don’t know things, but is willing to ask questions or just Google it, then laugh at themselves later.

Staying calm under pressure, communicating clearly and being able to think logically are all needed for troubleshooting.

Anyone who tells you that having a musical ear is determined at birth is just patting themselves on the back. Listen to music, practise picking certain instruments out and think about how it’s put together. Critical listening can be learned and improved, even if you have to work at it more than some others.

Favorite gear?

Gadget-wise, I love my dbBox2. It’s a signal generator and headphone amp in one, and it produces analogue, AES, and MIDI signals, so it helps in so many troubleshooting situations and saves so much time.

I use my RF Explorer a lot to get a better idea of the RF throughout a venue and can use it to track down problem areas or equipment.

As far as desks go, I don’t have loyalty to a particular brand. They all have their advantages. I still have a soft spot for the Soundcraft Vi6 because that’s what I used in house for years. DiGiCo seems pretty intuitive to me and has a lot of convenient features. I spent most of the last year using an SSL L500. It sounds fantastic and has a lot of cool stuff to explore.

Parting Words

It can take a long time to break into this industry. I had been doing sound for nine years before I went on a tour, and then didn’t do much touring again for a couple of years after that. You have to be tenacious and patient. However, if you find yourself in a situation where you aren’t progressing, or the work environment is toxic, leave. As a freelancer, you shouldn’t rely too heavily on one client anyway. And that’s what they are: clients. When a friend pointed out these people aren’t your bosses, they’re your clients, it really helped me to change my approach. I now rely less on them for support, but I’m also free to prioritise favoured clients over others. Live sound can be rough around the edges, but there’s a difference between joking around and bullying. There’s a difference between paying your dues and stagnating. If you’ve been in a few negative crews it can be easy to believe that everywhere is like that, but it isn’t. Keep looking for the good ones, because they do exist.

The SoundGirls Podcast – Beth O’Leary: Freelancing, blogs, and sexism


Mix With the Masters Scholarships – Marcella Araica and Producer Danja

SoundGirls members have the chance to receive a €1,000 scholarship from Mix With The Masters. Three scholarships are available for the week-long session with award-winning engineer Marcella Araica and producer Danja.

This is a week-long seminar valued at €4,000 and includes lectures and workshops, accommodation within the mansion, catering (breakfast, lunch, and dinner), use of the fitness room and swimming pool, and shuttles from Avignon to the studio.

You must have an advanced understanding of audio and work as a producer, mixer, or engineer to attend Mix With the Masters.

Session Dates: March 24–30, 2020

Apply for the scholarships here

Deadline to apply is March 6, 2020

You are responsible for travel to France and the remainder of the balance owed to Mix With the Masters.

Session Includes

  • Private bedroom on-site within the mansion for 6 nights
  • Full-board accommodation with meals prepared by gourmet chefs on-site
  • Return shuttle services from Avignon to Studios La Fabrique
  • Unlimited drinks and snacks throughout the week
  • Approximately 50 hours in the studio with the guest speaker
  • One-on-one time between you and the master to assess and work on your own material
  • Professional photography done throughout the week, including portrait shots of you with the Master
  • Hundreds of full-resolution photos shared with you afterward via a download link, to keep and use as you please
  • A certificate of completion issued on behalf of Mix With the Masters and Studios La Fabrique, signed by the Master if you wish
  • Exclusive MWTM merchandise given only to seminar attendees: embossed Moleskine notepads, pens, mugs, t-shirts, USB keys, and stickers.
  • Use of the La Fabrique swimming pool, garden, fitness centre, and scenic walks
  • Nearby access to the enchanting town of St. Rémy de Provence

Marcella “Ms. Lago” Araica has swiftly burgeoned into a towering beacon of talent as one of the music industry’s hottest, most prolific sound engineers. Credited with mixing over one hundred chart-topping tunes, Marcella has had the opportunity of working with world-renowned musical icons such as Beyoncé, Britney Spears, Madonna, Nelly Furtado, Usher, Joe Jonas, and Missy Elliott, along with super producers Timbaland, Danja, and Polow Da Don. In just a few short years, this musical mastermind has already accomplished what most strive to achieve in a lifetime.

Nate “Danja” Hills is one of the most sought-after writers and producers in pop music today and is a two-time Grammy Award winner, four-time Grammy nominee, and SESAC “Songwriter of the Year” in 2007, 2008, and 2010. Danja boasts a catalog that features twelve #1 Billboard singles, including “SexyBack,” “My Love,” “Lovestoned” and “What Goes Around Comes Around” by Justin Timberlake, “Promiscuous” and “Say It Right” by Nelly Furtado, “Give It to Me” and “The Way I Are” by Timbaland, “Gimme More” by Britney Spears, “4 Minutes” by Madonna, “Sober” by Pink and “Knock You Down” by Keri Hilson. In addition, Danja has written and produced songs for a who’s who of popular music including, among others, 50 Cent, Bjork, Ciara, Diddy, DJ Khaled, Duran Duran, Jennifer Lopez, Jo Jo, Katharine McPhee, Mariah Carey, Rick Ross, Snoop Dogg, T.I., T-Pain and Usher.


Program

The process of greatness fostering greatness has long been recognized and is the reason why masterclasses are organized. The Mix With The Masters seminars are part of this tradition, offering an exchange of in-depth, first-hand studio experience and knowledge that is not available anywhere else. Each seminar is conducted by one of the world’s top music mixers and producers, ready to share their professional secrets with a select group of at most 14 carefully screened, professional-level participants, who come from all over the world.

One factor that contributes to the enormous success of the seminars is that all tutors support the general MWTM ethos, which is about the love of music, music technology, and wanting to help others. Participants are also selected in part for displaying similarly positive attitudes. The fact that the seminars last a full week is another major contributory factor, because it offers tutors the time and space to go into real depth, and the participants the opportunity to spend a prolonged time watching a master at their peak and to ask any question they can think of.

The tutors share exclusive, insider information on any subject: detailed technical knowledge, how to run sessions, how to handle artists, how to manage a career, the right attitude, how to remain successful, and more. The tutors also assess the work of the participants by listening to their mixes, mixing recording sessions that they bring, and providing extensive feedback to each participant on where they are and how they can get to where they want to be. This is invaluable and offers participants wanting to become world-class professionals in their own right a unique advantage.

Another primary factor in making the MWTM seminars exceptional is that they take place at La Fabrique, a large, comfortable, high-end recording studio located in a picturesque historic building, surrounded by huge, lush grounds, and set in the south-east of France in one of the world’s most beautiful environments. The secluded and idyllic location offers the participants and tutors a lot of space to relax and recharge, far away from the hustle and bustle of daily life and the all-demanding intensity of their regular professional environments.

Because the courses are residential, the participants and tutor work, eat, socialize, and sleep in the same environment. While tutors and participants will at times opt to retire to their private quarters, there is ample opportunity for social interaction outside of the studio environment. Participants interact extensively with each other and the tutor, making it easier to assimilate the intangible qualities necessary to be successful at the highest level—presence, focus, social skills, intelligence, creativity, the right attitude, and so on.

In short, for seven days participants can experience mixing with a master in both senses of the phrase, mixing and interacting with them. Get more information about Studio La Fabrique

 

 

How to Push your Sound Design to the Max

While Not Stepping on your Mixer’s Toes

We get a lot of questions about how much you should do in your sound design pass versus how much to leave to your mixer. So, although I’ve written a few posts on this topic (such as Whose Job Is It: When Plugin Effects Sound Design vs Mix Choices and Five Things I’ve Learned about Editing from Mixing), I thought it was time for another brush-up.

As some of you may know, I’m a long-time sound designer and supervising sound editor, but I just started mixing a few years ago. While attending mixes as a supervisor definitely gave me a window into best practices for sound design success (aka how to make sure your work actually gets played…audibly), I got a whole new vantage point for what to do (and not do) once I started having to dig through sound design sessions myself! So, while I am a fledgling mixer and you should always speak directly to the mixer working on your project before making decisions or altering your workflow, I feel that I am qualified to share my personal preferences and experiences. Take this as the starting point for a conversation—a window into one mixer’s mind, and hopefully, it will spark great communication with your own mixer.

Below, I’m sharing a few key concepts that seem to cause the most confusion in the “who does what” debate. I’ve personally come across these questions or situations, and I’m hoping to spare you the headache of doing any work over due to a lack of communication. Here they are!


EQ

What Not to Do

I was recently the supervisor and mixer on an episode that was almost entirely underwater. My sound effects editor EQ’ed every single water movement, splash, drip, etc. that occurred underwater with a very aggressive low-pass filter. While this made total sense from a realistic sound point of view, it completely demolished any clarity that we might have had and muddied up the entire episode. It was very hard to locate the sound effects in the space and even harder to get them to cut through the dialogue, let alone the music! Unfortunately, this was done destructively with AudioSuite on every single file (and there were probably thousands of them). Every single one had to be recut by hand from the library, which was an insanely arduous task.

What to Do Instead

I’m going to say this once, and then please just assume that this is step one for everything below (I’ll spare you the boredom of reading it over and over): STEP ONE IS ALWAYS ASK YOUR MIXER BEFORE YOU START APPLYING ANY EQ.

I think you can safely assume that there’s at least an 80% chance that your mixer does not want you to EQ anything. Ever. So always ask before you destructively alter your work. With EQ’ing, it’s especially important that the right amount is added given what else is happening in the scene, and clients often have opinions about how much is too much for their sense of clarity in the mix.

The better way to approach EQ is to ask your mixer (again, asking because this may require a change to their mix template which requires their approval) if it would work to place any FX that you think should be EQ’ed on a separate food group with no other FX mixed in. Having all underwater movements on one set of tracks clearly labeled UNDERWATER FX gives your mixer the ability to quickly EQ all of them with just a few keystrokes and knob turns. And then he or she can also very easily change that EQ to mesh well with the music and dialogue or to satisfy a client note. It also means that he or she can put all of those lovely water effects on one VCA and ride that if the clients ask for any global changes to the volume of water FX. Win-win!

The same is true for any batch EQ’ing of FX. I like the “split onto a separate food group of clearly labeled tracks” method for other things, too, like: action happening on the other side of a door or wall, sound effects coming from a TV or radio, or any other time that you would imagine EQ should be applied to a large selection of files. So yes, split it out to make it easy and obvious for your mixer, but no, don’t do it yourself.


Reverb

What Not to Do

Don’t add any environmental reverb. Just don’t do it. Keep in mind that your sound design doesn’t exist in a vacuum. It’s layered on top of dialogue, music, BGs, ambiances, and probably more! What sounds right as a reverb setting to you while working only on your FX definitely won’t be the right choice once everything else has been placed in the mix.

What to Do Instead

Let your mixer decide. If you do it as an effect for one singular moment (I’m thinking something like a hawk screech to establish distance), only process individual files and also provide a clearly marked clean version in the track below. That way, your mixer has the option to use your version, or take it as an indication of what the clients like and redo it with the clean one. But before you go ahead and use reverb as an effect in your sound design, always check in with your supervisor first. He or she will be able to draw on all of their experience on the mix stage, and will be able to let you know if it’s a good idea or not. From my experience, the answer is that it’s almost always NOT a good idea.


Trippy FX

What Not to Do

Say you’re designing the sound for a super trippy sequence like the POV shot for a drugged up character. You may be tempted to add a phaser, some crazy modulation, or any other trippy overall effect to the whole sequence. Don’t do it! That takes all of the fun out of your mixer’s job, and furthermore really ties his or her hands. They need the ability to adjust any effects to also achieve mix clarity when the music and dialogue are added. So it’s always best to let them choose any overall effects!

What to Do Instead

Go for it with weird ambiences, off-the-wall sound choices, and totally different BGs to make it feel like you’re really inside the character’s head. Feel free to process individual files if you think it really adds something—just be sure to also supply the original muted below and named something obvious like “unprocessed.”


Panning

What Not to Do

Don’t spend hours panning all of your work without first speaking to your mixer. Your understanding of panning may be wildly different from what he or she can actually use in the mix. I’ve seen a lot of editors pan things 100% off-screen to the right or left, and I just have to redo all of it. Panning isn’t too difficult or complicated, but it’s really best to be on the same page as your mixer before you start.

What to Do Instead

Some mixers love it if you help out with panning, especially if they’re really under the gun time-wise. Others prefer you leave it to them—so always ask first. If you want to be sure that your spaceship chase sequence zooms in and around your clients during your FX preview, just make sure to ask your mixer first about his/her panning preferences. How far to the L/R do they prefer that you pan things? What about how much into the rears? Do they mind if you do it with the panning bars, or will they only keep it if you use the 5.1 panner/stereo pot?


LFE Tracks

What Not to Do

Don’t cut your LFE tracks while listening on headphones. You may not realize that what you’re putting in the LFE should actually go in your SFX tracks because it is low in pitch, but not in that rumble-only range. It’s nearly impossible to cut your LFE track without a subwoofer, since true LFE sweeteners in your library will look like they have a standard-sized waveform, but will sound like almost nothing in headphones!

What to Do Instead

Keep in mind that any files that live on the LFE tracks are going to be bused directly to the low-frequency effects channel, which reproduces approximately 3–120 Hz. That is super low! So only cut sound effects that have only that frequency information in them, or for which that is the only part you care to hear. Any other mid-range “meat” to the sound will be lost in the mix.
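
If you want a rough numerical sanity check when you’re stuck on headphones, a small sketch like the one below can estimate how much of a file’s energy actually sits in that rumble-only range. This is a hypothetical Python example (the file name, the helper name, and the use of the numpy and soundfile libraries are my own assumptions; the 120 Hz cutoff simply follows the figure above), not part of any particular DAW workflow:

import numpy as np
import soundfile as sf

# Rough check: what fraction of a sound's energy sits below the LFE cutoff?
def lfe_energy_fraction(path, cutoff_hz=120.0):
    audio, sample_rate = sf.read(path)
    if audio.ndim > 1:                 # fold multichannel files down to mono
        audio = audio.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    return spectrum[freqs <= cutoff_hz].sum() / spectrum.sum()

# A value well below ~0.9 suggests the sound still has mid-range "meat"
# that belongs on the SFX tracks rather than the LFE track.
print(lfe_energy_fraction("sub_rumble.wav"))   # hypothetical file name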

 

A Beginner’s Guide to Wireless Frequencies

Learning about and using wireless equipment can be overwhelming – there are a lot of differences from traditional wired gear and, rather importantly, there are strict rules around using radio frequencies that vary from country to country.

How does wireless equipment work?

 

Wired microphones convert sound into an electrical signal. This is sent through the wire to the sound system. Wireless microphones, however, convert sound into radio signals. This signal is then sent from a transmitter to a receiver which sends it to the sound system. The transmitter is a device that converts the audio signal into a radio signal and broadcasts it through an antenna.

Transmitters are small clip-on packs or, in the case of handheld wireless microphones, are built into the design of the handle. Wireless transmitters generally run on a 9-volt battery. The receiver is tuned to receive the radio waves from the transmitter and convert them back into an audio signal. This means that the output of the receiver is just like a traditional wired signal. The balanced audio signal from the receiver output is then connected via an XLR to a typical input in a sound system.

There are a few different kinds of antenna arrangements on receivers – single and diversity. Single-antenna receivers have one receiving antenna and one tuner, but these can be prone to drop-outs or interruptions in the signal. Diversity receivers, however, perform better as they have two separate antennas and two separate tuners. This means the receiver will automatically choose the best of the two signals, sometimes using a blend of both. This reduces the chance of a drop-out, because the likelihood is high that at least one antenna will be receiving a clean signal.
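
As a loose illustration of that selection idea (not any manufacturer’s actual algorithm), the core logic is simply “per frame, keep whichever antenna is currently receiving the stronger signal.” A toy Python sketch, with all names hypothetical:

# Toy antenna-diversity selection: per audio frame, keep the branch whose
# antenna reports the stronger signal. Real receivers also blend branches
# and add hysteresis so they are not constantly switching.
def diversity_select(frames_a, frames_b, rssi_a, rssi_b):
    output = []
    for frame_a, frame_b, level_a, level_b in zip(frames_a, frames_b, rssi_a, rssi_b):
        output.append(frame_a if level_a >= level_b else frame_b)
    return output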

What frequency should I use for my equipment?

This is one of the trickiest areas to cover with wireless equipment because it depends on a lot of factors. Some frequency bands work brilliantly for speech but not for music, and some bands are simply too small to fit in lots of audio channels for a larger group. Some are prone to interference because they are licence-free and therefore popular, and it can be a minefield working out where to begin.

When deciding what band to use, firstly it is good to know that each performer/person that is using wireless in the same location needs to be using a different frequency. It’s good practice to leave a blank channel in between, or to space frequencies in increments of at least 0.25MHz on the receiver. Secondly, it’s important to know which spectrum band is suitable and legal to use for your venue – this will depend on the number of wireless devices you’re using, where you are in the world, and whether you are moving around or touring with the same equipment. Wireless devices include “low power auxiliary station” equipment such as IEMs, wireless audio instrument links, and wireless cueing equipment, which all have the same rules as wireless microphones. Though not fully extensive, a guide to the available frequency rules of most countries can be found at Frequencies for wireless microphones.
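
As a very rough planning aid, you can sketch out candidate frequencies across a band at a fixed spacing before checking them properly. The short Python sketch below only illustrates that spacing idea (the function name and the use of the UK Channel 38 limits quoted later in this article are my own choices); real coordination also has to account for intermodulation products, which is why far fewer systems fit than simple division suggests:

# Naive frequency plan: lay out channels across a band at a fixed spacing.
# Intermodulation products are ignored here, so treat the count as an
# upper bound, not a usable plan.
def simple_plan(band_start_mhz, band_stop_mhz, spacing_mhz=0.25):
    freqs = []
    f = band_start_mhz
    while f <= band_stop_mhz + 1e-9:        # small tolerance for float rounding
        freqs.append(round(f, 3))
        f += spacing_mhz
    return freqs

channel_38 = simple_plan(606.5, 613.5)      # UK Channel 38, discussed below
print(len(channel_38), channel_38[:4])      # 29 slots before any intermod checks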

There are different areas of the radio frequency spectrum that we are allowed to use for wireless equipment but some are more suitable and better than others, and these are constantly changing, which makes it a hot topic for discussion. It’s useful to remember that the frequency spectrum works in the same way as physical space, in that it has a finite amount of room to be shared. The company Shure has strong concerns, particularly about the ever-decreasing UHF band in the Netherlands and has set up a site to raise awareness at www.losingyourvoice.co.uk

 

The UHF band is the preferred spectrum for wireless equipment; however, the portion available for wireless use is getting smaller all the time. Ultra-high frequency (UHF) is the ITU designation for radio frequencies in the range between 300 megahertz (MHz) and 3 gigahertz (GHz), also known as the decimetre band, as the wavelengths range from one metre to one-tenth of a metre (one decimetre).

Most places, including the UK and the USA, have overhauled their UHF frequency ranges in recent years due to the digitisation of television, freeing up the old analogue frequencies. Analogue television originally transmitted in the 400-800MHz range, which was divided into 8MHz “channels”, each referring to a particular frequency range.

Channel 38 is the spectrum of 606.5 – 613.5MHz and is a popular choice in the UK. The governing body Ofcom requires customers to purchase a yearly UHF UK Wireless Microphone Licence to use Channel 38. A flexible licence means that owners are allowed to use radio microphone systems in any location. Channel 38 is a shared space and is large enough for 12 radio microphone systems; however, the downside is that if wireless equipment is tuned to the alternative Channel 70 it cannot then be retuned to Channel 38.

Channel 70 is the band of 863 – 865MHz and this is free to use for radio microphone equipment in the UK. This spectrum is so small that it can be difficult to fit many systems into this space. Additionally, if other users nearby are also trying to use this space it can cause interference. Another issue with Channel 70 is that there is no “buffer” range at the lower end as 4G transmission lives immediately below 863MHz which can cause interference.

The band that used to be Channel 69 (833-862MHz) has been illegal to use since its digital auctioning in 2013, and it was replaced by Channel 38 for wireless equipment. Because of these challenges, Channel 70 may not be the best solution for larger setups requiring more space.

In the USA, similar changes are coming into place courtesy of the FCC, the US governing body. The latest changes include the bands 617 – 652MHz and 663 – 698MHz, which will be banned from wireless use as of July 13, 2020. The move away from the 600MHz band is due to the spectrum occupied by TV channels 38-51 being auctioned off. This means that after July 2020 the available frequencies for wireless will include some frequencies on TV channels 2-36 below 608MHz, 614 – 616MHz, 653 – 657MHz, and 657 – 663MHz. Though this may seem like a recent transition, it has been in progress for some time – the use of the 698 – 806MHz band has been prohibited by the FCC since 2010, as it was repurposed for licensed commercial wireless services and public-safety networks.
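
Pulling the US figures above into one place, a small lookup like the sketch below can flag obviously off-limits frequencies. It is only an illustration built from the ranges quoted in this article (the names are my own), and no substitute for checking the current FCC rules or a manufacturer’s frequency-finder tool:

# Ranges the article lists as prohibited for wireless mics in the USA
# (600 MHz band from July 2020, plus the 700 MHz band cleared in 2010).
BANNED_MHZ = [(617.0, 652.0), (663.0, 698.0), (698.0, 806.0)]

def looks_allowed_us(freq_mhz):
    return not any(low <= freq_mhz <= high for low, high in BANNED_MHZ)

print(looks_allowed_us(614.5))   # True  - inside the 614-616 MHz slice listed above
print(looks_allowed_us(640.0))   # False - inside the auctioned 600 MHz band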

What other frequency options am I allowed to use if the UHF range isn’t right for me?

Again, the list of available space is specific to each country, licence, and equipment tuning limitation; however, utilising either side of the UHF range can work, with the VHF (very high frequency) spectrum often making a good and practical backup solution.

The VHF band is classed as 30 – 300MHz, with a differentiation given between low and high VHF:

“Low-band VHF range of 49 MHz includes transmission of wireless microphones, cordless phones, radio controlled toys and more. A slightly higher VHF range of 54-72 MHz operates television channels 2-4, as well as wireless systems defined as “assistive listening.” VHF frequencies 76-88 MHz operate channels 5 and 6.

Band III is the name of the range of radio frequencies within the very high frequency (VHF) part of the electromagnetic spectrum from 174 to 240 megahertz (MHz). It is primarily used for radio and television broadcasting. It is also called high-band VHF, in contrast to Bands I and II.”

The Shure website explains the pro points of using the high-band VHF range, saying:

“The high-band VHF range is the most widely used for professional applications, and in which quality wireless microphone systems are available at a variety of prices. In the U.S., the high-band VHF range is divided into two bands available to wireless microphone users. The first band, from 169 – 172 MHz, includes eight specific frequencies designated by the FCC for wireless microphone use by the general public. These frequencies are often referred to as “traveling frequencies,” because they can theoretically be used throughout the U.S. without concern for interference from broadcast television. Legal limits of deviation (up to 12 kHz) allow high-quality audio transmission.”

Other than the UHF and VHF bands, if we look to the higher end of the spectrum, the WiFi frequency range at 2.4GHz is another option; however, this also has its limitations, as it is a small shared space and the many WiFi networks in an area can cause interference.

So what does this mean in practical terms to get started?

If you are purchasing new wireless equipment it’s very important to understand its limitations and which frequencies you will be able to work with at any given venue, and this is multiplied tenfold if you intend to travel with the same equipment. Many modern receivers do not allow the tuning options to change ranges once they have been set – as previously mentioned, the UK Channels 38 and 70 cannot be swapped once they have been tuned, and similarly, radio microphones that can tune to Channel 38 will not tune to the “Duplex Gap” of 823 – 832MHz or the shared space of 1785 – 1805MHz. This means that equipment requirements have to be very well researched prior to purchase, and that pre-loved second-hand gear will need extra investigation for the same reason.

What are the power restrictions for my wireless equipment? 

As a general rule the power must not be in excess of 50 milliwatts when operating in the television bands, and no more than 20 milliwatts when operating in the 600MHz band or the Duplex Gap.

So, to recap, what questions should I ask first to get set up?

To get started with wireless equipment, the key starting questions are:

  • How many wireless channels do I need to run at the same time?
  • Where in the world will I be using the equipment, and will I be touring with it?
  • Which frequency bands are legal to use there, and do I need a licence for them?
  • Which frequency ranges can my equipment actually tune to?
  • What are the power restrictions for those bands?

While it may seem like a lot of questions to ask and elements to consider, most wireless manufacturers will state the capabilities and limitations of their equipment, and keep you up to date with changes that may affect its use. With a bit of research and preparation, it’s possible to find wireless equipment that meets a variety of audio needs and budgets, works within the law, and sounds great wherever you may be.

 

How Do You Measure Career Success?

 

Written By: Erica D’Angelo

 

I’ve been working as an audio engineer in Australia since 1997.  When I meet young sound engineers starting out, particularly women,  I am inspired by their passion and fearless attitude towards forging a career in this very challenging industry, particularly in a country like Australia where the market for audio professionals is pretty small.  The same questions and fears that I faced working in a male-dominated industry still confront women today. I want to share my story to illustrate how a career in sound can take many paths, but the key ingredients for longevity are the same today as they were 25 years ago.

I started life as a classical musician, completing a Bachelor of Music majoring in clarinet in Adelaide, South Australia. In the classical world it was rare to come across audio reinforcement, except on the odd occasion of being part of a recording, but when I finished my studies and started gigging as a sax player and with pops-style orchestras, I started to notice the microphones, the miles of cables, and how there were only ever guys running the sound. That all looked way cool compared to hanging out with the middle-aged people sitting in symphony orchestras.

A few years later I was teaching music in the UK – the early 90s, when techno, jungle, and drum and bass were coming up from the underground in London – and kids were bringing cassette recordings of pirate radio techno raves to class and asking how to make that music. I didn’t know, but I wanted to learn. Each school had bits and pieces of gear – 4-track recorders, microphones, outboard gear, mixing desks – so I started to teach myself. Eventually, I enrolled in the SAE Diploma of Audio Engineering course and embarked on a new career at the age of 26.

Studying in London brought great opportunities.  I was resident FOH at my local Latin bar, mixing salsa bands that actually came from South America, which for an Australian was a new experience.  The guy that owned the PA became my first mentor – a generous Ghanaian man called Tito, who was happy to teach me everything he could. Other opportunities arose in film and TV but overwhelmingly I found live sound to be the most exciting.

Early days at Adelaide Festival

Eventually, the lure of the sun and the beach called me back home, where I landed in Sydney as an experienced live sound engineer, but with no idea of how the Australian industry operated.  I talked to every FOH engineer at every gig or festival I went to and found a lot of great advice was freely offered. It became clear that to be a working sound engineer in Australia you had to mix rock and roll.

So far, I had never crossed paths with any other women working in the industry.

I went and got a job with Jands – who at the time was the biggest PA company in Sydney that provided the production for all the touring acts, had a huge inventory of Clair Bros, Meyer, and JBL, and employed dozens of audio and lighting technicians for both touring and local shows.  Finally, I met two women employed as lighting techs, but working for Jands was as masculine an environment as was possible. These guys were (notoriously) hardcore roadies, in the days when OH&S concerns were laughed at, and you proved your worth by being able to lift the most, party the hardest and work the longest hours.

Despite the prevailing attitudes, I earned respect from my colleagues, and again was mentored by a couple of the guys, gaining invaluable training on large-scale sound reinforcement systems.   I liked the job, the loading in and out, the long hours, the banter – the physicality of it all. I learned quickly that you don’t need to be able to lift as much as the guys – there are plenty of other jobs to do and if everyone is working together no one has an issue.

Jands was going through one of their many restructures, which saw them sack all their permanent staff and invite back half of them as casuals. I took this opportunity to get out and keep looking and learning. A couple of years in TV followed – working on entertainment shows which showcased live bands, as well as some very well-paid hell in the Shopping Network channels.

My next major achievement was being part of the Adelaide International Arts and Fringe Festivals for eight years.  These gigs were so much fun, working in crazy purpose-built venues, or parks, or rivers, working with international artists, working with crew from all over the country.  These festivals all involved miles and miles of cables, large-scale reinforcement systems, and logistical nightmares – requiring way more than audio skills. Operations and scheduling were organisational skills that I really enjoyed using and the beauty of these Arts festivals was more women on the crew!  The Australian rock and roll scene really was a man’s world for a very long time.

Schools Spectacular

Back in Sydney, the lead-up to the 2000 Olympic games was in full swing.  I was working as an assistant audio director with the Arts Unit of the education department – a department that provides large-scale performance opportunities for public school talent.  We would famously produce the Schools Spectacular every year in the Sydney Entertainment Centre – flying a massive PA, fully miked orchestra, dozens of radio mics, full broadcast split.

The Olympic Committee, in conjunction with Norwest Audio, was looking for as many large-scale gigs as possible to practise fine-tuning the PA they would use for the opening ceremony, so our organisation was kept busy in the years leading up to the main event, staging events such as the Pacific School Games so Norwest could get their specs right. During the actual Olympic event, I was seconded by the Olympic organising committee to be the audio director of the Team Welcoming Ceremonies. This was a great gig – pre-production involved going all over the country recording children’s choirs singing the national anthems of each country, which would be played at the ceremony each country has when it arrives in the athletes’ village. For some countries, it is the only time they get to hear their anthem played.

In the 2000s I moved to Melbourne and started my own audio services business – Mind’s Eye Entertainment – writing and producing pop musicals for schools, sound design for theatre, and doing whatever else came along to pay the bills.   As much as I loved the life, I have always been an absolute realist with a pragmatic approach to survival, so when an opportunity to start up Staging Rentals in Melbourne came about, I took it. This was not audio, but it was events and high-end corporate events with large budgets.  So, I got to use the operations and logistics skills to full effect, as well as being account manager, truck driver and on the tools building stages as required. Again, the physicality of the job appealed to me. I met my husband at this time and loved working with him on gigs, but the inevitable question of age and babies started nagging.

I was 39 and decided to try and have a baby. The male-dominated worlds I had been working in were not going to be conducive to a mature woman getting pregnant, so I changed jobs again and got an office job coordinating logistics in the exhibition industry. At 41, after one round of IVF, I produced a baby boy, and was again confronted with how to shape my career while being a mother. A great fallback I always had was a teaching qualification, which I did as a matter of necessity way back when I finished uni. I started looking for new career directions and discovered the world of vocational education – you could complete a Certificate III in sound production while at high school. This was an incredible discovery – all my qualifications and experience were perfect for a job like this.

Six months after having a baby I was teaching sound engineering to 16 and 17-year-old boys at a catholic school.  I loved it.

I continue delivering audio training to school kids at a private school in Melbourne. I teach part-time and spend the remaining time being technical production manager of the school’s large events. The school has a 1000-seat concert hall with a flown Nexo array, a multitude of incredible microphones, and a generous production budget.

Discovering SoundGirls a couple of years ago was huge for me. I was 50 and starting to question who I was – was I a sound engineer, a teacher, a manager? I was middle-aged and grey-haired, and still coiling cables – what kind of role model was I? How do you measure career success in the world of audio – FOH for a touring group? Or simply having a workplace where you get to work and talk about audio all day?

After getting to meet the Melbourne Soundgirls and share stories, I found my personal story was finally validated – the fact that I am still here 25 years later thinking, talking, working and now inspiring young people to pursue a career in audio defines me as an audio professional.

New goals are to further my knowledge of acoustics.  I love the science of sound, how it behaves in a space, and I’ve taken very baby steps in studying acoustics – logarithms, logarithms, and more logarithms!

There Really Is No Such Thing As A Free Lunch

Using The Scientific Method in Assessment of System Optimization

A couple of years ago, I took a class for the first time from Jamie Anderson at Rational Acoustics, where he said something that has stuck with me ever since. It was something to the effect of: our job as system engineers is to make it sound the same everywhere, and it is the job of the mix engineer to make it sound “good” or “bad”.

The reality in the world of live sound is that there are many variables stacked up against us. A scenic element in the way of speaker coverage, a client that does not want to see a speaker in the first place, a speaker that has done one too many gigs and decides that today is the day for one driver to die during load-in – these and a myriad of other things can stand in the way of the ultimate goal: a verified, calibrated sound system.

The Challenges Of Reality

 

One distinction that must be made before beginning the discussion of system optimization is that we must draw a line here and make all intentions clear: what is our role at this gig? Are you just performing the tasks of the systems engineer? Are you the systems engineer and FOH mix engineer? Are you also the tour manager, working directly with the artist’s manager? Why does this matter, you may ask? The fact of the matter is that when it comes down to making final evaluations on the system, there are going to be executive decisions that will need to be made, especially in moments of triage. Having clearly defined what one’s role at the gig is will help in making these decisions when the clock is ticking away.

So in this context, we are going to discuss the decisions of system optimization from the point of view of the systems engineer. We have decided that the most important task of our gig is to make sure that everyone in the audience is having the same show as the person mixing at front-of-house. I’ve always thought of this as a comparison to a painter and a blank canvas. It is the mix engineer’s job to paint the picture for the audience to hear; it is our job as system engineers to make sure the painting sounds the same every day by providing the same blank canvas.

The scientific method teaches the concept of control with independent and dependent variables. We have an objective that we wish to achieve, and we assess our variables in each scenario to come up with a hypothesis of what we believe will happen. Then we execute a procedure, controlling the variables we can, and analyze the results given the tools at hand to draw conclusions and determine whether we have achieved our objective. Recall that an independent variable is a factor you deliberately change in an experiment, a dependent variable is the result you observe in response, and anything you cannot change has to be treated as a fixed condition of the experiment. In the production world, these terms can have a variety of implications. It is an unfortunate, commonly held belief that system optimization starts at the EQ stage when really there are so many steps before that. If there is a column in front of a hang of speakers, no EQ in the world is going to make them sound like they are not shadowed behind a column.

Now everybody take a deep breath in and say, “EQ is not the solution to a mechanical problem.” And breathe out…

Let’s start with preproduction. It is time to assess our first round of variables. What are the limitations of the venue? Trim height? Rigging limitations? What are the limitations proposed by the client? Maybe there is another element to the show that necessitates the PA being placed in a certain position over another; maybe the client doesn’t want to see speakers at all. We must ask, with both our technical brains and our career paths in mind, what can we change and what can we not change in each scenario? Note that it will not always be the same in every circumstance. In one scenario, we may be able to convince the client to let us put the PA anywhere we want, making its position a variable we are free to manipulate. In another situation, for the sake of our gig, we must accept that the PA will not move, or that the low steel of the roof is a bleak 35 feet in the air, and thus we face a fixed condition we have to work around.

The many steps of system optimization that lie before EQ

 

After assessing these first sets of variables, we can now move into the next phase and look at our system design. Again, say it with me, “EQ is not the solution to a mechanical problem.” We must assess our variables again in this next phase of the optimization process. We have been given the technical rider of the venue that we are going to be at, and maybe due to budgetary constraints we cannot change the PA: a fixed condition. Perhaps we are carrying our own PA and thus have control over the design within the limitations of the venue: now the design becomes a variable we can manipulate, though with caveats. Let’s look deeper into this particular scenario and ask ourselves: as engineers building our design, what do we have control over now?

The first step lies in what speaker we choose for the job. Given the ultimate design-control scenario, where we get the luxury of picking and choosing the loudspeakers we use in our design, different directivity designs will lend themselves better to one scenario versus another. A point source has just as much validity as the deployment of a line array, depending on the situation. For a small audience of 150 people with a jazz band, a point-source speaker over a sub may be more valid than showing up with a 12-box line array that necessitates a rigging call to fly it from the ceiling. But even in this scenario, there are caveats in our delicate weighing of variables. Where are those 150 people going to be? Are we in a ballroom or a theater? Even the evaluation of our choices on what box to choose for a design is as varied as deciding what type of canvas we wish to use for the mix engineer’s painting.

So let’s create a scenario: we are doing an arena show and the design has been established, with a set number of boxes for daily deployment and an agreed-upon design from the production team. The design is pretty much cut and paste in terms of rigging points, but we have varying limitations on trim height due to the high and low steel of each venue. What variables do we now have control over? We still have a decent amount of control over trim height, up to the (literal) limit of the motor, but we also have control over the vertical directivity of our (let’s make the design decision for the purpose of discussion) line array. There is a hidden assumption here that is often under-represented when talking about system designs.

A friend and colleague of mine, Sully (Chris) Sullivan, once pointed out to me that the hidden design assumption we often make as system engineers, but don’t necessarily acknowledge, is that the loudspeaker manufacturer has actually achieved the horizontal coverage dictated by the technical specifications. This made me reconsider the things I take for granted in a given system. In our design, we choose to use Manufacturer X’s 120-degree line source element. They have established in their technical specs that there is a measurable point at 60 degrees off-axis (120-degree total coverage) where the polar response drops 6 dB. We can take our measurement microphone and check that the response is what we think it is, but if it isn’t, what really are our options? Perhaps we have a manufacturer defect or a blown driver somewhere, but unless we change the physical parameters of the loudspeaker, this is a variable that we put in the trust of the manufacturer. So what do we have control over? He pointed out to me that our decision choices lie in the manipulation of the vertical.

Entire books and papers can and have been written about how we can control the vertical coverage of our loudspeaker arrays, but certain factors remain consistent throughout. Inter-element angles, or splay angles, let us control the summation of elements within an array. Site angle and trim height let us control the geometric relationship of the source to the audience and thus affect the spread of SPL over distance. Azimuth also gives us geometric control of the directivity pattern of the entire array along a horizontal dispersion pattern. Note that this is a distinction from the horizontal pattern control of the frequency response radiating from the enclosure, of which we have handed responsibility over to the manufacturer. Fortunately, the myriad of loudspeaker prediction software available from modern manufacturers has given the modern system engineer an unprecedented level of ability to assess these parameters before a single speaker goes up into the air.
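
For the “spread of SPL over distance” part of that control, the underlying arithmetic is simply the level change with distance. Here is a minimal sketch, assuming a plain point-source (inverse-square) model, which a coupled line array only approximates in part of its coverage; the function name and the example distances are my own:

import math

# Point-source estimate of the level change between two listening distances.
# A line array in its coupled near field behaves closer to 3 dB per doubling
# of distance rather than 6, so treat this as a worst-case figure.
def spl_drop_db(near_m, far_m):
    return 20.0 * math.log10(far_m / near_m)

print(spl_drop_db(10, 40))   # ~12 dB quieter at 40 m than at 10 m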

At this point, we have made a lot of decisions on the design of our system and weighed the variables along every step of the way to draw out our procedure for the system deployment. It is now time to analyze our results and verify that what we thought was going to happen did or did not happen. Here we introduce our tools to verify our procedure in a two-step process of mechanical, then acoustical, verification. First, we use tools such as protractors and laser inclinometers as a means of collecting data to assess whether we have achieved our mechanical design goal. For example, our model says we need a site angle of 2 degrees to achieve this result, so we verify with the laser inclinometer that we got there. Once we have confirmed that we made our design’s mechanical goals, we must analyze the acoustical results.
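
The mechanical verification itself is mostly trigonometry. As a rough illustration (my own example, assuming a seated ear height of about 1.2 m), you can sanity-check where a given trim height and site angle actually aim the array before confirming the angle with the inclinometer:

import math

# Where does the on-axis aim of a hung array intersect the listening plane?
# trim_height_m: height of the array's acoustic centre above the floor
# site_angle_deg: downward tilt (0 degrees = aiming at the horizon)
def aim_distance_m(trim_height_m, site_angle_deg, ear_height_m=1.2):
    drop = trim_height_m - ear_height_m
    return drop / math.tan(math.radians(site_angle_deg))

print(aim_distance_m(9.0, 2.0))   # a 2-degree site angle from 9 m trim aims ~220 m away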

Laser inclinometers are just one example of a tool we can use to verify the mechanical actualization of a design.

It is only at this stage that we finally introduce the analysis software to examine the response of our system. After examining our role at the gig, the criteria involved in pre-production, the design elements appropriate for the task, and the verification of their deployment, only now can we move into the realm of analysis software to see if all those goals were met. We can use dual-channel measurement software to take transfer functions at different stages of the input and output of our system to verify that our design goals have been met, and more importantly, to see where they have not been met and why. This is where our ability to critically interpret the data comes into play. By evaluating impulse response data, dual-channel FFT (Fast Fourier Transform) measurements, and the coherence of our gathered data, we can assess how well our design has been realized in the acoustical and electronic realms.
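For anyone curious about what the analyzer is doing under the hood, here is a minimal sketch of a dual-channel transfer function and coherence estimate using SciPy. It assumes you already have the reference signal (what you sent to the system) and the microphone capture at the same sample rate; it is a bare-bones illustration, not any particular analyzer’s implementation:

```python
import numpy as np
from scipy import signal

def transfer_function(reference: np.ndarray, measurement: np.ndarray,
                      fs: float, nperseg: int = 8192):
    """Dual-channel estimate: frequency, magnitude (dB), phase (deg), coherence.

    reference   : what was sent to the system (e.g. the processor output)
    measurement : the microphone capture of the same program material
    """
    # Cross- and auto-spectra, averaged over overlapping windowed segments
    f, pxy = signal.csd(reference, measurement, fs=fs, nperseg=nperseg)
    _, pxx = signal.welch(reference, fs=fs, nperseg=nperseg)

    h = pxy / pxx                                # H1 transfer function estimate
    magnitude_db = 20.0 * np.log10(np.abs(h))
    phase_deg = np.degrees(np.angle(h))

    # Coherence tells us how much of the measurement is linearly related to
    # the reference; low coherence means we shouldn't trust that data point.
    _, coherence = signal.coherence(reference, measurement, fs=fs, nperseg=nperseg)
    return f, magnitude_db, phase_deg, coherence
```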

What’s interesting to me is that the discussion of system optimization often starts here. In fact, as we have seen, the process begins as early as the pre-production stage, when talking with different departments and the client, and even when asking ourselves what our role is at the gig. The final analysis of any design comes down to the tool that we always carry with us: our ears. Our ears are the final arbiters after our evaluation of acoustical and mechanical variables, and they are used at every step of our design path, along with our trusty “common sense.” In the end, our careful assessment of variables leads us to use the power of the scientific method to make educated decisions and work towards our end goal: the blank canvas, ready to be painted.

Big thanks to the following for letting me reference them in this article: Jamie Anderson at Rational Acoustics, Sully (Chris) Sullivan, and Alignarray (www.alignarray.com)

Gain Without the Pain

 

Gain Structure for Live Sound Part 1

Gain structure and gain staging are terms that get thrown about a lot, but they often get skimmed over as being obvious without ever being fully explained. The way some people talk about it, and mock other people for theirs, you’d think proper gain structure was some special secret skill known only to the most talented engineers. It’s actually pretty straightforward, but knowing how to do it well will save you a lot of headaches down the line. All it really means is setting your channels’ gain levels high enough that you get plenty of signal to work with, without risking distortion. It often gets discussed in studio circles because it’s incredibly important to the tone and quality of a recording, but in a live setting we have other things to consider on top of that.

So, what exactly is gain?

It seems like the most basic question in sound, but the term is often misunderstood. Gain is not simply the same as volume. It’s a term that comes from electronics, referring to how much an amplifier increases the amplitude of an incoming signal; in our case, it’s how much we change our input’s amplitude by turning the gain knob. In analogue desks, turning the knob engages more circuitry in the preamp to increase the gain (have you ever used an old desk where you needed just a bit more level, so you slowly and smoothly turned the gain knob, and it made barely any difference… nothing… nothing… then suddenly it was much louder? That was probably because it crossed the threshold to the next circuit being engaged).

Digital desks do something similar, but using digital signal processing. It is often called trim instead of gain, especially if no actual preamp is involved. For example, many desks won’t show you a gain knob if you plug something into a local input on the back, because their only preamps are in the stagebox; you will see a knob labelled trim instead (I do know these knobs are technically rotary encoders because they don’t have a defined end point, but they are commonly referred to as knobs. Please don’t email in). Trim can also refer to finer adjustments of the input’s signal level, but as a rule of thumb it’s pretty much the same as gain. Gain is measured as the difference between the signal level when it arrives at the desk and when it leaves the preamp at the top of the channel strip, so it makes sense that it’s measured in decibels (dB), which are a measure of ratios.
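Because it’s a ratio, the numbers on the gain knob come from a simple logarithm. A quick sketch, purely to illustrate the relationship (the voltages are made-up, ballpark figures, not measurements from any particular desk or microphone):

```python
import math

def gain_db(v_in: float, v_out: float) -> float:
    """Gain in dB for an amplitude (voltage) ratio: 20 * log10(out / in)."""
    return 20.0 * math.log10(v_out / v_in)

# A dynamic vocal mic might hand the desk a couple of millivolts, and we
# want to bring that up to roughly line level (illustrative numbers only):
print(gain_db(0.002, 1.0))   # about +54 dB of gain at the preamp
```

So the 50-odd dB you often end up dialling in for a quiet vocal mic is just that ratio written in a more convenient form.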

The volume of the channel’s signal once it’s gone through the rest of the channel strip and any outboard is controlled by the fader. You can think of the gain knob as controlling input, and the fader as controlling output (let’s ignore desks with a gain on fader feature. They make it easier for the user to visualise the gain but the work is still being done at the top of the channel strip).

Now, how do you structure it?

For studio recording, the main concern is getting a good amount of signal over the noise floor of all the equipment being used in the signal chain. Unless you’re purposefully going for a lo-fi, old-school sound, you don’t want a lot of background hiss all over your tracks. A nice big signal-to-noise ratio, without distortion, is the goal. In live settings, we can view other instruments or stray noises in the room as part of that noise floor, and we also have to avoid feedback at the other end of the scale. There are two main approaches to setting gains:

Gain first: With the fader all the way down, you dial the gain in until it’s tickling the yellow or orange LEDs on your channel or PFL meter while the signal is at its loudest, but not quite going into the red or ‘peak’ LEDs (of course, if it’s hitting the red without any gain, you can stick a pad in. You might find a switch on the microphone, instrument or DI box, and on the desk. If the mic is being overwhelmed by the sound source, it’s best to use its internal pad if it has one, so it can handle the level better and deliver a distortion-free signal to the desk). You then bring the fader up until the channel is at the required level. This method gives you a nice, strong signal. It also gives that to anyone sharing the preamps with you, for example, monitors sharing the stagebox or a multitrack recording. However, because faders are marked in dB, which is a logarithmic scale, it can cause some issues. If you look at a fader strip, you’ll see the numbers get closer together the further down they go. So if you have a channel where the fader is near the bottom and you want to change the volume by 1 dB, you’d have to move it about a millimetre (there’s a rough sketch of this after the second approach below). Anything other than a tiny change could make the channel blaringly loud, or so quiet it gets lost in the mix.

Fader at 0: You set all your faders at 0 (or ‘unity’), then bring the gain up to the desired level. This gives you more control over those small volume changes, while still leaving you headroom at the top of the fader’s travel. It’s also easier to see if a fader has been knocked, or to know where to return a fader to after boosting it for a solo, for example. However, it can leave anyone sharing your gains with weak or uneven signals. If you’re working with an act you are unfamiliar with, or one that is particularly dynamic, having the faders at zero might not leave you enough headroom for quieter sections, forcing you to increase the gain mid-show. This is far from ideal, especially if you are running monitors, because you’re changing everyone’s mix without being able to hear those changes in real time, and increasing the gain increases the likelihood of feedback. In these cases, it might be beneficial to set all your faders at -5, for example, just in case.
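Here is the fader-resolution point from the first approach in number form. The fader law below is made up (a generic-looking 100 mm long-throw taper, not any particular manufacturer’s), but it shows why a 1 dB change takes a few millimetres of travel around unity and well under a millimetre down near the bottom of the strip:

```python
import numpy as np

# A made-up, generic-looking 100 mm long-throw fader law, purely to
# illustrate the point; it is not any particular manufacturer's taper.
# Positions are mm of travel from the bottom; values are fader level in dB.
travel_mm = np.array([0.0, 10.0, 25.0, 45.0, 70.0, 85.0, 100.0])
level_db = np.array([-90.0, -50.0, -30.0, -10.0, 0.0, 5.0, 10.0])

def mm_per_db(at_db: float) -> float:
    """Approximate physical travel needed for a 1 dB change around a level."""
    pos_lo = np.interp(at_db - 0.5, level_db, travel_mm)
    pos_hi = np.interp(at_db + 0.5, level_db, travel_mm)
    return float(pos_hi - pos_lo)

print(mm_per_db(0.0))    # around unity: a few millimetres per dB
print(mm_per_db(-45.0))  # low on the strip: under a millimetre per dB
```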

In researching this blog, I found some people set their faders as a visual representation of their mix levels, then adjust their gains accordingly. It isn’t a technique I’ve seen in real life, but if you know the act well and it makes sense for your workflow, it could be worth trying. Once you’ve set your gates, compressors, EQ, and effects, and added the volume of all the channels together, you’ll probably need to go back and adjust your gains or faders again, but these approaches will get you in the right ballpark very quickly.

All these methods have their pros and cons, and you may want to choose between them for different situations. I learned sound using the first method, but I now prefer the second method, especially for monitors. It’s clear where all the faders should sit even though the sends to auxes might be completely different, and change song to song. Despite what some people might say, there is no gospel for gain structure that must be followed. In part 2 I’ll discuss a few approaches for different situations, and how to get the best signal-to-noise ratio in those circumstances. Gain structure isn’t some esoteric mystery, but it is important to get right. If you know the underlying concepts you can make informed decisions to get the best out of each channel, which is the foundation for every great mix.

 

Whose Job is It? When Plug-in Effects are Sound Design vs. Mix Choices.

We’ve reached out to our blog readership several times to ask for blog post suggestions.  And surprisingly, this blog suggestion has come up every single time. It seems that there’s a lot of confusion about who should be processing what.  So, I’m going to attempt to break it down for you.  Keep in mind that these are my thoughts on the subject as someone with 12 years of experience as a sound effects editor and supervising sound editor.  In writing this, I’m hoping to clarify the general thought process behind making the distinction between who should process what.  However, if you ever have a specific question on this topic, I would highly encourage you to reach out to your mixer.

Before we get into the specifics of who should process what, I think the first step to understanding this issue is understanding the role of mixer versus sound designer.

UNDERSTANDING THE ROLES

THE MIXER

If we overly simplify the role of the re-recording mixer, I would say that they have three main objectives when it comes to mixing sound effects.  First, they must balance all of the elements together so that everything is clear and the narrative is dynamic.  Second, they must place everything into the stereo or surround space by panning the elements appropriately.  Third, they must place everything into the acoustic space shown on screen by adding reverb, delay, and EQ.

Obviously, there are many other things accomplished in a mix, but these are the absolute bullet points and the most important for you to understand in this particular scenario.

THE SOUND DESIGNER

The sound designer’s job is to create, edit, and sync sound effects to the picture.


BREAKING IT DOWN

EQ

It is the mixer’s job to EQ effects if they are coming from behind a door, are on a television screen, etc.  Basically, anything where all elements should be futzed for any reason.  If this is the case, do your mixer a favor and ask ahead of time if he/she would like you to split those FX out onto “Futz FX” tracks. You’ll totally win brownie points just for asking.  It is important not to do the actual processing in the SFX editorial, as the mixer may want to alter the amount of “futz” that is applied to achieve maximum clarity, depending on what is happening in the rest of the mix.

It is the sound designer’s job to EQ SFX if any particular elements have too much/too little of any frequency to be appropriate for what’s happening on screen.  Do not ever assume that your mixer is going to listen to every single element you cut in a build, and then individually EQ them to make them sound better.  That’s your job!  Or, better yet, don’t choose crappy SFX in the first place!

REVERB/DELAY

It is the mixer’s job to add reverb or delay to all sound effects when appropriate in order to help them to sit within the physical space shown on screen.  For example, he or she may add a bit of reverb to all sound effects which occur while the characters on screen are walking through an underground cave.  Or, he or she may add a bit of reverb and delay to all sound effects when we’re in a narrow but tall canyon.  The mixer would probably choose not to add reverb or delay to any sound effects that occur while a scene plays out in a small closet.

As a sound designer, you should be extremely wary of adding reverb to almost any sound effect.  If you are doing so to help sell that it is occurring in the physical space, check with your mixer first.  Chances are, he or she would rather have full control by adding the reverb themselves.

Sound designers should also use delay fairly sparingly.  This is only a good choice if it is truly a design choice, not a spatial one.  For example, if you are designing a futuristic laser gun blast, you may want to add a very short delay to the sound you’re designing purely for design purposes.

When deciding whether or not to add reverb or delay, always ask yourself whether it is a design choice or a spatial choice.  As long as the reverb/delay has absolutely nothing to do with where the sound effect is occurring, you’re probably in the clear.  But you may still want to supply a muted version without the effect in the track below, just in case your mixer finds that the version with the effect doesn’t play well in the mix.

COMPRESSORS/LIMITERS

Adding compressors or limiters should be the mixer’s job 99% of the time.

The only instance in which I have ever used dynamics processing in my editorial was when a client asked me to trigger a pulsing sound effect whenever a particular character spoke (there was a visual pulsing to match).  I used a side chain and gate to do this, but first I had an extensive conversation with my mixer about whether he would rather I did this and gave him the tracks, or set it up himself.  If you are gating any sound effects purely to clean them up, then my recommendation would be to just find a better sound.

PITCH SHIFTING

A mixer does not often pitch shift sound effects unless a client specifically asks that he or she do so.

Thus, pitch shifting almost always falls on the shoulders of the sound designer.  This is because when it comes to sound effects, changing the pitch is almost always a design choice rather than a balance/spatial choice.

MODULATION

A mixer will sometimes use modulation effects when processing dialogue, but it is very uncommon for them to dig into sound effects with this type of processing.

Most often this type of processing is done purely for design purposes, and thus lands in the wheelhouse of the sound designer.  You should never design something with unprocessed elements, assuming that your mixer will go in and process everything so that it sounds cooler.  It’s the designer’s job to make all of the elements as appropriate as possible to what is on the screen.  So, go ahead and modulate away!

NOISE REDUCTION

Mixers will often employ noise reduction plugins to clean up noisy sounds.  But, this should never be the case with sound effects, since you should be cutting pristine SFX in the first place.

In short, neither of you should be using noise reduction plugins.  If you find yourself reaching for RX while editing sound effects, you should instead reach for a better sound!  If you’re dead set on using something, say a recording you made yourself that’s just too perfect to pass up but incredibly noisy, then by all means process it with noise reduction software.  Never assume that your mixer will do this for you.  There’s a much better chance that the offending sound effect will simply be muted in the mix.


ADDITIONAL NOTES

INSERTS VS AUDIOSUITE

I have one final note about inserts versus AudioSuite plug-in use.  Summed up, it’s this: don’t use inserts as an FX editor/sound designer.  Always assume that your mixer is going to grab all of the regions from your tracks and drag them into his or her own tracks within the mix template.  There’s a great chance that your mixer will never even notice that you added an insert.  If you want an effect to play in the mix, then make sure that it’s been printed to your sound files.

AUTOMATION AS EFFECTS

In the same vein, it’s a risky business to create audio effects with automation, such as zany panning or square-wave volume automation.  These may sound really cool, but always give your mixer a heads up ahead of time if you plan to do something like this.  Some mixers automatically delete all of your automation so that they can start fresh.  If there’s any automation that you believe is crucial to the design of a sound, then make sure to mention it before your work gets dragged into the mix template.

Mix Messiah – Leslie Gaston-Bird

Leslie Gaston-Bird is a freelance re-recording mixer and sound editor, and owner of Mix Messiah Productions. She is currently based in Brighton, England, and is the author of the book “Women in Audio”. She is a voting member of The Recording Academy and sits on several AES committees: Board of Governors, Awards, Conference Policy, Convention Policy, Education, and Membership; she also co-chairs the Diversity & Inclusion Committee with Piper Payne. She was a tenured Associate Professor of Recording Arts at the University of Colorado Denver. Leslie is also Co-Director for the SoundGirls U.K. Chapter and SoundGirls Scholarships and Travel Grants. She has worked in the industry for over 30 years.

Leslie has done research into audio for planetariums, multichannel audio on Blu-ray, and a comparison of multichannel codecs that was published in the AES Journal (Gaston, L. and Sanders, R. (2008), “Evaluation of HE-AAC, AC-3, and E-AC-3 Codecs”, Journal of the Audio Engineering Society, 56(3)). She frequently presents at AES conferences and conventions.

Her 30-plus years in the industry break down into 12 years in public radio, 17 in sound for picture, and 13 years as an educator (some of these years overlap). Her interest in sound for film was sparked by seeing Leslie Ann Jones on the cover of Mix Magazine in the 1980s. She attended Indiana University Bloomington and graduated with an A.S. in Audio Technology and a B.A. in Telecommunications. While there, she signed up for a work-study job as a board operator at the campus radio station, WFIU-Bloomington. This gave her the skills she needed for her first job, which was at National Public Radio in Washington, D.C.

Leslie worked at NPR from 1991-1995 as their audio systems manager. She recorded and edited radio pieces and did a ton of remote recording and interviews on DAT tape.  (Who remembers DAT tape?) From NPR she went on to work for Colorado Public Radio as their Audio Systems Manager.

Although Leslie loved working for both NPR and Colorado Public Radio, her passion was sound for film, and it was not easy for her to get her foot in the door.  It took her over four years to find someone who would take a chance on her. Her gratitude for this opportunity goes to Patsy Butterfield, David Emrich, and Chuck Biddlecom at Post Modern Company in Denver.

Leslie still works as a freelancer in Film Sound and has currently been working on several horror films and thrillers.  “For some reason, I keep getting horror films to work on. I recently did the sound for Leap of Faith, a documentary about The Exorcist which has been selected for the Sundance Film Festival in 2020. Also coming out is A Feral World, a post-apocalyptic tale of survival about a young boy who befriends the mother of a missing girl. It’s not a horror film but there are a few violent scenes. I also did sound for Doc of the Dead, a documentary about zombies and zombie culture. The plot for the current film I’m working on, Rent-A-Pal, is one I’m not at liberty to disclose, but suffice it to say there’s a pattern here. However, I have also done some great documentaries focused on peace and harmony, too! Three Worlds, One Stage featured a woman directing/producing team (Jessica McGaugh and Roma Sur of Desert Girl Films) and told the story of three people from different cultures who moved to the United States and choreographed a dance together, and Enough White Teacups (directed by Michelle Carpenter) which explores the winners of the Index design awards which recognize innovations designed to improve the human condition. Michelle also did Klocked, a story of a mother-daughter-daughter motorcycle racing team. I’m proud to have worked on these woman-powered projects.”

While Leslie was working at Post Modern at night, she was also pursuing a Master’s degree, and her professors encouraged her to apply for a teaching position.  She did, and ended up as a tenured professor at the University of Colorado Denver, where she taught until 2018, when she relocated to Brighton, England. She was also encouraged by her professors, the late Rich Sanders and Roy Pritts, to join the AES, where she became heavily involved.

“It has opened so many doors. I met Dave Malham at an AES convention in San Francisco and he ended up being my sponsor for a Fulbright Award at the University of York, England. I have done lots with AES, from being secretary of my local section to chair, then Western Region VP and Governor. In 2016 Piper Payne helped me to start the Diversity and Inclusion committee which we co-chair. We have come a long way, most recently partnering with Dr. Amandine Pras at the University of Lethbridge for their “Microaggressions in the Studio” survey. I’m really proud of the changes we have made, the AES Convention in New York was proof of our impact, with high visibility of women and underrepresented groups on panels, presenting papers and workshops, and even in the exhibit floor. In my 15 years of attending conferences I’ve never seen anything like it and we received so much positive feedback. We have more work to do but we have every reason to be proud of these accomplishments.”

In 2018, Leslie and her family relocated to Brighton, England, to be closer to her husband’s family (he is British), and it looks like they will be there for the foreseeable future. In addition to running her own business, her work with the AES, and writing Women in Audio (did we mention she is starting a Ph.D.?), Leslie is the mother of two children.  She balances it all by being highly organized and managing her time well. She says, “Somewhere I read that mothers of siblings are more productive. I think it’s because you have to be focused when you work. I think to myself, ‘okay, I only have 3 hours to do x-y-z’ and I’m on it! No time to procrastinate! It’s not easy but in ways, it’s better because you learn the value of budgeting time and focusing on the task at hand.”

Leslie has a book coming out in December, Women in Audio, and she shares the experience of writing it and why it matters:

“More than anything, I hope this book is a testament to my commitment and indebtedness to the women who have trusted me with their stories. I must say, I have been nervous at times because the weight of these stories is truly immense; women whose stories might otherwise go untold are brought to light here. I have found so many pioneering women throughout history: inventors, record producers, acousticians. I’ve tried to cover every field of audio I could. Altogether there are around 100 profiles. It’s really a must-have for women and girls seeking inspiration; for schools who want to add diversity to their curriculum (I took care to seek out women from all over the globe); for professionals who may think they’re the only woman in their area of expertise. I also talk about role models, mentoring, and networking. I’m really looking forward to sharing it with everyone!”

With a career spanning over 30 years, working in several roles as Educator, Mixer, Musician/Talent, Production Sound Mixer/Sound Recordist, Recording Engineer, Re-Recording Mixer, Researcher, Sound Supervisor, and Author, you would think Leslie is ready to rest on her laurels. But no: in 2020, at the age of 51, she will begin her Ph.D. at the University of Surrey.

What do you like best about working in Film Sound?

What I like most about working on films is the meditative rhythm of finding and selecting sounds, shaping the sounds, and giving the film a sense of realism.

What do you like least?

The thing I like least is computer crashes. It’s the rise of the machines – they are training us.

What is your favorite day off activity?

Hanging out with my kids.

What are your long term goals?

I have written a book on Women in Audio, which I hope to follow up with another volume. There are so many amazing women in all sorts of audio fields, and it is an honor to share their stories. I would also like to continue supporting women to travel to and attend conferences with the fund I set up with SoundGirls.

What if any obstacles or barriers have you faced?

Moving to England and leaving a tenured position at a university was equal parts confidence and insanity. I have always believed in risks, but at age 50 I still feel the need to prove myself. I’m planning to start a Ph.D., but I have a feeling that women – more than their male counterparts – feel the need to seek higher academic qualifications in order to compete in the job market. It’s something I hope will change.

How have you dealt with them?

Well, by applying for a Ph.D.  I’ve been accepted at the University of Surrey and will start in 2020, the year I turn 51.

The advice you have for other women and young women who wish to enter the field?

Stay versatile and stay connected.

Must have skills?

You can always train your ears and learn the equipment, but the most valuable skills are creativity, diplomacy and client service.

Favorite gear?

Loudspeakers: Genelec, PMC. Preamps: Grace, Neve 5012.

Parting Words:

I suppose one thing I’d like readers to know about is a moment I had recently, standing in my dining room, looking over some pictures that I had received from a man named Dana Burwell. The pictures were of Joan Lowe, a recording engineer who worked on some feminist albums in the 1970s (The Changer and the Changed, among others). Joan Lowe did not have family, and these pictures were entrusted to me for the purposes of writing the book, Women in Audio. The only reason Dana knew me was that I had reached out to Joan in November to interview her for the book. Joan had emailed me answers to my questions but passed away in February. If I hadn’t been in touch with Joan, I wonder what Dana would have done with those photos.

So there I was, standing in the living room, with pictures of a very friendly woman who I just met, who shared her story with me – and who trusted me with her story – and who passed away a short while later. I now had the duty to share her story.  It’s a responsibility I haven’t taken lightly. On that day it happened to be sunny. I looked up at the sky, and thanked Joan, with an expression on my face that was a combination of awestruck and joyful. I continued writing with a renewed passion that day. Something else in me changed, too, but I’ll leave that for another interview.  In the meantime, it’s an honor and a privilege to bring these stories to our audio community.

More on Leslie

Find More Profiles on The Five Percent

Profiles of Women in Audio
