Keeping it Real – Section 2

This is Section 2 of Becky Pell’s three-section article on using psychoacoustics in IEM mixing and the technology that takes it to the next level.

Acoustic Reflex Threshold

Have you ever noticed how you and the band can take a break from rehearsing, come back half an hour later, and when you put your ears back in everything feels louder? And then how, after a few moments, it settles down and feels normal again? It’s because of a reflex action of the stapedius muscle in the middle ear. When this little muscle contracts, it pulls the stapes or ‘stirrup bone’ slightly away from the oval window of the cochlea, against which it normally vibrates to transmit pressure waves to be converted into nerve impulses. This action, a response to sounds of between 70 and 100 dB SPL, effectively creates a compression effect resulting in a reduction of around 20 dB in what you hear. However, the muscle can’t stay fully contracted for long periods, so after a few seconds the tension drops to around 50% of the maximum. Whilst the initial reaction, at around 150 milliseconds, is not fast enough to fully protect the ear against very loud and sudden transient sounds, it helps in reducing hearing fatigue over longer periods. Interestingly, this reflex also occurs when a person vocalises, which helps to explain why a singer’s in-ear mix of the band might sound loud enough in isolation, but when they start singing they find they need more instrumentation. This happens in conjunction with the fact that they are hearing themselves not only via the mix but also through the bone conductivity of their skull. It’s well worth trying to sing along to an IEM mix that you’ve prepared for a singer, to experience what this feels like for them, because it’s a very different sensation from simply shouting down the mic to EQ it.

The acoustic reflex threshold also means that transients appear quieter than sustained sounds of the same level, and it’s the thinking behind a compression trick that is often used in studios and film production. When you compress the decay of a short sound such as a drum hit, it fools the brain into thinking the drum hit as a whole is significantly louder and punchier than it is, although the peak level – the transient – has not changed. Personally, I’d advocate caution if you’re going to try this in a monitor mix – the drummer needs to hear what their drums ACTUALLY sound like, and getting things such as drum tuning and mic placement correct at source are vital – but it’s an interesting thing to be aware of.

All in the timing

Our ability to perceive sounds as separate events depends not only on there being sufficient difference between them in frequency, but also on timing. This phenomenon is known as the ‘precedence effect’, or the ‘Haas effect.’

These effects describe how, when two identical sounds are presented in quick succession, they are heard as a single sound. This perception occurs when the delay between the two sounds is between 1 and 5 ms for single click sounds, but up to 40 ms for more complex sounds such as piano music. When the lag is longer, the second sound is heard as an echo. A single reflection arriving within 5 to 30 ms can be up to 10 dB louder than the direct sound without being perceived as a distinct event. In 1951 Helmut Haas examined how the perception of speech is affected in the presence of a single reflection. He discovered that a reflection arriving later than 1 ms after the direct sound increases the perceived level and spaciousness (more precisely, the perceived width of the sound source) without being heard as a separate sound. This holds true up to around 20 ms, at which point the sounds become distinguishable.

This can be an interesting experiment to try with a vocal mic and your IEMs. If you split the vocal mic down two channels and delay one input somewhere between 1 and 20 ms, see what you notice. Then try panning one input hard left and the other hard right, and notice how the vocal sounds thicker and creates a sense of width and space. Play with the delay time and you’ll hear that if it’s too short the signal starts to phase; too long and you lose the illusion. This trick does make the signal susceptible to comb-filtering if you sum the inputs back to mono, especially at shorter delay times, so be aware of that.
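The comb-filtering risk is easy to quantify: when a copy of a signal is delayed by t seconds and summed back in, cancellation notches appear at odd multiples of 1/(2t), repeating every 1/t Hz. This is a minimal Python sketch of that arithmetic (the delay times are illustrative, not recommendations):

```python
# Notch frequencies created by summing a signal with a copy of itself
# delayed by t seconds: cancellations fall at odd multiples of 1/(2t).

def comb_notches(delay_ms, max_hz=20000):
    """Return the cancellation frequencies (Hz) below max_hz for a delay."""
    t = delay_ms / 1000.0              # delay in seconds
    notches = []
    f = 1.0 / (2.0 * t)                # first notch
    while f <= max_hz:
        notches.append(round(f, 1))
        f += 1.0 / t                   # notches repeat every 1/t Hz
    return notches

print(comb_notches(1)[:4])    # first notches for a 1 ms delay
print(comb_notches(20)[:4])   # a 20 ms delay packs them far closer together
```

With a 1 ms delay the first notch lands at 500 Hz, squarely in the midrange, which is why very short delays sound obviously ‘phasey’ in mono; at 20 ms the notches sit only 50 Hz apart and the effect reads more as an echo than as coloration.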

Once again I would advocate extreme caution if you intend to use this in a monitor mix, as ‘tricking’ a singer in this way can backfire! However, it’s a useful principle to be aware of if you have the opportunity to get creative with other sounds, and I use it a lot when adding pre-delay to a reverb – try it for yourself. No pre-delay creates a feeling of immediacy to the effect, but just 5-10 ms creates a slight sense of space. If you’re after a little more breathiness and drama – ‘vampires swirling’ as I once heard it described – try increasing the pre-delay up to 20 ms and feel how it changes.

The Haas effect is also something to be very aware of for IEM mixing when it comes to digital latency. Every time we take a signal out of the console and send it somewhere else in the digital domain, a small amount of time delay, known as latency, is introduced. Different processing devices introduce different amounts of latency, and obviously the less, the better. The more devices we add, the more the latency stacks up. Whilst a few milliseconds of latency may be totally imperceptible for, say, a guitarist, it’s a different matter when it comes to vocals. A singer will often be able to perceive something as being not quite right, without being able to put their finger on it, because when we vocalise and have that signal returned to our ears, the discrepancy between what we hear at the moment of making the sound and the moment of it returning becomes heightened in our awareness. It’s something to be vigilant about when using any digital outboard, such as plug-ins, on a singer’s channel.
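Because latency accumulates device by device, it can be worth totalling a signal chain on paper before committing to it. A trivial Python sketch of that bookkeeping, with purely illustrative per-device figures (real numbers come from each unit’s spec sheet):

```python
# Digital latency stacks: every device in the path adds its own delay.
# The per-device figures below are illustrative only, not measured values.

chain_ms = {
    "console A/D + internal processing + D/A": 1.5,
    "external plug-in host (round trip)": 2.0,
    "digital wireless IEM system": 0.5,
}

for device, ms in chain_ms.items():
    print(f"{device}: {ms} ms")

total = sum(chain_ms.values())
print(f"total: {total} ms")
```

A guitarist may never notice a total like this, but a singer hears it summed against the instantaneous bone-conducted sound of their own voice, which is why trimming even a millisecond or two from a vocal chain can matter.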

Location Services

The Haas effect also affects where we perceive a sound to be coming from – the supposed location of the source is determined by the sound which arrives first, even though the sounds may come from two different physical locations. This holds true until the second sound is around 15 dB louder than the first, at which point the perception of direction changes.

Sound localisation is a very complex mechanism performed by the human brain. It’s not only dependent on the directional cues received by the ears; it is also intertwined with the other senses, especially vision and proprioception. Our ability to determine a sound’s location and distance is called binaural hearing, and in addition to all the psychoacoustic effects discussed so far, it is heavily influenced by the physical shape of our heads, ears, and even torsos. The outer ear or ‘pinna’ functions as a directional sound collector which funnels sound waves into the ear canal. The head and the topography of our face and torso influence how sounds from any position other than a 0° angle are heard, as they create an acoustic ‘shadow.’ Our brains process the differences between the information that our two ears collect, and interpret the results to determine where a sound is coming from, how far away it is, and whether it’s still or moving. At lower frequencies, below about 2 kHz, this is mostly determined by the inter-aural time difference; that is, the discrepancy in time between when the sound reaches each ear. Above 2 kHz the information gathered comes from the inter-aural level difference; that is, the discrepancy in volume between the sound that each ear hears. This clever evolutionary adaptation is due to the relative lengths of sound waves at different frequencies. For frequencies below 800 Hz, the dimensions of the head are smaller than half the wavelength of the sound waves, so the brain can determine phase delays between the ears.

However, for frequencies above 1600 Hz the dimensions of the head are greater than the length of the sound waves, so a determination of direction based on phase alone is not possible at higher frequencies; instead, we rely on the level difference between the two ears. Together, these two binaural mechanisms are known as duplex theory, and they play an important role in sound localisation in the horizontal plane.

(As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound’s lateral source because the phase difference between the ears becomes too small for a directional evaluation, hence the experience of sub-bass frequencies being omnidirectional.)
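The head-size argument above reduces to a one-line comparison: time (phase) cues stay unambiguous only while half the wavelength exceeds the width of the head. A quick Python check, assuming 343 m/s for the speed of sound and a typical inter-ear distance of roughly 17.5 cm (both assumed figures, not from the article):

```python
# Duplex theory in one comparison: inter-aural time (phase) cues work
# while half a wavelength is longer than the head is wide; above that,
# level differences take over.

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C (assumed)
HEAD_WIDTH = 0.175       # m, a typical adult inter-ear distance (assumed)

def half_wavelength(freq_hz):
    """Half the wavelength, in metres, of a sound at freq_hz."""
    return SPEED_OF_SOUND / freq_hz / 2.0

for f in (80, 800, 1600, 8000):
    hw = half_wavelength(f)
    cue = "time/phase (ITD)" if hw > HEAD_WIDTH else "level (ILD)"
    print(f"{f:>5} Hz: half wavelength {hw * 100:6.1f} cm -> {cue}")
```

This lines up with the figures in the text: at 800 Hz the half wavelength (about 21 cm) still exceeds the head width, while at 1600 Hz (about 11 cm) it no longer does.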

Whilst this phenomenon makes it easy to sense which side a sound is coming from, it’s harder to determine direction in the up/down and front/back planes, due to our ears being placed at the same horizontal level as each other. Some types of owl have their ears placed at different heights, to allow for greater efficiency in finding prey when hunting at night, but humans have no such facility. This can result in ‘cones of confusion’, where we are unsure as to the elevation of a sound source because all sounds that lie in the mid-sagittal plane have similar inter-aural differences; however, once again the shapes of our bodies help us out. Imagine a sound source is right in front of you. There is a certain detour the torso reflection takes and hence a certain difference of this torso reflection in relation to the direct sound arriving at both ears. This yields a slight comb filter pattern which will change if you elevate this source. The same is true if this source is now moved behind you; the torso reflection changes and our brains process the information discrepancies to help us locate the source.

Next time: In the third and final section of this series on using psychoacoustics to enhance your monitor mixing, we’ll discover a ground-breaking new technology that takes IEMs to a whole new dimension.

University of Crash and Burn – Rebecca Wilson

Rebecca Wilson is an industry veteran, having worked in live sound for over 25 years. Touring solely as a monitor engineer, except for a brief stint as FOH engineer for Fleming and John (while out with Ben Folds), she has worked for Sound Image, at Humphrey’s by the Bay, and toured with Nanci Griffith, Wilco, The Bangles, and more. Rebecca is based in New York and, after finishing a 15-month project for Tibet House US, the Cultural Center for the Dalai Lama in NYC, has moved into the position of head audio engineer for the TED headquarters and TED World Theatre.

Rebecca grew up in ‘cow-town’ Colorado and says she had limited exposure to the arts. She took piano lessons from the age of five until age thirteen, when her musical training was stopped short: “I got kicked out of the piano school for pressing a million short yellow pencils into the styrofoam ceiling of the composition room. I thought they were neat wooden stalactites, but my teacher didn’t.”

When asked about what drew her to audio, Rebecca says, jokingly, “hot band guys of course.” In fact, she always had a thing for audio, spending hours using her father’s voice recorder as a child. “I’d steal the batteries from my friends’ TV remotes for it and record endless stories, listen back, and then re-record them.” When she got the chance to push her first fader, she became love-struck with the idea of being the moderator for people’s sonic experience. “Plus, getting DJ rights between sets is fun.”

Like so many in the music industry, Rebecca fell into it by chance. During her freshman summer break, she lived with her brother in Hawaii and worked as a cashier at the Hard Rock Cafe. It was there she met a roadie from KC and the Sunshine Band, and she asked him how she could get his job. He told her that if she could leave right now, she could have it. He mixed FOH, and they talked. Since she was attending college, he suggested she work for their campus performing arts center: learn sound, and get paid.

Rebecca answered a projectionist ad that fall when she returned to Colorado State University. It turned out to be more: the performing arts center which did the screenings had just purchased a brand-new PA in components. “I learned to use tools at that job, cut holes for the enclosures, load drivers, and wire the new racks.” She started to learn about signal path and processing and got to mix her first band, punk band Seven Seconds, in the university’s beer basement. “It was so awful and riddled with monitor feedback that they stopped mid-show and asked if anyone in the audience knew how to run sound, because it was obvious that she (and they pointed at me) had no idea. Pin drop. I was mortified and totally hooked on understanding audio.”

After getting her BA from CSU in Communications, Rebecca worked for local venues as a stagehand, pushing speaker cabinets and loading trucks. She requested to assist the audio crew and then asked enough questions to drive the engineers crazy. She then got work at local sound companies where she attended the ‘University of Crash and Burn and Get Up Again.’ “My first real boss was really drunk all the time, and he’d let me mix even though I didn’t know what I was doing.” Rebecca has never taken a live sound class and does not support expensive schools that spit you out without real experience. She says, “all you need to know you can learn by starting at the bottom and can get paid for it.”

A year after school, Rebecca had saved enough money from stage-handing and freelance audio work to move. She packed up her 20-year-old Toyota Tercel and drove from Denver to San Diego. “I wanted to learn to surf, [it] seemed like a good plan − I’ve never liked wearing shoes.” After a week on a friend’s couch, she found a house and a job working freelance for a large corporate production company, Meeting Services Inc., doing hotel and AV work. She then got a house gig at a venue called ‘4th&B’ in San Diego, and it was there she met Fishbone. “They came through, liked what I did, and I left the following week on my first tour.”

Touring through the South at 21 years old opened Rebecca’s eyes. On a bus with ten black guys and one white guy, she had a gas station attendant refuse to sell her cigarettes. “He said they were sold-out of my brand while I pointed to a pack of them behind the counter. He said, ‘those aren’t for sale.’ I had no idea what was going on.” When she told the band what had happened, they semi-laughed. “You got off the bus with us,” Angelo explained. The band told her that in the South some people saw her as worse than a ‘n*****’ because she worked for ‘n*****s.’

Rebecca toured with Fishbone for two years, honing her audio chops. “Mixing a club tour tends to be your first step as an engineer. It’s hard knocks: different gear, various spaces, nothing is ever the same. A lot of club gear is broken, blown and bad, but that’s where I learned the most about phase reversal and troubleshooting. Club tours can accelerate one’s understanding of gear, rooms, stage volume and band dynamics, and how they correlate. Invaluable.”

Over the next few years, she would work for unknown R&B artists whose labels told her that they were going to ‘blow up.’ “I learned very little about audio during that time, as there were many promises for support tour slots with large artists, but none of that ever happened. All that ‘blew up’ was my credit card bill when I couldn’t get paid.” She ended up getting a house monitor gig at Humphrey’s by the Bay through the audio company Sound Image.

At Humphrey’s, she got to mix folk legend Nanci Griffith, and a week later Rebecca was on her way to Nashville for rehearsals for the Newport Folk Festival Tour. Nanci Griffith had a large band with around 50 inputs, mainly playing sheds. During the tour, she met Wilco, who were traveling with little production, and they ended up hiring her for the run. Rebecca found it incredibly rewarding working with such amazing artists and talent. After that tour, she became the ME for The Bangles and would work with them for over ten years. Rebecca says, “I love them; fantastic people and musically wonderful. Truly.”

Rebecca has just wrapped up a 15-month audio contract job as Media Director for Tibet House, the Cultural Center for the Dalai Lama in NYC. “It was the first 40-hour-a-week ‘job-y job’ that had a title beyond ME, FOH, A1 or A2.” She took 30 years of analog recordings and converted them to digital, installed a webcasting system in their event space, and started broadcasting meditation classes and programming. “I was relatively unqualified for the archiving bit (I’ve never been a DAW whiz), so I researched best practices for file transfers and archiving. I also learned audio compression and ID tagging for the web, along with streaming and international content delivery networks. The webcasting part felt more familiar; it was live, mixing audio and switching cameras for broadcast. There were some initial hiccups: internet bandwidth issues, an audio aux crapping out. I learned that webcast audiences are less frustrated if the picture is compromised rather than the audio: if they could still hear the webcast fine, the phones didn’t ring with complaints. Of course, I’m a bit partial, but I believe that audio is more important than lighting and/or picture. Radio told visual stories before TV was even on the scene.”

When asked what her goals are now, Rebecca says that “beyond living through President Trump, I’d like to keep building my skills in still photography. My website is www.rebeccawilsonstudio.com. I also write screenplays.”

What, if any, obstacles or barriers have you faced?

“I’ve put more pressure on myself to measure up as a ‘soundman’ than any man ever has. The biggest obstacle I’ve ever faced is my own inner critic, feelings of inferiority, and the fears of being broke, old and alone (but with a good pair of headphones of course).”

How have you dealt with them?

“I had to come to terms with the fact that just because my life looks different than most, it doesn’t mean there’s anything wrong with me or my decision-making. I still consistently throw myself into new, unfamiliar aspects, [both] audio and things in general. It’s a terrifying and sometimes stressful way to live, but I hear people express they’re afraid to die with a lot of ‘should have’ regrets, that they didn’t step off the pavement. I’ll certainly die with a few extra stress wrinkles, but smiling. No regrets. I would encourage anyone reading to not fall for the safety net trap. Of course, be smart, but if you’re on the fence, just do it. No personal richness or outward success ever comes without some humiliation and failure. Learning to use it positively is key.”

What advice do you have for other women, and young women, who wish to enter the field?

“Speaking from a live concert sound perspective, check your motive for going into audio engineering. Is it sensational-based? Or do you have an affinity for sound waves moving through the air? You don’t need to know until you get a little experience in it, but it’s something to think about. I feel a strange timelessness and focus while listening to music critically. Early on, I think I heard the world more than saw it. My stuffed animal horse was named Beep.

I’d like to add that when you mix and tour, there’s a good chance you’ll be tired a lot. The job is physically demanding, lots of lifting, pushing and standing, [and] lots of bruises and cuts if you’re me. The other day my boyfriend joked that I have construction worker hands. I went into the bathroom and cried a little. Then I gave him a kick-ass back massage. I love my hands and ears. They’ve enabled me to travel the world.”

What do you like best about touring?

“Seeing the similarities of how humans inhabit the Earth.”

What do you like least?

“Who I became on tour (1996-2000 at least). Touring gave me a professional excuse to separate from all the people and circumstances I didn’t want to deal with at home. I lived with complete impunity and isolation (it was the late ’90s, before cell phones). It turned out, being ‘on tour’ and ‘unreachable’ didn’t bring me any peace or freedom like I’d hoped, because the problems weren’t back home. The problems were how I saw the world; more specifically, what I like to call the ‘golden carrot syndrome’: the belief that the better place or thing was just around the next corner. I was always in malcontent mode; I wasn’t a fun person. I lived on cigarettes, Red Vines, coffee, booze and breath mints. I’d become the touring ‘shot-out’ cliché by [age] 26. I took a year-ish off in 2001 to regroup. I was happy to find that the problem was me and not everyone else. I went back to touring without all that baggage. Two different lives doing one profession. Lucky. Wouldn’t change a thing.”

What is your favorite day off activity?

“I’d made a rule early on not to get aboard anything that went faster than 15mph on off-days. I walk a lot, lay on grassy mounds in quiet parks, I like to troll a place without a plan, try to get a feel for how it connects to the last place we just were. See art museums and do yoga. Whatever is quiet.”

Must have skills:

“Tenacity and kindness. If you’re a woman, don’t get hard and crass. Leave that to men.”

Favorite gear:

“Sennheiser G-series in-ear monitors with 3D molds by Sensaphonics. They change lives. Westone’s triple-driver pair is good too.”

Parting Advice:

“If you’ve read this far, you’ll probably really make a great soundperson. You’ve got dedication and longevity. Here are some non-technical things I’ve found that matter more than audio mathematics and algorithms ever have:

“The most valuable thing I can give a musician, besides a feedback-free stage, is my full attention. Try to always be scanning the stage for someone who needs something. During soundcheck I walk the stage and stand right behind them, listening from their perspective. If the band doesn’t soundcheck and they struggle to hear, ask about the experience after the show. Be prepared for criticism, but communication is fundamental as an ME. If you don’t understand what they’re describing, keep asking questions. Lots of musicians lack the vocabulary beyond ‘tin can sound.’ Help them find words for what they are experiencing. If you can’t dial in a solution, look into a new piece of gear for it.

“Also keep in mind, some days nothing will sound good to them (or you) − don’t reset the console mid-tour because someone is hungover. The greatest enemy of ‘good’ is ‘better.’ If everyone is happy onstage, I don’t turn knobs, especially if I think ‘this will make it a little better.’ It’s a rabbit hole and changing things mid-song can really upset things onstage. Let the artist guide you. Once I get it up and running, on a good day, there’s not much to do.

“If things go wrong, GO OUT ONSTAGE. MEs are usually tucked away listening in closets, but as far as onstage crew goes, it’s our job. If the problem isn’t in your department, get the person who is responsible; if it is your department, trust that you’ll know a solution. One always comes. It’s a bit of gypsy stage magic. Don’t let the artist struggle out there alone.

“MEs are there for two main reasons: One, to allow artists to connect to their instrument and what their bandmates are playing. Two, to give them [the artist] the personal sonic confidence to stand in front of a HUGE CROWD of people who paid a lot to hear them. Imagine how you’d feel if people shelled out a lot of money to hear you play and you couldn’t even hear what you were playing. I’ve been hit in the head with flying drumsticks and bottled waters because I was staring at the console for minutes on end. It’s a serious thing they and you are doing. Artists are out there totally exposed. It’s your job to give them clothes. Even if I have a bad feeling about a certain gig or day, I try and be calm when they come onstage; it transmits to them. Don’t do crack or Red Bull.

“Lastly, always have a spare vocal mic with a long cable. Always. And check it.

“Over and out − Rebecca.

“P.S. when you’re starting out, never call a cable a cord.”

More on Rebecca

Driven To Excellence: Inside The World Of Multifaceted Audio Professional Rebecca Wilson

Rebecca Wilson on Roadie Free Radio


Defence Against The Dark Arts – A Monitor Engineer’s Guide to RF

In my last couple of posts, I talked about the process of getting ready for a new monitor gig, from getting the call, right up to dialing the band’s mixes in. I touched briefly on RF, but it’s a big topic, and one that merits its own post, especially in a monitoring context. In this post, we’ll look at the basic principles which will give you a good foundation for a clean radio platform. As RF is a complex subject, it’s beyond the scope of this article to go into great detail, so I’ll also offer a few links that will give you more in-depth information about the science behind it all – and it is science, as much as you’ll hear it referred to as a dark art! I advise reading up on it as much as you can within an audio context, but there’s no need to get caught up in the math beyond a basic understanding unless it interests you and you plan to specialise. I also recommend attending the training days that are sometimes offered by major manufacturers like Sennheiser and Shure, as they give you a great chance to ask questions face to face. But for the basics that will serve you well, here we are – a monitor engineer’s guide to RF.

Firstly, make sure you have the right tools for the job.

Just because a transmitter and receiver from different manufacturers are in the same frequency range, it doesn’t mean they’re compatible. Compansion (compression > expansion) is the process by which a signal is compressed before transmission, and then re-expanded in the receiver. It’s important for the compansion circuitry in a system to be compatible with its ‘other half’, for optimum performance and signal-to-noise ratio, so make sure your transmitters and receivers are designed to be used together.

Choose the right antennae.

That usually means directional paddles rather than ‘twigs’ (the omnidirectional whip antennas) for radio mics, and if they’re active, set them to the lower gain setting. (Higher gain means they pick up over a greater area, but they pick everything up, not just the frequencies you want, and 3dB is ample for most stage applications.) A helical or ‘bubble’ antenna for IEMs offers superior reception to a paddle, but be aware of the polar coverage – typically a 40-degree cone shape – so keep that in mind when you position it.

Minimise connections.

Every connector in the path of an antenna cable results in some RF signal loss, so avoid extending RF cables and using excessive adaptors and panels.

Maintain direct line-of-sight between transmitters and receivers.

An antenna that’s tucked around a corner and can’t ‘see’ the stage won’t do its job well, and keeping an artist’s IEM pack antenna on the outside of their clothing is good practice where possible. You may have to negotiate with the wardrobe department if you’re doing a costume-heavy show, but it’s very normal for wardrobe to make a little fabric pouch for the pack to sit in.

Use the right cables.

It’s easy to mistake a BNC cable that’s intended for the back of a desk (i.e. MADI) for an RF cable, as they have the same connectors – but they have different impedances and you need to keep them separate. RF uses 50-ohm cable; digital data uses 75-ohm. It’s also worth using a specific low-loss cable such as RG-213 with N-type connectors for IEM transmission – it’s thicker than standard BNC cable and loses less RF signal, which is especially useful in circumstances where you have no choice but to run longer cables.

Keep those cables short.

An RF signal would always rather travel through air than cable, so keep cable lengths to a functional minimum – never use a 10 metre cable if a 5 metre will reach. If you need more than 10 metres, reassess the positioning of your racks to see if you can get them closer.

Get high.

Height is your friend when it comes to antenna placement, so take stands up to their fullest extension. Diversity receiver paddles for radio mics can be close to each other – a minimum of half a wavelength is good practice, and the wavelength of a 700 MHz signal is around 43cm – so a T-bar on a single stand is fine. Keep some distance between receiver paddles and your IEM transmitter antenna though – I usually put my IEM antenna nice and high near the downstage edge of my desk, and the receiver paddles at the upstage side.
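The half-wavelength spacing rule comes straight from wavelength = c / f. A quick Python check over some illustrative UHF carrier frequencies:

```python
# Antenna spacing from wavelength = c / f.
# The frequencies below are illustrative UHF examples.

C = 299_792_458  # speed of light in m/s

def wavelength_cm(freq_mhz):
    """Wavelength in centimetres for a carrier given in MHz."""
    return C / (freq_mhz * 1e6) * 100

for mhz in (500, 700, 900):
    wl = wavelength_cm(mhz)
    print(f"{mhz} MHz: wavelength {wl:.0f} cm, half-wave spacing {wl / 2:.0f} cm")
```

So at 700 MHz the wavelength is roughly 43 cm and the half-wave minimum spacing about 21 cm, which is why a T-bar on a single stand is comfortably adequate.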

Set your squelch.

Squelch is a muting mechanism that silences the audio output of your receiver should an erroneous signal cut across it. This is a good thing – that signal can be a lot louder than the desired one (i.e. your IEM mix) and can give the listener a nasty blast of noise. We want to set the squelch low enough to allow our desired signal through, but high enough to keep out the uninvited. Around 7-11 dB is a good all-rounder – if you set it too high, the desired signal will also be muted more easily when your artist moves further away (because of signal loss).

It’s not enough to simply have clear spectrum

(i.e. nothing else transmitting) around your frequencies. Not all frequencies play nicely together; they can intermodulate – a phenomenon whereby they interfere with each other, even though they may not be close in range. Most manufacturers’ equipment will therefore have preset ‘groups’ of frequencies that are compatible, and there are also charts available, as well as software that can calculate compatible frequencies for you. When everything is set up and tuned, you can check for intermodulation by switching every transmitter and receiver on, then switching one transmitter off at a time, checking that the associated receiver has lost all RF signal, and switching it back on before repeating the process with each transmitter/receiver pair. A little tip here – make sure that your radio mics are not all sitting in a pile. The proximity will make them intermodulate no matter how compatible the frequencies, so spread them out on a work surface with their antenna ends pointing away from each other.
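For some intuition about why frequency choice matters, the dominant troublemakers are the third-order products at 2f1 − f2 and 2f2 − f1. Here is a rough Python sketch that flags products landing near a planned frequency; the frequencies and the 100 kHz guard band are illustrative only, and real coordination should use the manufacturer’s calculation software:

```python
# Third-order intermodulation products (2*f1 - f2) for every ordered pair,
# checked against the planned frequencies themselves. All values in MHz.
from itertools import permutations

def third_order_products(freqs):
    """All 2*a - b products for every ordered pair of distinct frequencies."""
    return sorted({2 * a - b for a, b in permutations(freqs, 2)})

def conflicts(freqs, guard_mhz=0.1):
    """(product, planned frequency) pairs closer than guard_mhz."""
    return [(p, f)
            for p in third_order_products(freqs)
            for f in freqs
            if abs(p - f) <= guard_mhz]

# Evenly spaced carriers are a classic mistake: their third-order
# products land right on top of the plan itself.
print(conflicts([606.0, 606.8, 607.6]))   # two direct hits
print(conflicts([606.0, 606.5, 607.3]))   # unevenly spaced: no conflicts
```

Switching each transmitter off in turn, as described above, is the practical on-the-day verification of what this arithmetic predicts.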

Be aware of the effect that LED screens have on RF

They transmit low-level interference, so you may need to play around to find the optimum antenna placement. If it’s just a single backdrop screen it shouldn’t be too bad, but if you have an entire stage made from an LED screen, as on one tour I did, you may need to enlist the help of an RF expert who can fix you up with a high-powered booster for your transmission.

Be aware that RF hates metal.

(Not the music – it’s quite a fan of many hard-rock bands, I believe…) No, RF hates to touch metal hardware, so keep packs off metal belts or costume parts, and make sure antennas aren’t resting on metal walls or truss. It’s all down to the same shielding phenomenon that makes a Faraday cage work.

Get the right tools 

I highly recommend investing in a hand-held scanner if you tour and use RF regularly. In some places you’ll switch your receivers on and see a hot mess of RF coming from who-knows-where (TV stations and cellphones have a lot to do with it!), and it saves you a whole heap of trouble to get a visual of what’s going on in the spectrum rather than flying blind. Then you can look for the ‘quiet’ gaps, and plan your frequencies accordingly.

Finally, use your ears!

RF is a science, but the end-user – your artist – is a piece of biology! Test out their experience before you hand them their RF equipment – walk the performance space with their pack (not a PFL pack on engineer mode – that won’t tell you if there’s anything wrong with their hardware) and talk to yourself in their mic the whole time – that way you’ll experience any problems for yourself and have time to fix them before they walk on stage, so they have a happy, peaceful RF time up there. And we know what happy artists make, right? Happy monitor engineers!
