
5 Sound Design Sketches 

Sound design is as much an art form as painting, sculpting, acting, dancing… insert any of the visual or performing arts here. One could argue that sound design is itself a performing art! Effects, ambience, and music all have a huge impact on how a story lands for an audience. It is the sound designer’s job to shape all of these parts so they interact with each other appropriately and have a profound emotional impact.

In the same way that actors do readings or visual artists sketch for practice, we sound designers also need to practice our craft independently. This article contains ideas for making sound design a consistent, self-motivated practice. Rather than going into technical how-tos, the following ideas assume the reader knows the basics of audio engineering and thus focus on creativity and inspiration instead.

Record everything.

This is perhaps the most accessible exercise you can undertake, as well as the most practically useful. Through a consistent recording practice, we sound designers tune in to the world around us while building our sound effect libraries. Handheld recorders are not that expensive, and in the worst-case scenario you can even use the voice memo app on your phone. Ideally, though, you have a handheld recorder and are capturing in stereo at minimum.

Take your recorder everywhere. Start by grabbing ambiences. Really tune in and listen to what is happening in the world around you while you are recording, and do not stop until there is a lull in the action. (For example, it would be unfortunate to stop a recording of a street in the middle of a car passing by.) When recording ambiences that stay fairly consistent, such as walla, some nature ambiences, or even room tone, grab no less than one minute – and even that may be too short sometimes. Capture any less and future loops will be tricky to edit or will sound boring.
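To see why length matters for looping, here is a minimal Python sketch (using NumPy, with generated noise standing in for a real recording) of the classic tail-to-head crossfade that turns an ambience into a seamless loop:

```python
import numpy as np

def make_seamless_loop(samples, sr, fade_s=1.0):
    """Crossfade the tail of a recording into its head so playback loops cleanly."""
    n = int(sr * fade_s)
    head, tail = samples[:n], samples[-n:]
    fade_in = np.linspace(0.0, 1.0, n)
    # The tail fades out underneath the head fading in.
    blended = tail * (1.0 - fade_in) + head * fade_in
    # The blended chunk replaces both the raw head and the raw tail.
    return np.concatenate([blended, samples[n:-n]])

sr = 48_000                                 # 48 kHz sample rate
rng = np.random.default_rng(0)
ambience = rng.normal(0.0, 0.1, sr * 60)    # 60 s of stand-in "ambience"
loop = make_seamless_loop(ambience, sr)
print(len(loop) / sr)                       # 59.0 -- one fade's worth shorter than the source
```

The crossfade consumes one fade’s worth of material, so a longer capture leaves more usable loop; a too-short recording leaves nothing to blend, which is exactly why the one-minute minimum is a good rule of thumb.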

Tune in to the world around you for singular things that sound interesting close up too. Maybe a crosswalk signal makes a unique sound or you have a loud washing machine; get right up to it and record it.

Go beyond spontaneous recording and also make time to create noise. Breaking celery can make a great sound for breaking bones; a suitcase latch can be a safety mechanism on a pistol; squeezing cooked mac and cheese in your hands could be a basis for gore sounds. There is so much magic in recording an everyday object and making it sound like something else.

Do a sound design for a video clip.

The only way to get better at sound design is to do it as much as possible. Going back to our visual artist analogy, sound redesigns are to sound designers as pencil sketches are to painters. They are our artistic exercise when we are not actively working on a paid project.

A video clip can be from anything that excites you, such as a movie or a playthrough of a video game. Try to find something that is not too current, as people who see the clip may have expectations for how it sounds based on recent memory. The cool thing about this exercise is that the end goal can be whatever you want it to be. Once in a while, do a thirty-second to one-minute clip as a full sound design for a portfolio piece; but the real practice comes in taking even just a five-second clip and digging into one element, whether that is weapons, foley, sci-fi, vehicles… When you focus on one element of a video clip, it is important to record, process, and otherwise create all the sound effects from scratch. Do not pull sounds from a library unless an element is impossible to create on your own, or the library sound is only one layer within an effect; the point of the exercise is to practice creating sounds from the ground up.

To level this exercise up, create redesigns of the same clip for different genres. What do the objects in the video sound like in a rom-com? What if you took the same clip and designed it as a psychological thriller?

Sit down and play with a new plugin or piece of gear for 30 minutes.

To put it bluntly: there is no point in having five analog synths or fifty reverb plugins if you are not familiar with any of them. You can learn a lot by penciling in a set amount of time to learn something specific. Your work becomes so much stronger when you adequately know your tools, and it is far more effective to have a few plugins that you know well than a bunch that you do not know how to use!

Create a sound design prompt with ChatGPT.

Type in “sound design prompt” – literally. ChatGPT will spit out a suggestion with a ton of detail, or maybe not much at all. Hit “generate” until it gives you something inspiring yet challenging. Then, time-block this exercise too. Give yourself two to four hours and go nuts. Approach it however you want: work in a new DAW, or don’t; record sounds from scratch, or don’t. Really treat this like a “sketch,” make something detailed and unique, and see what you can come up with in just a few hours. Give yourself a bonus by purposefully integrating something new, as discussed in the previous “sketch.” Perhaps you aim to use a new feature or keystroke in your DAW, dig into a plugin, or try a new recording technique.

Listen.

This is an exercise many new sound designers take for granted, and perhaps one everybody fails to do in our fast-paced existence. Really listen. Tune in and take mental or written notes. Here are some considerations, by no means an exhaustive list.

In real life: What do you hear around you at any given moment? Where are different sounds coming from?

When watching movies or television: How are different elements balanced in the mix? Can you identify the low, mid, and high-frequency layers of sound effects, and how do you think they were made? What do you think the editors used for different effects? How is sound used as a thematic and storytelling tool?

When playing video games: all the same things as for movies, and even more. How does the music change when you enter a new room, have low health, etc.? How are things spatialized? How is sound utilized to give players feedback? How did the sound designers keep events that play over and over from sounding boring?

When listening to podcasts: When is scoring used? How are scenes built with sound? How is sound used to transition from one place to another?

When listening to music: Can you pick out all the instruments? When are individual instruments not playing, and when do they come back? How are instruments panned? What effects are used? Zone in on one instrument and pay attention to what it does for the whole song. And – how does the song end?

Take any one of these ideas and do a little bit every day. You would be surprised how many projects you complete and how much you improve over time! As with anything, the hardest part is getting started. Remember that the Mona Lisa was not painted in a day, and each “Star Wars” movie took months or years to sound design and mix. Leonardo da Vinci and Ben Burtt practiced their art their whole lives leading up to and past those accomplishments. Start now, and give yourself permission to just practice without expectations, experiment, and have fun along the way.

Designing with Meyer Constellation

Using an array of ambient sensing microphones, digital signal processing, and world-class loudspeakers, Constellation modifies the reverberant characteristics of a venue and redistributes sound throughout the space, ensuring a natural acoustic experience. I am very fortunate to have had the opportunity to design with this system. The Krannert Center for the Performing Arts recently had one of these systems installed in its Colwell Playhouse Theatre. In this article, I will go over how I designed this system for the 2021 November Dance show, how I utilized the 100+ speakers, and how I shaped the environment of each dance piece.

I began the design process by grouping my outputs into zones where they could fulfill a certain purpose. In CueStation, Meyer’s software interface for their D-Mitri systems, these groups are called buses. I utilized a total of ten buses and over 80 speakers out of the original 127. The paperwork, and making sure things were clear for my engineer, was a new challenge. This system is large, and I found that color coding and adding legends with further notes helped represent not only the system I needed, but also the system that would become the world for the show, the audience, and the art the dancers were bringing into the space.
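As a sketch of what that bus paperwork can boil down to, here is a hypothetical zone list in Python; the names and speaker counts are illustrative, not the actual November Dance patch:

```python
# Hypothetical bus/zone paperwork for a large installed system.
# Names and counts are made up for illustration only.
buses = {
    "House Left": 14, "House Right": 14,
    "Rears": 12, "Ceiling": 18,
    "Subs (flown)": 6, "Center": 4,
    "Front Fills": 8, "Stage Monitors": 6,
    "Surround A": 5, "Surround B": 5,
}

total = sum(buses.values())
print(f"{len(buses)} buses, {total} of 127 outputs in use")
# List zones largest-first, the way a legend might order them.
for name, count in sorted(buses.items(), key=lambda kv: -kv[1]):
    print(f"  {name:<16}{count:>3}")
```

Even a simple tally like this makes it easy to sanity-check a patch against the venue’s total output count before tech begins.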

These zones allowed me to create a truly immersive experience with the sound. I was consistently using the House Left and Right sides, Rears, and Ceiling buses. However, what I loved the most was the Sub bus. Rather than using the onstage subs with the arrays, I opted to use the installed flown subs. What I have experienced in previous designs is that I prefer the encompassing blanket of sound that subs give when they are flown from a distance. I really didn’t want to localize them to the stage. I did, however, use the Center and Front Fills buses to draw more attention to the stage and dancers. I found that I preferred this balance of sound and the image that is created as an audience member.

I also found that the color coding, legends, and graphics really helped me keep track of this system. It felt daunting at first, but this breakdown allowed me to easily manage all of my outputs. The dance productions here don’t get a ton of time for the tech process, so this setup helped me adjust levels quickly and not get bogged down. I hadn’t worked with this software for a show before, and it comes with a learning curve, yet I needed to stay productive throughout the entire rehearsal process.

Playback also works differently in Meyer’s CueStation: cues are often triggered and played back in Wildtracks, which uses decks – virtual decks, that is. It felt reminiscent of my dad’s tape deck from when I was growing up. Even though the tech process for this production added several more decks and cues to my original paperwork, I will show you the initial documents and how I set up my playback.

Originally, each dance piece had its own deck. You can also see that each dance had a varying number of CueStation inputs. These are the Wildtracks inputs that I then assigned to my buses of speaker zones. For Anna and Jakki’s pieces, I received stereo files. Though this was less than ideal, I still sent the music to the buses and crafted a great sound for each piece. Since I was the designer for Harbored Weight, I had more opportunities to work with stems and individual tracks to send and pan around the room.

This is the kind of world I like to think and live in as a designer. There was a fourth dance, titled Love, that used only live music: a single cellist mic’d on stage. Harbored Weight also had a live pianist accompanying the dancers. With CueStation, I was able to take the mic’d signal from these instruments and send it to my buses as well, whether for onstage monitoring for the dancers or artistically in the house. What I discovered, though, was that I could achieve a beautiful presence in the house with the other half of this design – which involves Constellation.

I sculpted a unique Constellation setting for each dance piece. This information was saved within each CueStation cue and thus recalled at the top of each dance by the stage manager. Most of the choreographers wanted a large-sounding reverb. One, in particular, asked for something as close to cave-like as possible. I love these kinds of design requests.

Not only was I able to start with a base setting like ‘large hall’, but I was also able to adjust parameters like early reflections, which really helped create a huge, immersive-sounding space. I was up against a learning curve, though. I realized that with the Constellation cue still active, audience members applauding at the end of a dance would have their claps accentuated and echoed around the theatre. I found this cool sounding, but obnoxious. This meant programming more cues and using more Wildtracks decks to turn off Constellation at the end of each dance.

Then there are the designated microphones that capture the sound that makes Constellation processing what it is. For Donald Byrd’s piece Love, I was able to put an already beautiful cello sound through the processing system and hug the audience with its warmth. This helped for a few reasons. The dance was set to several Benjamin Britten pieces, and it was just the cellist and dancers on stage. One cellist can sound small in a large theatre, and the choreographer really wanted a big, full sound. I mic’d the cello with a DPA 4099, but I also used the ambient microphones to capture the instrument and send the signal through the Constellation processing and the unique patch I had created. I designed a warm, enveloping sound that was still localized to the stage and gave the illusion of a full orchestra.

My design for the 2021 November Dance did not incorporate Meyer’s Spacemap side of Constellation. I was able to do everything artistically that I wanted, and that the choreographers needed, without it. I do look forward to using it in future designs, though. If this article intrigues you, I highly recommend looking into Spacemap as well as Spacemap Go.

I love that I can find ways to be a designer and be artistic outside of the typical realm of what it means to be a sound designer. I challenge the idea that crafting a sound system that shapes the sound we play through it isn’t artistic; I think this article shows that this way of working is in fact art. Dance often defaults to left-right mains with onstage monitors and side fills, but contemporary dance is pushing that envelope. Sound designers and other artistic minds need to be there to receive those pushes and birth a new way of making art, much like how Meyer continues to develop innovative tools that help us be better artists and better storytellers.

Photo credit goes to Natalie Foil. All other images within this article are from my personal paperwork for the 2021 November Dance production.

 

Twi McCallum – Sound Designer

Twi McCallum works in sound design for theater, post-production, audiobooks, and commercials. She has been freelancing since 2018 for Broadway, off-Broadway, and regional theatres. Twi recently started working full-time at Skywalker Sound in sound editorial and will be relocating from NYC to the Bay Area.

Twi grew up in Baltimore and worked throughout high school at the National Aquarium, where she learned ocean conservation and marine biology. During the summers, they created a play that was performed at local libraries, writing the script and creating the costumes, backdrops, props, and music. This was Twi’s introduction to theater. She went on to attend Howard University, where she found a class called TECH and became a crew member working behind the scenes for student productions. Twi remembers her first production: “my first tech assignment was as a dresser for the musical Anything Goes, and there was a moment during invited dress when I was standing in the wings waiting for my actor to come offstage for a quick change. I must have been standing in front of a speaker, because I suddenly felt a wash of sound effects and music cascade over my body, and although I knew nothing about speakers, mics, or engineering at that time, I knew that’s what I wanted to jump into.”

Twi was working towards a Theater Administration major, studying things like stage management, producing, and technical theater. “At the time, my focus was costume design, which is laughable now, but there were no sound design professors, and I failed my lighting and scenic design classes, which is why I dropped out of school and moved to New York.” Twi would eventually graduate from Yale School of Drama’s one-year sound program in May 2021, which was virtual due to COVID.

Her first job in NYC was a technical apprenticeship at a dance company called New York Live Arts, where Twi first learned the fundamentals of audio: how to stand on a ladder to hang a speaker, how to use a c-wrench, how to drop a file into QLab, what an XLR cable is, and the basics of a mixing console (a Yamaha DM1000). Twi says she knew she wanted to be a sound designer “because I was more moved by watching the dance performances than I was mixing them, and of course, getting yelled at as a mixer, because nobody talks to the sound person unless they need to scold.”

When the apprenticeship ended, Twi worked as a stagehand at the Manhattan School of Music while sending her resume to a bunch of theaters that, Twi says, “I was grossly underqualified for.” Her first design gigs were for Cape May Stage, TheatreSquared, and Kansas City Rep – all regional theaters that took a chance on her.

During COVID, Twi took a post-production internship at a foley studio called Alchemy, and because of that opportunity, she was immediately hired as an apprentice sound editor on two scripted television shows for NBC and STARZ, which allowed her to join Motion Picture Editors Guild Local 700. Those jobs qualified her to be hired at Skywalker Sound.

What did you learn interning or on your early gigs?

My one quirk is that I write everything down… when I’m at work I’m constantly scribbling in a notepad. My first job in New York was a technical theater internship (although criminally underpaid and abusive) at a dance company called New York Live Arts. It was my first time learning the basics of audio, and I still have my notebook from 3 years ago. I wrote down everything I learned…what does this button on the Yamaha DM1000 do, this is how many pins an XLR cable has, this is what a cluster is vs what a line array is. There is nothing embarrassing about needing to take notes, and there were times that it saved me because someone on the staff would ask me a question about the system that nobody else could answer but there it was in my trusty notebook! Even when I transitioned into post-production last year, I began keeping a typed journal of things I learned every day. My first professional television gig was as a sound apprentice on STARZ’s The Girlfriend Experience season 3, and the first thing my sound supervisor taught me was the importance of making region groups in ProTools for every episode. A year later, I still refer to those instructions whether I’m working on a professional tv show or an indie film.

Did you have a mentor or someone that really helped you?

My first mentor in theater sound was Megumi Katayama. There was a time in my life 2-3 years ago when I didn’t know any sound designers, and I was emailing as many of them as I could find to inquire about their process. Megumi was a recent Yale MFA graduate when we met, already making strides with sold-out productions. I told her that I wanted to apply to Yale, so she invited me to assist her on a production at Long Wharf Theatre, which allowed me to tour and interview at Yale for my application. To this day, she is still the only designer I’ve ever assisted.

My other theater sound design mentor is Nevin Steinberg, a legend known for mega Broadway shows like Hamilton, Hadestown, and Dear Evan Hansen. When I emailed him as a fan with no major work experience, he called me on the phone the next day, to my surprise, and since then he and I have talked at least every few weeks for the past 2 years, sometimes just to make sure I’m emotionally okay.

In post-production, my biggest mentors are Bobbi Banks (ADR supervisor), Dann Fink (loop group coordinator), and Bryan Parker, a Supervising Sound Editor at Formosa Group who spent 6 months training me in sound effects and dialogue editorial. As I begin a new journey at Skywalker Sound, I admire Katy Wood, who I plan to work closely with over the next year.

I would be remiss if I did not mention that mentors also show up outside of my craft as a sound designer. The folks who always recommend me for big jobs, introduce me to directors, and take care of me in the workplace are costume, scenic, and lighting designers like Dede Ayite, Adam Honore, David Zinn, Clint Ramos, and Paul Tazewell. I advise any sound girl to reach out to artists outside of audio to build a robust community.

Career Now

What is a typical day like?

In theater, I typically spend the two weeks prior to tech hands-on, preparing for a production. This includes chats with the director, conceptual meetings with the scenic and lighting designers, group production meetings, and visiting rehearsals as often as possible. I also do a lot of paperwork, such as cue sheets, console files, gear lists, and ground plans. Tech is typically 1-2 weeks long, and thankfully the theater industry is progressing away from the brutal “10 out of 12” workdays and six-day work weeks. Tech means stepping through every page of the script with all of the actors, fully encompassed in the design elements. Then there are usually 1-2 weeks of previews, which means a short rehearsal during the day to fix notes and a public audience performance in the evening.

How do you stay organized and focused?

My calendar is the key to me staying organized; Google Calendar works miracles. As lame as it sounds, I maintain a daily, weekly, monthly, and annual to-do list. An annual to-do list may feel like overkill, but you’ll feel rewarded when the holiday season arrives and you realize you accomplished a long-term goal you visualized 10 months prior. I am still learning to stay focused, while acknowledging that focus doesn’t need to look the same for everyone. When I’m working from home, I like sitting on my couch with my laptop and listening to my TV in the background so I don’t feel alone. The best advice about focus I’ve gotten from artists: spend 15 minutes every day in complete silence (from a costume designer), and try spending the first 1-2 hours after you wake up without any technology (from a playwright). Reducing social media usage has become critical for me, especially the drama of Instagram and Facebook.

What do you enjoy the most about your job?

What I love most about theater sound design is sitting in the audience watching my show and being swarmed with the real-time reactions of the audience. The laughter, claps, cries, and yells, especially as the result of a perfectly timed sound effect, assure me that I’ve done a great job. In theater, you will hear lots of designers repeat the theory that “the design is good when you don’t notice it.” I disagree, because there’s a line between noticing when your design is bad and noticing when your design is propelling the storytelling. I like to believe we go to the theater not only to notice the actors but to enjoy the physical world of the play (scenic and costumes) and the visceral world of the play (lighting and sound). I want the audience to notice my gunshots, earthquakes, music transitions, spaceship takeoffs, and alarm clocks, because they’re small yet inspiring parts of the bigger puzzle. For example, I designed a production of STEEL MAGNOLIAS at Everyman Theatre, and my director was adamant about the big gunshot moment, so I drove the point home and made it terrifying. I loved reading the nightly performance reports emailed by the stage manager, which noted the audience jumping and holding each other at the surprise of the gunshot.

What do you like least?

In theater, I dislike the lack of budgeting of time and money from producers, production managers, directors, and other folks in power. Money is always used as an excuse for why designers, including sound designers, cannot be given the resources, staffing, and pay to properly do our jobs. There’s also a disregard for equitable scheduling of pre-production, rehearsal, and tech that impacts our personal lives.

What is your favorite day off activity? 

I play a lot of zombie video games (team PlayStation), plus I spend time with my Goldendoodle and pet snails as my happy places in my personal life. I’ve been watching some television shows, which are new to me because I’m more of a film lover. It took me a month to finish The Walking Dead but it was worth it, and I love Money Heist, You, Pose, Judge Judy, Top Chef, and Squid Game.

What are your long-term goals?

In 5-10 years, my heart is set on being a re-recording mixer and supervising sound editor for big-budget film, television, and video games. I’m leaving behind theater sound design to transition into theatrical producing, so I can focus more on my post-production career. Eventually, I would love to teach sound design at an HBCU.

What if any obstacles or barriers have you faced? How have you dealt with them?

“Making it” is hard. However, I like to believe many of us make it over that hump eventually. What I wish someone had talked to me about 2-3 years ago is what happens AFTER “making it.” For me, the insecurities have not stopped. At 25 years old, and well-accomplished for my age according to other people, I am still comparing myself to others, taking it really hard when I don’t get hired for a particular show, and constantly wondering if I will maintain a career of longevity. And as a woman of color, surrounded by men as well as white women who have consistent streaks of accomplishments, I feel this sense of failure more often than people imagine. There are days that I cry, wonder if I should change careers, and question if I will ever outdo myself and my peers. It’s important that I’m real and honest about these things, because I know I’m not the only woman in sound to experience these growing pains. This is where making a self-care plan kicks in. Often we discuss self-care regarding busy schedules and needing time off from work, but self-care is also needed as a reminder to love ourselves and balance the highs and lows of our careers, even the lows that we are embarrassed to talk to other people about.

Advice you have for other women and young women who wish to enter the field?

Try your hardest not to take underpaid jobs. Even when you are first starting out, do not take a gig that does not pay at least the legal minimum wage. Money is important, despite our being in a craft where we’re supposed to love what we do unconditionally. Women are already underpaid and under-hired in sound, which makes us even more valuable. Companies that thrive on underpaid labor should not exist. The only places you should “volunteer” your time are schools, mentorship programs, and community theaters – all taken with a grain of salt, of course. If you ever need to weigh the tradeoffs of taking a certain gig, do not be ashamed to reach out to someone with experience for advice.

Must have skills?

The most important skill, in both theater and post-production, is being able to quickly learn software. This includes drafting software like Vectorworks and DAWs like Pro Tools. Once you learn the basics of the software you need for work, the next challenge is learning to use it efficiently. Shortcuts become important in the workplace, especially in post-production, where knowing a keyboard hotkey can save you 60 seconds of labor compared to navigating a menu for the same function. These skills are not simple to learn, so be gentle with yourself on this learning journey. There are manuals and flashcards for all the major software – even Pro Tools keyboard covers to purchase!

Favorite gear?

In theater, I love Meyer’s Spacemap Go. I implemented the software on my Broadway play CHICKEN & BISCUITS to help move music and atmospheric cues around the theater in 3D motion. In post-production, a similar tool is a plug-in called Waves Brauer Motion.


Design Thinking Strategies for Sound Designers


A few years ago, I attended a user experience design boot camp. That course taught me that UX is so much more than designing visuals for apps and websites. UX designers conduct a lot of user research to determine how an app should function, implementing what they call a “human-centered approach” to their decision making; that is, an approach that ensures the final product serves the user.

Since then, I have been meaning to write about the similarities between sound designers and user experience (UX) designers. Sound designers use design thinking strategies all of the time! Through careful analysis and experimentation, we consider the end-user product. For us, that’s usually a film, play, video game, podcast, concert, etc. Even though the tools are very different, the process is very similar. This article will examine the crossover between design thinking and the sound design process through the five phases of design thinking.

Phase 1: Empathize with the user

The first thing user experience designers do is evaluate and research user needs through a “discovery phase.” They conduct interviews with users about their specific needs and desires around a product. They may also send out surveys or observe users’ nonverbal interactions. What they are looking for is a problem to solve. This first stage is really systematic: although researchers have a specific topic to evaluate, they do not go into the discovery phase with a pre-determined issue. They find it through interacting with users. This makes for an unbiased approach, because the research is conducted objectively and no one is making assumptions about end users’ desires and needs. This academic approach allows for discovering users’ needs so that the end product will actually serve them.

If phase one for the UX designer is about gaining an understanding of the user, phase one for the sound designer is about gaining an understanding of the message, environment, and characters within an experience. The sound designer’s discovery phase involves reading the script and talking to the director about their intentions for the story’s message. They may also begin to look at the work the visual team has done to gain an understanding of the environment. Before talking to the team, the sound designer should have already read the script and begun to think about the message. However, they do not make any sure-fire decisions about how the experience should sound until after talking to the director and the team. Even if they have ideas, the sound designer keeps an open mind and conducts objective research.

In this sound design/UX design analogy, the director is the user, at least in this first phase. Much like UX designers, the sound designer first asks non-leading questions to understand what the experience needs, goes in with an unbiased approach, and is ready to pivot if their initial interpretations of the script are not in line with the director’s vision.

Phase 2: Defining the User’s Needs

The user experience designer now has a pile of quantitative and qualitative data from user interviews and tests – now what? The next step is laying out all of the information so the data can be synthesized into findings. This is usually a very hands-on process. A common technique UX designers use is called affinity mapping: every answer or observation is written on a post-it note, then “like” things are grouped together. The groups with the most post-its tell the UX team about users’ most common and important needs and expectations. Then they write a problem statement, usually phrased as a question: “How might we [accomplish X thing that users need]?” Keeping it focused on the issue at hand keeps the approach unbiased and user-centered. The problem statement is a goal, not a sentence proposing a solution. The problem has not been solved yet; it has just been defined.
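The counting step of affinity mapping can be sketched in a few lines of Python; the observations and theme labels below are made up for illustration, not real study data:

```python
from collections import Counter

# Hypothetical interview observations, each already tagged with a theme
# (the hands-on "post-it grouping" step done by the team in a real session).
observations = [
    ("Couldn't find the export button", "navigation"),
    ("Menu labels are confusing", "navigation"),
    ("Wants to share results with a teammate", "collaboration"),
    ("Got lost going back to the home screen", "navigation"),
    ("Asked how to comment on a report", "collaboration"),
    ("Loves the color scheme", "visual design"),
]

# Count how many notes landed in each group; the biggest clusters
# point at the most common user needs.
clusters = Counter(theme for _, theme in observations)
for theme, count in clusters.most_common():
    print(f"{theme}: {count}")
```

Here the largest cluster ("navigation") would drive the problem statement, e.g. “How might we help users find their way around the product?”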

In the same way that UX designers define the problem statement, a sound designer’s second phase involves defining the message: the overall feelings and thoughts that the audience should take away from the experience. They may combine notes from their initial script reading and the conversations they have had. They may also comb the script for sound effects that are mentioned, if they did not do that during the first read-through. This is where they define the world and mood of the experience. Some sound designers might even write their own version of a problem statement: the goal or message of the experience. In my own work, I have found it helpful to have the goal of an experience written down so I can keep referring to it and checking that my work is in line with the tone of the piece.

In both roles, keeping a main goal or statement keeps the process about the end-user or audience. While a designer in either role might end up lending their own artist’s voice to a project, maintaining an unbiased approach (starting with a problem statement or message) keeps everything that is designed about the characters and the story.

Phase 3: Ideating

After user experience designers spend all this time cultivating data, they get to start brainstorming features! A human-centered approach is very systematic; to create a meaningful and relevant product, designers cannot get here without the first two phases. Every proposed feature is based on user research.

Similarly, the sound designer has defined the director’s expectations and the message, mood, and physical environment in the first two phases. The ideation phase is usually about watching and listening to reference material and beginning to gather and record audio. Much like user experience designers may not implement all of the features they think of, a sound designer might gather sounds that they do not end up using at all.

For both roles, this is when people are referring to their research and brainstorming ideas just to see what sticks. During the third phase, user experience designers are constantly referring to the research and problem statement, and sound designers are referring to their script and notes.

Phase 4 & 5: Building & Prototyping, and Testing

This is where things begin to heat up! All that data starts to become a real, tangible experience. At this point, the user experience designer has developed a few prototypes. They can exist as paper prototypes or digital mock-ups, and there may be a couple of versions in order to conduct usability tests and see what is most relevant and meaningful to users. Designers will build prototypes, test them, get feedback, build a new version, and test again. A cycle exists between these phases: whatever is discovered in phase five influences a new phase-four prototype, then it is on to phase five again for more feedback. Rinse and repeat until the design is cohesive (or the project runs out of time or money). Testing and getting feedback are essential to making sure the work continues to serve the users or audience.

A sound designer’s prototype is often the first pass at a full design. For theater, it can be cues they send to be played in rehearsals; for other mediums, it is about inserting all the audio elements and taking notes from the director. Then, they implement different effects for a few iterations until they reach approval from the director and producers. In the sound designer’s case, the director is akin to a beta tester in UX research.

During testing, user experience designers and sound designers evaluate similar considerations.

Phase 6: Iterate

Design thinking strategies are far from linear. Throughout the process, a user experience designer or sound designer refers to their initial research and notes to keep their decisions focused on the audience. They will prototype features (UX) or effects (sound), test them out, take feedback, redo, and test again.

Conclusion

A great sound design, while influenced by the artist’s voice, is unbiased and serves the story. A solid product design does the same thing, because at the end of the day, a user’s journey with a product is a story. Consciously implementing design thinking strategies also makes our approach as sound designers human-centered, resulting in stories that have a huge impact on the audience. A solid, well-researched, thought-through design can bring a project to another level completely; by touching our audiences and end-users in deeply emotional ways, we provide a meaningful and relevant experience to their lives.

Making Sonic Magic from Auditory Illusions

So, if a tree falls in the woods, perhaps it makes infinite sounds

A few years ago, I attended a talk on wave field synthesis, and to say I was captivated feels like a sorry understatement. Wave field synthesis, if you are unfamiliar with it as I was, is a spatial auditory illusion and rendering technique that produces a holophone, or auditory hologram, using many individually driven loudspeakers. The effect is that sounds appear to be coming from a virtual source and a listener’s perception of the source remains the same regardless of their position in the room. Its application in theatrical contexts is very new, but as the techniques and technology slowly become more widely available, the potential for theatrical applications is astounding.

This introduction to wave field synthesis, in addition to being quite exciting, pointed me towards a categorical gap in my knowledge about auditory illusions. Since then, I’ve been filling in the gaps and adding these illusions to my sonic toolbox. Quite a bit of theatrical sound design could already be considered spatial illusion, as when we recreate actual physical phenomena like the Doppler effect. Auditory illusions, however, encompass many effects extending far beyond this.

Optical illusions have long inspired and been integrated into the visual arts. M.C. Escher’s work, for instance, presents the viewer with impossible objects and perceptual confusions. In psychology and neurology, the study of optical illusions has played a large role in understanding the visual perception apparatus. Due to the historical ease of reproducing and distributing visual material, as opposed to auditory material, visual illusions have long been widely encountered, studied, and applied in artistic works. The history of auditory illusions and their use in psychology, music, sound design, and elsewhere is much shorter.

Auditory illusions, much like visual illusions, reveal the deficiencies and oddities of our perceptual processes, but the auditory and visual systems have their own unique attributes. The field of psychoacoustics examines how the brain processes sound, music, and speech. Hearing is not strictly mechanical but involves significant neural processing and is influenced by our anatomy, physiology, and cognition. Researchers have even found that how we unconsciously interpret sounds is influenced by our individual environments, backgrounds, and dialects. Auditory illusions provide key information for psychologists and neurologists unpacking our auditory processes. In artistic applications, auditory illusions provide similar insight into our perceptual processes and illustrate that there is no one true sonic reality.

Dr. Diana Deutsch, a psychologist at the University of California, San Diego, is at the forefront of psychoacoustic research, and her work has utilized countless auditory illusions and sonic paradoxes. If you want to hear examples and read her work, visit her website here: http://dianadeutsch.com/. Due largely to her research, there is an increasing understanding of the cognitive factors in the auditory system and how it has evolved to help us interpret our sonic environments effectively. Psychoacoustic research has been applied in myriad contexts, including perceptual compression codecs like MP3, software development, audio system design, drone flying, car manufacturing, and even, terrifyingly, acoustic weapon development. In the arts, psychoacoustics and auditory illusions have been applied in musical contexts, sound art, film, and theater, though these applications are fairly nascent.

There are a number of types of illusions that can be roughly categorized as spatial illusions, perpetual motion, and non-linear perceptual effects. More auditory illusions continue to be uncovered and understood, so these categories aren’t rigid. Spatial illusions are already a mainstay of theatrical sound design. We frequently manipulate spatialization to make it seem as though sounds are coming from a particular source or direction other than the loudspeaker producing the sound. Holophones can be created in a number of ways including wave field synthesis as I’ve mentioned. Binaural recording is another example of spatial manipulation, reproducing interaural features and anatomical influences of the head and ear. All of these spatial illusions exemplify a distinction between the physical properties of the sound field and the perception of what listeners actually hear.

Unreal sounds created in the inner ear or brain are a part of our daily lives that we typically don’t notice, and several auditory illusions mirror common visual illusions. A Zwicker tone, for example, is the sonic equivalent of an afterimage. Auditory continuity illusions show us that when an acoustic signal is momentarily cut off and replaced by another sound, listeners perceive the original signal to continue through the interruption. Through the familiar precedence (or Haas) effect, we perceive a single sonic event when one sound is followed by another after a short delay, and we ascribe directionality based on the first-arriving sound. While subtle, these are all valuable design techniques.
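
The precedence effect is easy to play with in code. Below is a minimal sketch (my own illustration, not taken from any particular toolkit) that "pans" a mono signal by delaying one channel a few milliseconds instead of changing levels; listeners localize toward the earlier arrival while still hearing one fused event.

```python
import numpy as np

def haas_pair(mono, sr, delay_ms=15.0):
    """Pan a mono signal leftward using the precedence (Haas) effect.

    Instead of changing levels, delay the right channel by a few
    milliseconds; listeners localize toward the earlier (left) arrival
    while perceiving a single fused event.  Keep delay_ms roughly under
    ~30 ms, or the delayed copy starts to be heard as a discrete echo.
    """
    delay = int(sr * delay_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(delay)])   # early arrival
    right = np.concatenate([np.zeros(delay), mono])  # late arrival
    return np.stack([left, right], axis=-1)          # shape (n + delay, 2)
```

A handful of milliseconds is enough to shift the perceived direction; larger delays break the fusion and the trick stops working.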

Less subtle are the perpetual motion illusions. Pitch and tempo circularity are roughly analogous to the barber pole illusion: a sound seems to be endlessly ascending or descending, or a rhythm seems to be endlessly speeding up or slowing down. Both pitch and tempo circularity encompass a number of techniques and effects; the Risset rhythm and Shepard tone are complex versions. The Shepard tone most notably influenced the film score for Dunkirk, creating a palpable sense of anxiety. Much like Escher’s impossible stairs, circularity illusions are both unsettling and entrancing, a powerful design technique.
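
If you want to hear pitch circularity for yourself, the construction is surprisingly simple. Here is a rough sketch (my own simplification, with made-up default parameters) of a Shepard-Risset glissando: octave-spaced sine components all glide upward, and a fixed bell-shaped loudness curve fades them in at the bottom and out at the top so the wraparound is hidden.

```python
import numpy as np

def shepard_tone(duration=4.0, sr=22050, base_freq=55.0, n_octaves=6, cycles=2.0):
    """Sketch of a Shepard-Risset glissando: a tone that seems to rise forever.

    Several sine components spaced an octave apart all glide upward,
    wrapping to the bottom when they reach the top.  A fixed Gaussian
    loudness curve over log-frequency keeps the extremes quiet, so the
    ear hears endless ascent with no net change in register.
    """
    n = int(sr * duration)
    t = np.linspace(0.0, duration, n, endpoint=False)
    sweep = cycles * t / duration          # octaves climbed so far
    out = np.zeros(n)
    center = n_octaves / 2.0               # peak of the loudness bell
    for k in range(n_octaves):
        pos = (k + sweep) % n_octaves      # position in octaves above base_freq
        freq = base_freq * 2.0 ** pos      # instantaneous frequency
        phase = 2.0 * np.pi * np.cumsum(freq) / sr  # integrate freq -> phase
        amp = np.exp(-0.5 * ((pos - center) / (n_octaves / 4.0)) ** 2)
        out += amp * np.sin(phase)
    return out / np.max(np.abs(out))       # normalize to [-1, 1]
```

Loop the result and it keeps "rising" indefinitely; run the sweep backwards for the endlessly falling version.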

There are a number of speech-related auditory illusions. Most famously, the Laurel/Yanny internet phenomenon of 2018 brought speech interpretation illusions into the spotlight. It also demonstrated the incredible subjectivity of our hearing. Similarly, The McGurk Effect presents a puzzling phenomenon in the interaction of vision and speech. When a visual component of a person mouthing a sound is paired with a different sound, listeners perceive neither of the two sounds, but instead a third sound.

Dr. Deutsch has amassed an immense number of stereophonic illusions, including Phantom Words, Binaural Beats, the Glissando Illusion, the Octave Illusion, the Scale Illusion, the Tritone Paradox, and more. Her work shows us how differently people perceive the same sounds. When we listen to speech, the words we perceive are influenced by our expectations, knowledge, dialect, and culture, in addition to the physical sounds we hear. Much of her work has also demonstrated how left- and right-handedness influences how complex sounds are synthesized and localized in our heads. In the Tritone Paradox, which uses sequentially played Shepard tones a tritone apart, some listeners hear the pair ascending while others hear it descending. The potential for designing sounds in which some of the audience experiences the inverse of what others experience is, to me at least, a riveting notion.

While this brief overview is only the tip of the ever-expanding metaphorical iceberg of auditory illusions, I have found that looking into psychoacoustics and auditory neurology provides incredible design techniques and ideas that are not otherwise at our disposal. The potential I’m so excited about is to create audience experiences that raise questions about the subjectivity of our perception of the world around us. Audiences can leave the theater not believing their ears. It also illuminates a greater need for interdisciplinary collaboration and cooperation between fields that often feel disparate: psychology, neurology, audiology, engineering, music, sound design, and more. In my own work, I have yet to utilize almost any of this material (with the exception of spatialization techniques, of course), but it is leading me to think about designing for the whole head: the ear, the brain, and the mind. I so look forward to the continued integration of auditory illusions in theatrical designs, creating sonic magic.

3 Easy Steps to Cutting Classic Cartoon Sound Effects

At Boom Box Post, we specialize in sound for animation.  Although sonic sensibilities are moving toward a more realistic take, we still do a fair amount of work that harkens back to the classic cartoon sonic styles of shows like Tom and Jerry or Looney Tunes.  Frequently, this style is one of the most difficult skills to teach new editors.  It requires a good working knowledge of keywords to search in the library (since almost all cartoon sound effects have onomatopoeic names like “boing”, “bork”, and “bewip” rather than real words), an impeccable sense of timing, and a slight taste for the absurd.

I used to think that you were either funny or not.  Either you inherently understood how to cut a sonic joke, or you just couldn’t do it.  Period.  But, recently, I began deconstructing my own process of sonic joke-telling and teaching my formula to a few of our editors.  I was absolutely floored by the results.  It turns out, you can learn to be funny!  It’s just a matter of understanding how to properly construct a joke.


WHAT NOT TO DO

Before I get into what to do, I think it’s important to point out what not to do.  When editors start cutting classic cartoon sound effects for the first time, they pretty much always have the same problem.  They stumble upon the Hanna-Barbera sound effects library and find some really funny sounds.  Bulb horns–those are always funny!  Boings–hilarious!  Splats–comic genius!  Then, one by one, they start sprinkling these in whenever they feel there’s a dull moment.

Let me say this once: A single funny sound effect is almost never funny.  It’s like blurting out the punchline of a joke without the setup.

Here’s an example of a joke: Someone stole my Microsoft Office and they’re going to pay.  You have my Word.  

I know this is a super lame joke… but it is a joke nonetheless and if you told it at a party, you’d probably be rewarded with an awkward groan/chuckle.  Cutting just a single bulb horn at a random moment is like yelling out “Microsoft Office!” in the middle of a party and expecting people to laugh.  It’s just not funny.  Cutting cartoon sound effects is not the art form of adding “funny” sounds randomly into a visual work; it’s the art of telling a sonic joke.  And to tell a joke, you need three parts: the introduction, the setup, and the punchline.  If you want to go one step further, you can add a bonus part: the tag.


AN EXAMPLE OF JOKE CONSTRUCTION IN PROGRESS

Love him or hate him, this video example of Jerry Seinfeld talking about his process of writing a Pop-Tart joke is very illuminating.  Many different elements go into how funny your joke will be perceived to be: how incongruous the words (or sounds) are to each other, how surprising the punchline is, how well elements from the setup are woven back into the punchline, and how well the “story” of the joke captivates your audience.  With that in mind, it’s not hard to see why it would take two years to craft the perfect Pop-Tart joke.

Watch the video here.

ANATOMY OF A JOKE: THE INTRODUCTION

When telling a joke, this is your first sentence.  It lets the audience know where you’re starting.  In the case of Jerry’s Pop-Tart joke, this is when he starts talking about breakfast in the 1960s being composed of frozen orange juice and toast.  From this, we understand that this is going to be a joke about breakfast.

In sound, the importance of the introduction is all about timing.  Take an episode of Mickey and the Roadster Racers that one of our editors, Brad Meyer, and I worked on.  There was a sequence where all of the characters were driving around and Goofy was holding a stolen diamond.  It was incredibly valuable, and he was nervous about being mistakenly caught with it and taken for the thief.  At one point, he came to an abrupt stop, and the diamond flew out of his car and landed in a Ferris wheel bucket.  The Ferris wheel then began to turn, and the two characters (one good guy and one bad guy) scrambled to enter the bucket with it.  Up they went with the diamond to the top, when it, of course, slipped from their hands, bounced down the spokes of the Ferris wheel one by one, and then landed neatly in Goofy’s car at the bottom.

In this sound design example, choosing the point at which we kick off the joke is key.  As I mentioned earlier, if we just sprinkle cartoon sound effects in whenever anything slightly “toony” happens in the visual, it’s not really a joke.  We’re just shouting funny-sounding words at a party.  Instead, we need to choose an exact moment to begin the joke.  That moment would be when the diamond flies out of Goofy’s car.  We chose a simple sail zip whistle to kick this off, and a glass clink when the diamond landed in the bucket.  Those two sounds were our introduction to the joke.  Keep in mind that from this moment, our goal is to make all of the following cartoon sound effects create anticipation leading up to the final “punchline” effect.

ANATOMY OF A JOKE: THE SETUP

In Jerry’s Pop-Tart joke, after introducing us to the idea that he’s talking about breakfast, he continues his setup by telling us about the downsides of all of the prevailing breakfast foods of the 1960s.  Then, he announces the arrival of the Pop-Tart, likening it to the arrival of an alien spacecraft, while he and his friends were like “chimps in the dirt playing with sticks.”  As he points out, in that phrase alone there are four very funny words: chimps, dirt, playing, sticks.

The setup is the story.  It takes us on a journey and gives us all of the elements we need to pull together the punchline.  But, notice that the more incongruous the elements of the setup, the better the punchline comes off.  What do breakfast, aliens, chimps, dirt, and sticks have in common?  Nothing.  Absolutely nothing.  This is exactly why it’s a great setup.

In sound, the idea is the same.  You kick off the joke with something that makes sense (like a sail zip for an item flying into the air).  In the example of the scene from Mickey and the Roadster Racers, we cut completely incongruous cartoon sounds for the landing of the hero and villain in the bucket (timpani hits), followed by a spin whistle for them scrambling to grab the diamond.  Then, when they got to the top, we cut different pitched glass “tinks” (ascending in pitch with each one) for the diamond falling and hitting spokes of the Ferris wheel along the way. Not only are all of these sounds funny on their own, but they are funnier because they are so different from one another.  Also note that these sounds, although different from one another, continue to build tension leading to the next moment.

ANATOMY OF A JOKE: THE PUNCHLINE

In the Pop-Tart joke, Jerry gives the punchline of wondering how they knew that there would be a demand for “a frosted fruit-filled heatable rectangle in the same shape as the box it comes in, and with the same nutritional value as the box it comes in.”  And he goes on to wrap it up by telling us that in the midst of hopelessness, the Pop-Tart appeared to meet that need of the people.  This punchline works because it harkens back to the introduction when Jerry tells us of the dire state of breakfast choices in America.  The people were in need, and a savior appeared.

In our sonic cartoon example, we did the same thing.  We started with an introduction of a sail zip, then led into a whole batch of incongruous sounds that built anticipation, and then, as a punchline, we used a reversed sail zip to lead us to the final glass clink of the diamond falling into Goofy’s car.  Thus, the joke was bookended.

ANATOMY OF A JOKE: THE TAG

In Jerry’s example, he talks about wanting to develop an additional end to the joke when he ties in the “chimps in the dirt playing with sticks” with the Pop-Tart punchline.  This would be the tag.  In a cartoon, it might be one final sound at the end of the gag that really finishes it off, like two slow eye blinks from another character who just watched the joke take place.  When you see these visual “tags,” be sure that you always consider them part of the joke as a whole and keep the sounds part of the same family.


FINALLY, FARTS

Because you made it to the end of this incredibly long blog post, you shall be rewarded!  So, here is a video of my favorite comedian, George Carlin, telling fart jokes.  Being that we work in animation, we at Boom Box Post love nothing more than a good old-fashioned fart joke.  If you want extra credit, you can analyze this bit to see how the intros, setups, and punchlines work together.  Or, just sit back and enjoy the smell….

Watch the video here. 

This blog is a repost from Kate Finan at boomboxpost.com. Check out the original post here, which includes audio clips.

Whose Job is It? When Plug-in Effects are Sound Design vs. Mix Choices.

We’ve reached out to our blog readership several times to ask for blog post suggestions.  And surprisingly, this blog suggestion has come up every single time. It seems that there’s a lot of confusion about who should be processing what.  So, I’m going to attempt to break it down for you.  Keep in mind that these are my thoughts on the subject as someone with 12 years of experience as a sound effects editor and supervising sound editor.  In writing this, I’m hoping to clarify the general thought process behind making the distinction between who should process what.  However, if you ever have a specific question on this topic, I would highly encourage you to reach out to your mixer.

Before we get into the specifics of who should process what, I think the first step to understanding this issue is understanding the role of mixer versus sound designer.

UNDERSTANDING THE ROLES

THE MIXER

If we overly simplify the role of the re-recording mixer, I would say that they have three main objectives when it comes to mixing sound effects.  First, they must balance all of the elements together so that everything is clear and the narrative is dynamic.  Second, they must place everything into the stereo or surround space by panning the elements appropriately.  Third, they must place everything into the acoustic space shown on screen by adding reverb, delay, and EQ.
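
To make the second objective concrete, here is a tiny sketch (my own illustration, not any console's actual pan law) of constant-power panning, a common way to place a mono element in the stereo field without a loudness dip as it passes through center:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Place a mono signal in the stereo field with a constant-power pan law.

    pan runs from -1.0 (hard left) to +1.0 (hard right).  Sine/cosine
    gains keep the summed power (left^2 + right^2) constant across the
    arc, so the source doesn't get quieter as it moves through center.
    """
    theta = (pan + 1.0) * np.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)
```

With a simple linear pan law the source would dip by about 3 dB at center; the sine/cosine curve avoids that.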

Obviously, there are many other things accomplished in a mix, but these are the absolute bullet points and the most important for you to understand in this particular scenario.

THE SOUND DESIGNER

The sound designer’s job is to create, edit, and sync sound effects to the picture.


BREAKING IT DOWN

EQ

It is the mixer’s job to EQ effects if they are coming from behind a door, are on a television screen, etc.  Basically, anything where all elements should be futzed for any reason.  If this is the case, do your mixer a favor and ask ahead of time if he/she would like you to split those FX out onto “Futz FX” tracks. You’ll totally win brownie points just for asking.  It is important not to do the actual processing in the SFX editorial, as the mixer may want to alter the amount of “futz” that is applied to achieve maximum clarity, depending on what is happening in the rest of the mix.
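
If you do end up printing a futz yourself (after checking with your mixer, of course), the classic "behind a door" treatment is essentially aggressive filtering. Here is a toy one-pole low-pass in Python, purely an illustration of the idea rather than a substitute for a proper futz plug-in:

```python
import numpy as np

def futz_lowpass(x, sr, cutoff_hz=1000.0):
    """Crude 'behind a door' futz: a one-pole low-pass filter.

    Muffles everything above roughly cutoff_hz, the way a closed door
    strips the high end from a sound.  In practice the mixer applies
    futz EQ in the mix so its severity can be adjusted against the
    rest of the soundtrack.
    """
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)  # one-pole coefficient
    y = np.empty_like(x)
    prev = 0.0
    for i in range(len(x)):
        prev = (1.0 - a) * x[i] + a * prev  # smooth toward the input
        y[i] = prev
    return y
```

Low frequencies pass through nearly untouched while highs are heavily attenuated, which is exactly the muffled quality a door or cheap TV speaker imparts.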

It is the sound designer’s job to EQ SFX if any particular elements have too much/too little of any frequency to be appropriate for what’s happening on screen.  Do not ever assume that your mixer is going to listen to every single element you cut in a build, and then individually EQ them to make them sound better.  That’s your job!  Or, better yet, don’t choose crappy SFX in the first place!

REVERB/DELAY

It is the mixer’s job to add reverb or delay to all sound effects when appropriate in order to help them to sit within the physical space shown on screen.  For example, he or she may add a bit of reverb to all sound effects which occur while the characters on screen are walking through an underground cave.  Or, he or she may add a bit of reverb and delay to all sound effects when we’re in a narrow but tall canyon.  The mixer would probably choose not to add reverb or delay to any sound effects that occur while a scene plays out in a small closet.

As a sound designer, you should be extremely wary of adding reverb to almost any sound effect.  If you are doing so to help sell that it is occurring in the physical space, check with your mixer first.  Chances are, he or she would rather have full control by adding the reverb themselves.

Sound designers should also use delay fairly sparingly.  This is only a good choice if it is truly a design choice, not a spatial one.  For example, if you are designing a futuristic laser gun blast, you may want to add a very short delay to the sound you’re designing purely for design purposes.

When deciding whether or not to add reverb or delay, always ask yourself whether it is a design choice or a spatial choice.  As long as the reverb/delay has absolutely nothing to do with where the sound effect is occurring, you’re probably in the clear.  But you may still want to supply a muted version without the effect in the track below, just in case your mixer finds that the affected version does not play well in the mix.

COMPRESSORS/LIMITERS

Adding compressors or limiters should be the mixer’s job 99% of the time.

The only instance in which I have ever used dynamics processing in my editorial was when a client asked to trigger a pulsing sound effect whenever a particular character spoke (there was a visual pulsing to match).  I used a side-chain and gate to do this, but first I had an extensive conversation with my mixer about whether he would rather I set this up and delivered the tracks, or set it up himself.  If you are gating any sound effects purely to clean them up, my recommendation would be to just find a better sound.
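
For the curious, the side-chain trick above can be sketched in a few lines. This is a simplified illustration of the concept (an envelope follower on the key signal opening a gate on the effect), not the actual session setup, and all the parameter values are made up:

```python
import numpy as np

def sidechain_gate(effect, key, sr, threshold=0.05, attack_ms=5.0, release_ms=50.0):
    """Open a gate on `effect` only while the side-chain `key` signal is hot.

    A peak envelope follower tracks the key's level; the gate gain
    targets 1 where the envelope exceeds `threshold` and 0 elsewhere,
    smoothed by separate attack/release coefficients so it doesn't click.
    """
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0   # envelope of the key signal
    gain = 0.0  # smoothed gate gain
    out = np.empty_like(effect)
    for i in range(len(effect)):
        env = max(abs(key[i]), env * rel)        # peak follower with decay
        target = 1.0 if env > threshold else 0.0
        coeff = atk if target > gain else rel    # attack opens, release closes
        gain = target + (gain - target) * coeff  # smooth toward target
        out[i] = effect[i] * gain
    return out
```

Feeding the dialogue track in as `key` and the pulsing effect as `effect` reproduces the "pulse only while the character speaks" behavior described above.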

PITCH SHIFTING

A mixer does not often pitch shift sound effects unless a client specifically asks that he or she do so.

Thus, pitch shifting almost always falls on the shoulders of the sound designer.  This is because when it comes to sound effects, changing the pitch is almost always a design choice rather than a balance/spatial choice.

MODULATION

A mixer will sometimes use modulation effects when processing dialogue, but it is very uncommon for them to dig into sound effects with this type of processing.

Most often this type of processing is done purely for design purposes, and thus lands in the wheelhouse of the sound designer.  You should never design something with unprocessed elements, assuming that your mixer will go in and process everything so that it sounds cooler.  It’s the designer’s job to make all of the elements as appropriate as possible to what is on the screen.  So, go ahead and modulate away!

NOISE REDUCTION

Mixers will often employ noise reduction plugins to clean up noisy sounds.  But, this should never be the case with sound effects, since you should be cutting pristine SFX in the first place.

In short, neither of you should be using noise reduction plugins.  If you find yourself reaching for RX while editing sound effects, you should instead reach for a better sound!  If you’re dead set on using something (say, a recording of your own that is just too perfect to pass up but incredibly noisy), then by all means process it with noise reduction software.  Never assume that your mixer will do this for you.  There’s a much better chance that the offending sound effect will simply be muted in the mix.


ADDITIONAL NOTES

INSERTS VS AUDIOSUITE

I have one final note about inserts versus AudioSuite plug-in use.  Summed up, it’s this: don’t use inserts as an FX editor/sound designer.  Always assume that your mixer is going to grab all of the regions from your tracks and drag them into his or her own tracks within the mix template.  There’s a great chance that your mixer will never even notice that you added an insert.  If you want an effect to play in the mix, then make sure that it’s been printed to your sound files.

AUTOMATION AS EFFECTS

In the same vein, it’s a risky business to create audio effects with automation, such as zany panning or square-wave volume automation.  These may sound really cool, but always give your mixer a heads up ahead of time if you plan to do something like this.  Some mixers automatically delete all of your automation so that they can start fresh.  If there’s any automation that you believe is crucial to the design of a sound, then make sure to mention it before your work gets dragged into the mix template.

Sound Design in Another Medium

Sound design is creating a world or character purely out of auditory vibrations.  We morph mood and meaning through music and sound effects.  As showcased by pieces like Peter and the Wolf by Sergei Prokofiev, the aural medium can tell a story on its own.  More often than not, however, sound design is not a monolith and must integrate with visual mediums.  This opens the door for visual style elements to influence sound design.

When I took Sound Design as a course in college, our main textbook was Understanding Comics: The Invisible Art by Scott McCloud.  McCloud boils comics down to their essence, choosing to focus on the assembly of narrative and representation rather than technique.  The philosophy behind choosing what to include and what to leave out is similar between the visual and the aural.  Elements of design (rhythm, focus, contrast, form, movement) are also shared.  Using McCloud as a guide, take a look in the graphic novel section of the library as research for your next project.  But why stop at the visual? Where else can we find inspiration?

The human body was gifted with several senses, and all of them can be used to evoke emotional responses.  Taste is an experience that occurs over time but is remembered as a static moment, much like a song.  A particular meal has a temperature, different flavors competing and complementing, and an overall texture.  A song has dynamics, different instruments with melodies and harmonies, and an overall mood.  Maybe the character in a particular film has a favorite meal that defines them.  How should the accompanying theme add to the character development?  Think of Pippin singing to Denethor in Return of the King from the Lord of the Rings trilogy: a greasy meal, tomatoes popping and gristle squishing, contrasted with Pippin’s haunting ode to his comrades in arms.

Another sense, smell, also shares similarities with sound.  While perfume is just as manufactured as a pop tune, it has the opportunity to provide insight into character design.  Imagine a femme fatale adorned with a power suit; her chosen scent is bound to be as bold as she is.  Like sound, it also transforms through time.  When she first enters the scene, her interactions with those around her, and what happens in the wake of her absence correspond to the “top,” “middle,” and “base” notes of the bouquet.  Film cannot capture scent, yet, but the sound design can pick up on the “notes” of her cologne.

I have recently had the opportunity to try my hand at mixing mediums.  In August, I gave birth to a new little SoundGirl, and I wanted to share with her one of my favorite stories:  Roverandom by J.R.R. Tolkien.  I wanted her to be able to follow along with me but also to have the story available if she was being babysat by her grandparents.  In my copy, the publishers thoughtfully included prints of Tolkien’s illustrations, and I used those as a guide for a fabric book and a radio play.  The mood and style permeate the scene designs done in felt, while the narrative and characterizations are explored through sound effects, voice, and music.  Together, the confluence is greater than the sum of its parts and makes me a better sound designer.

 

An Open Letter to Theatre Reviewers

The play includes more than the actors on the stage

Dear theatre critics and reviewers worldwide,

First off, I’d like to say thank you for the love and enthusiasm you have for live theatre.  While the general population launches forward to keep up with technological trends such as virtual reality, wearable gaming, augmented reality, high-def displays, and holographic video, some of us, yourselves included, are desperately clinging to the lost art of live performance.  While technological leaders spend billions of dollars trying to invent the next piece of equipment that will make that game or movie look so real you can touch it, theatres everywhere are struggling to get people into their auditoriums to witness what can only be described as the pinnacle of reality, and no, the irony is not lost on me.  We theatre-makers appreciate you because you still believe in the magic of theatre. You still come to the shows, you put your phones away, you pay attention, and most importantly, you report. We rely on these reports to get the word out about this beautiful piece of REAL magic that’s happening in the readers’ very same city. There’s just one little thing I want to discuss, though: There’s more to the play than just the actors on stage.

I’m a sound designer working mostly in regional theatre, and I would say 85% of the reviews I read don’t even mention designers or technical crew.  Now, I know that there is a lot that happens in this industry that people on the outside just don’t know about, so I get that, but if you are reading your program before the show starts, you’ve probably noticed that there’s an entire page dedicated to production.  There’s probably an artistic director, production manager, scenic designer, costume designer, lighting designer, and sound designer. There’s sometimes a projection designer, wig designer, music director, pit musicians, composer, choreographer, fight director, and honestly, probably some other designers/directors that I didn’t even know existed.  You will also most likely find a stage manager and sound engineer, a light board operator, spotlight operators, deck crew, wardrobe crew, audio crew, and all of the artisans that built, sewed, and painted all of the physical aspects of the show. At the level of theatre I work on, I’d say there are generally an additional 40-50 people contributing to the show who are never seen on stage.  Isn’t that also worth reporting on? The actors do an amazing job of taking audiences out of their worlds for a few hours, but would it even be possible to make that journey in a dark, empty, silent room?

This is not the first letter to theatre reviewers that I have written.  Several years ago, I kept reading review after review of shows that my colleagues and I had designed the sound for and never read even a mention of those designs.  All of those shows were reviewed by the same person, and I emailed him asking why he never reported on what the show sounded like. Given that most of those shows were musicals, I’d say the aural element was a pretty significant one.  His reply to me was that he didn’t know what sound design was, or that it was even a thing. I get that, I really do, but as a newspaper writer, aren’t you something of a journalist? Haven’t you been taught to investigate, research, and find out the whole story?  I gave him some enlightening information on the practice of sound design and waited on pins and needles for an improvement in his next review. I’m sad to report that I never received that satisfaction.

It’s not just sound designers who get this treatment, even though, as a sound designer, it is the area where I am the most sensitive.  Many of the reviews I have read of theatres in my area over the past year have had little to no mention of design or crew. Instead, the reviews have consisted of a paragraph or two sending glowing praise to leading actors in the show, the occasional shout-out to supporting ensemble members, and then the rest of the review reads like a book report telling us what the story is about.  Sometimes there is the rare and seemingly obligatory list of designer names at the bottom of the review, as if their editor told them they had to say something about design, so they mentioned the designers’ existence to appease the boss. I’m not a reviewer, so maybe I’m wrong, but I just don’t think that dedicating 75% of the column to writing the show’s CliffsNotes is a review of what actually happened in that room.

As I mentioned before, I know that what we do is mysterious, and sometimes difficult to understand, so here are some facts about regional theatre and the kinds of questions you should be trying to answer:

  1. Making a play takes a lot of planning!  The design team of a regional show will probably start that planning process 4-6 months ahead of the show’s opening, and meet every 1-2 weeks to discuss the show’s progress.
  2. Making a play also takes money!  A large-scale musical on the regional theatre level could cost $30k-$60k to get the show looking and sounding spectacular.
  3. Making a play takes research!  The next time you’re reviewing a show, take a look at the details.  Do those Civil War-era costumes match what you remember from history books?  Where did they come from? Did this theatre make them in-house? What about that authentic-looking Mid-Century Modern furniture that is so popular now?  That chair alone would go for $5000, so how did this theatre get it?
  4. Making a play takes technical knowledge!  See all of those lights moving, changing colors, and making interesting patterns on the stage?  Do you hear all of those sound effects swirling around the space? Can you hear the amplified voices blending with the music? This is not a My-First-System kind of thing.  Someone went to a lot of trouble to make that cool stuff work.
  5. Making a play takes coordination!  There are so many moving parts to a play, and once it starts, it has to keep going.  We can’t just skip over the hard parts, and if something goes wrong, someone has to make a quick decision on what to do to keep the train moving.  Who’s doing that, and how? How do the people on the ground know what to do? How much practice does all of this take?

So, reviewers of theatre, again, thank you for your dedication and love.  We really do appreciate it. But please, the next time you go to the theatre, try to answer the not-so-easy questions, because for this dying art, “the actors were great, and this story is a lot of fun” is just not enough anymore.  We need you to help expose this world to those who don’t know what they’re missing, and this world has some pretty stiff competition in this modern and highly technical society.
