The longer I run sound, the more I realize how subjective a process it is to get a good mix. What sounds right to one engineer will sound terrible to another. What works for one artist on one song may not be the best approach for a different song or a different artist. In fact, some good engineers will tell you that the sound engineer is really the final musician in the chain—they “play” the soundboard and shape the tones and dynamics to achieve a desired musical effect. In short, mixing is art, and art is personal. We all have our opinions on what is aesthetically “good.”


Since mixing is so subjective, addressing one of the most complicated tasks in mixing—getting the right vocal/instrument mix—produces a variety of often-conflicting answers. To help get a wider perspective on this, I reached out to a few other sound engineers here at HARMAN Professional Solutions to get their take:

  • Eric Friedlander, Business Development Manager, Tour Audio (and former audio engineer and stagehand)
  • Doug Hall, Product Manager, Device Control (and former production manager, broadcast audio engineer and touring sound crew chief)
  • Pete Stauber, Technical Support Engineer (and long-time tour audio engineer)

The three members of my panel have all been on the Insights blog before, so it was great to welcome them back.

When I brought up the topic of vocal/instrument balance, Eric Friedlander was quick to point out that part of the issue is not just the variance of opinions, but the variance of music styles as well:

I think about the style of music I’m mixing. Pop music or vocal-centric jazz could lead me to a more vocal-forward mix, whereas a rock show could allow me to sit the vocals a bit closer to the rest of the band. Once I’ve established what sort of overall goal I’m going for, it helps me make choices as far as what I do to make room for the vocals. 


Eric is obviously correct on this point, and the distinction often comes down to the main focus of the composition (and of the artist). Musical styles where the vocalist is the true musical center of a piece need a certain amount of “space,” so you can hear both the words to the song and the distinctive characteristics of the vocal performance. In other musical styles, such as rock, the instruments share equal “weight” with the vocals; the crushing guitar riff and instrumental solos are just as important as the singer. In those cases, you want to be sure the vocals share equal space with the other instruments, each yielding room at the appropriate point in the song.

Of course, that’s easier said than done. I asked my panel how to get a good vocal balance, and the consensus was that there are three main ways to accomplish this: stereo image placement, frequency adjustment and dynamics effects. Each of these is effective in its own right, and they can also be combined for maximum effect. Which approach(es) you use depends on the application.

Stereo Image Placement

When mixing vocals and instruments, stereo image placement is the act of panning certain instruments to make space. Eric Friedlander explains it this way:

Generally, I like to make as much use of the L/R space in the mix as possible, panning instruments out of center and making room down the middle for the main vocals. I may even cheat background vocals a hair left or right as well.

This is an old technique, but still tried and true. Doug Hall pointed out that The Beatles were famous for hard panning, even on their vocal tracks. Doug explained the effectiveness of panning by stating:

Panning stereo instruments is also a good way of mixing to create a layer of sounds and fitting in all of the musical elements in their appropriate perspective.

In other words, panning instruments and creating layers of sound allows you to “place” sounds in virtual space, and the reason why this works is psychological. Our brains consider anything to our left or to our right to be less important than what is in front of us. By leaving the most important element (the vocals) in the center, our brains think of that as being in front, and thus, we pay more attention to it than elements that are on the left or right.
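The center-front placement described above can be sketched with a constant-power pan law, which keeps perceived loudness steady as a source moves off center. This is a generic illustration, not any particular console's implementation; the function name and the -1.0 to +1.0 pan convention are mine:

```python
import math

def constant_power_pan(sample, pan):
    """Place a mono sample in the stereo field with a constant-power
    (sin/cos) pan law. pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Keeping the vocal at pan = 0.0 leaves it equally present in both
# speakers, while instruments panned off center clear the middle.
left, right = constant_power_pan(1.0, 0.0)
```

Because the squared channel gains always sum to one, an instrument moved off center changes position without changing apparent level.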

You can also take a related approach using reverb. Rather than moving instruments to the left or right, reverb moves them “back” in space (our brains interpret a sound with more reverberation as being farther away from us). This makes the vocals feel closer, and thus clearer. Eric Friedlander states that this “helps instruments slide back a little bit in the mix,” though he rightly cautions that engineers should use the effect sparingly, lest they lose detail or clarity.

Frequency Adjustment

Another way to carve out space for vocals is frequency adjustment. Doug Hall explains this approach:

A key element to ensuring there is proper room for the vocals in the mix is creating space in the vocal frequency range, using equalization. There is a lot of competition in the 200 Hz to 1 kHz range, where most of the vocal details are, so sometimes using EQ to cut the instruments or boost the vocals in that range can help.

The approach here is to locate instruments that are competing with the vocals (pianos and acoustic guitars are often culprits) and use a parametric EQ band with a wide bandwidth (low Q) to scoop out frequencies in the 200 Hz to 1 kHz range. This allows the important vocal details to cut through the mix.
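A scoop like this can be sketched with a standard peaking-EQ biquad (the coefficient formulas come from the widely used Audio EQ Cookbook; the center frequency, depth, and Q below are illustrative values, not a recommendation from the panel):

```python
import math, cmath

def peaking_eq(fs, f0, gain_db, q):
    """Audio-EQ-Cookbook peaking EQ biquad coefficients, normalized so a0 = 1."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def magnitude_db(b, a, fs, f):
    """Evaluate the filter's magnitude response (in dB) at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A broad -4 dB scoop centered at 450 Hz (low Q = wide bandwidth)
# to clear room for the vocal in the 200 Hz to 1 kHz region.
b, a = peaking_eq(48000, 450, -4.0, 0.7)
```

The cut reaches its full -4 dB at the center frequency and fades back to flat well away from it, so the instrument loses a little body in the vocal range without being changed elsewhere.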

To balance this, you may cut or reduce vocal frequencies that might detract from other instruments. This is the reason engineers often place a high-pass filter (HPF) on the vocal track. It eliminates muddiness and helps instruments stand out on the low end, while giving the vocals presence in the appropriate range. Pete Stauber uses a similar carve-and-yield approach between his kick drum and bass guitar. Pete explains:

I usually give a little tight Q bump up in the kick at 60 Hz to 80 Hz range with a corresponding notch pulled down in the bass. I then give the bass a bump up around 100 Hz to 120 Hz (or up to 200 Hz, depending on the pickup on the bass) with a corresponding notch pulled down in the kick. That gives them their own “space” in the mix and sets them off quite distinctly. Without this EQ, they sometimes get muddy and “mashed” together and lose their distinct punch.

Some may rightfully point out that “notching” the tone this way does mean there is some tradeoff regarding the tone of the instruments. Eric Friedlander addresses this by explaining:

I may have an amazing snare or guitar sound, but nobody goes home humming the snare or rhythm guitar part, so I may trade a little bit of my snare or guitar tone to make more room for a competing frequency in a male vocal, for instance.

In other words, taking this approach may mean not having the perfect tone for each instrument in isolation. However, it can help the instruments and vocals sound better together.

Dynamics Effects

The third way you can achieve better vocal/instrument balance is by using dynamics effects to control variance in volume. Interestingly, this was the point on which our panel was most split. Each panelist had a slightly different approach to using compressors, but all three agreed that good compression settings were key to effective vocal placement.

Doug Hall prefers a fairly heavy compression, the goal being to ensure all the rich characteristics of the vocals are loud enough to cut through the instruments:

I like a 4:1 or 6:1 ratio and keep them pumping until I can hear an audible breathing, then back off the threshold. Some sound mixers don’t like to compress, so they ride the vocal fader. That is just too much work for me. I prefer to ride instrument solos to bring them to the front when needed.
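The ratios Doug mentions are easiest to see in the dB domain. This is a minimal sketch of a static compression curve (no attack, release, or makeup gain), with illustrative threshold and level values of my own choosing:

```python
def output_level_db(input_db, threshold_db, ratio):
    """Static compression curve in the dB domain: above the threshold,
    every `ratio` dB of input yields only 1 dB of output."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# At a 4:1 ratio, a vocal peak 12 dB over the threshold
# comes out only 3 dB over it; signal below the threshold is untouched.
peak = output_level_db(-8.0, threshold_db=-20.0, ratio=4.0)
```

At 6:1 the same peak would emerge only 2 dB over the threshold, which is why heavier ratios hold the vocal at a more constant level above the band.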

Conversely, Eric Friedlander argues for lighter compression, suggesting that a wider dynamic range brings more life to the vocals, allowing them to rise above the instruments at appropriate times:

I’ve seen people get a vocal to sit in a mix before, but then they compress their mix to the point where it has no dynamic range to jump out above the instruments. I’ll dial in a little bit of compression on the vocal channel strip, whatever is appropriate for the singer and their style, but I try to let as much dynamic range through as possible. To me, that’s a different approach than when you’d use a compressor to manage a problematic singer (I’m assuming a talented and capable source here).

Finally, Pete Stauber votes for a middle path: parallel compression. Parallel compression duplicates the signal and uses both the dry and compressed versions in two parallel channels. This lets one channel sit heavily compressed and grounded in the mix, while the other, processed with only a limiter, preserves most of the original dynamics.


Pete explains:

Proper compression and limiting on vocals is important, and parallel compression is a great way to do that. In fact, most vocals recorded and mixed in a studio (or live recording) are compressed in this manner. In live situations, you can usually use the compressor and the limiter together to achieve this level of compression. Then, you can lay the vocal in the mix, and it will not be overpowering, but won’t get lost either.
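The wet/dry blend Pete describes can be sketched numerically. This toy example uses a static gain curve with no attack or release smoothing, and the threshold, ratio, and mix values are illustrative only; it simply shows why the blend tames peaks while leaving quiet detail nearly untouched:

```python
import math

def compress(sample, threshold, ratio):
    """Static compressor: reduce the portion of the level above the threshold."""
    level = abs(sample)
    if level <= threshold:
        return sample
    return math.copysign(threshold + (level - threshold) / ratio, sample)

def parallel_compress(samples, threshold, ratio, wet_mix):
    """Blend the dry signal with a heavily compressed copy of itself."""
    return [
        (1.0 - wet_mix) * s + wet_mix * compress(s, threshold, ratio)
        for s in samples
    ]

# A quiet sample passes through almost unchanged, while a loud
# peak is pulled down, but not as far as full compression would take it.
out = parallel_compress([0.1, 0.9], threshold=0.3, ratio=8.0, wet_mix=0.5)
```

Because half the output is always the dry signal, the vocal keeps its natural dynamics while the compressed half anchors it in the mix.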

Of course, as I said at the outset, tastes and musical styles vary, so the compression approach you use depends on both the music and what you like. However, by using a combination of stereo image placement, frequency adjustment and dynamics effects, you can go a long way to ensuring the vocals properly meld with the instruments, creating a pleasing final result.

What is your favorite approach to getting vocals to sit in the mix? Share your insights in the comments.


1 comment

  1. J Palestina

    My mixes range from a 100% sequenced duo/trio to a full live 10 piece band. The sequenced music is a lot easier. My keyboard (Kronos) allows me to mix the music separately from the vocals and leave a space for them to fill during sound check. I find this to be easy and consistent. The 10 piece live band is a completely different issue. I use an XR18 with a pair of ip2000s. I have found that stereo imaging is the easiest to work with but pre-configuring a “starting point” set up saves time and guesswork. I pan instruments mirroring their stage position. This usually puts the bass and drums at center which is ok because they really don’t need stereo imaging. The vocals are panned in direct relation to where they are located on stage with a compression bump to push them out front. Guitar, keyboards and horns occupy a wider space centered on the stage location. You can pre-set this up as a starting point and make fine adjustments at sound check – all rooms are different anyway, so placement is usually ok, with sound check adjustments mainly in EQ and FX levels. Hope this helps.