Welcome to Hard Truths, the series on the LANDR Blog where we cut through the noise and take on a harsh reality from the world of music production. This is the advice you might not want to hear—but will make you a better producer.
Most producers get pretty excited about their tools.
The hottest new plugins trigger pages of forum hype from obsessive musicians chasing a better mix.
But we rarely talk about the tradeoffs and drawbacks that come with the processes we use in every session.
My hard truth for today? EQ, compression and other types of mix processing can do more harm than good—in fact, using no processing at all often sounds better.
That’s not meant to be discouraging. You can easily reduce the negative effects by using your tools properly and making good decisions at every stage in your process.
In this article I’ll explain some of the issues with your basic mix effects—and how to avoid them.
Phase shift with EQ
Most mix engineers apply some amount of EQ to every track in their session.
After all, the frequency balance of each track is crucial to make sure the individual instruments in your mix can be heard clearly.
That’s how good EQ reduces the effect of masking. Masking is when instruments have similar amounts of energy in the same frequency areas and cover each other up when mixed together.
Unfortunately, no EQ is perfect. Any EQ you use is just a set of filters. Those filters have certain properties that you’re stuck with, no matter how sophisticated your plugins are.
When you filter frequencies out using an EQ, you make a slight change to the phase of the signal.
Phase in audio can get complicated, but all you need to know for now is that it means very small differences in the timing of a signal.
Changes in phase aren’t really a big deal for single tracks.
But differences in timing between two related signals can create destructive interference that causes your tracks to compete with each other.
Any time you record something with more than one source at the same time—such as multiple microphones or a mic and a DI—phase issues can show up.
The problem can even occur between samples and instruments in layered kicks and basses.
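Here's a quick way to hear (well, compute) what a timing difference does. This is a minimal numpy sketch of comb filtering between two copies of the same signal—the sample rate and the 1 ms offset are arbitrary values chosen for illustration, not anything specific to real recordings:

```python
import numpy as np

fs = 48_000                # sample rate (arbitrary choice for this demo)
delay = 0.001              # 1 ms timing offset between two "mics"
t = np.arange(fs) / fs     # one second of time

# The first cancellation notch sits at 1 / (2 * delay) = 500 Hz:
# at that frequency the delayed copy arrives exactly half a cycle late.
notch_freq = 1 / (2 * delay)

direct = np.sin(2 * np.pi * notch_freq * t)
delayed = np.sin(2 * np.pi * notch_freq * (t - delay))
summed = direct + delayed
notch_rms = np.sqrt(np.mean(summed ** 2))
print(f"RMS at {notch_freq:.0f} Hz after summing: {notch_rms:.6f}")
# Near zero: the two copies cancel almost completely.

# One octave up, the 1 ms delay is a full cycle, so the copies reinforce.
boosted = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1000 * (t - delay))
boost_rms = np.sqrt(np.mean(boosted ** 2))
print(f"RMS at 1000 Hz after summing: {boost_rms:.4f}")
```

The same timing offset kills one frequency and doubles another—that uneven, notchy response is exactly what "tracks competing with each other" sounds like.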
Here’s where EQ comes in. When you add EQ to only one of a pair of related tracks, you risk creating destructive interference—even if the tracks were perfectly in phase before.
That’s because the EQ’s filter introduces phase shift.
Here’s an example. I have two identical sine wave tones on two different tracks.
Without EQ, listening to both at once makes the sine wave sound louder.
But flip the polarity of one sine wave—a 180-degree inversion—and they cancel out perfectly.
The signal is gone. The two tracks are exactly the same so the result is 100% destructive interference!
But adding a filter changes the outcome. With a high pass filter inserted, inverting the phase of one track no longer cancels the sound out completely—even when the filter’s cutoff frequency is much lower than the sine wave’s fundamental.
The difference between the two inverted tracks is what will be lost to destructive interference when the two signals combine.
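You can reproduce this experiment in a few lines of code. Below is a sketch using a first-order high-pass filter I've written by hand as a stand-in for an EQ's filter (a simple RC model—not any particular plugin); the 48 kHz rate, 20 Hz cutoff, and 1 kHz tone are illustrative choices:

```python
import numpy as np

def one_pole_highpass(x, cutoff, fs):
    """First-order high-pass filter (simple RC model).

    Like any minimum-phase EQ filter, it shifts the phase of the
    signal slightly—even well above its cutoff frequency.
    """
    a = 1.0 / (1.0 + 2 * np.pi * cutoff / fs)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)     # the same 1 kHz sine on both tracks

# Identical tracks, one polarity-flipped: perfect cancellation.
print(np.max(np.abs(tone + (-tone))))   # 0.0

# Now high-pass only ONE track at 20 Hz -- far below the 1 kHz tone.
filtered = one_pole_highpass(tone, 20, fs)
residual = filtered + (-tone)
rms = np.sqrt(np.mean(residual ** 2))
print(f"residual RMS after EQ on one track: {rms:.4f}")
# Nonzero: the filter's phase shift keeps the tracks from cancelling.
```

Even with the cutoff more than five octaves below the tone, the leftover signal is on the order of a percent or two of the original—small on a test tone, but multiplied across a real mix it changes how related tracks sum.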
Now imagine that was a pair of key tracks in your mix!
Anytime you EQ two related tracks differently you risk introducing destructive interference from phase shift.
The best way to avoid it? EQ related tracks on a bus. Or better yet, skip the EQ entirely. Get your sounds as close to their finished state as you can right from the start.
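Why does bus EQ help? Because EQ filters are linear: processing two tracks identically (or processing their sum once) can't change their phase relationship to each other. Here's a tiny numpy check—the three-tap FIR filter is just a made-up stand-in for any linear EQ:

```python
import numpy as np

rng = np.random.default_rng(0)
track_a = rng.standard_normal(1000)   # two arbitrary "tracks"
track_b = rng.standard_normal(1000)

def simple_eq(x):
    # Stand-in linear EQ: a tiny FIR filter.
    # Any linear filter behaves the same way here.
    return np.convolve(x, [0.5, 0.3, 0.2], mode="same")

# EQ each track with the SAME filter, then sum...
per_track = simple_eq(track_a) + simple_eq(track_b)
# ...vs. sum first, then EQ the bus once.
on_bus = simple_eq(track_a + track_b)

match = np.allclose(per_track, on_bus)
print(match)   # True
```

Identical processing gives an identical result either way—so as long as related tracks share one EQ, its phase shift can't make them cancel against each other.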
Nonlinearities and saturation
My next unexpected side effect happens in plugins that claim to give you the warmth and saturation of vintage gear.
We’d all love to own the classic gear that was used on legendary albums, but analog hardware offers more than just vintage vibe.
Analog circuits introduce their own quirks that are extremely difficult to recreate with digital plugins.
I’m talking about nonlinearities. A nonlinearity is any behavior that makes a circuit’s output more than a simple scaled copy of its input—it’s what creates the extra harmonics you hear as saturation and distortion.
Most plugins designed to give you a warm, vintage sound use some kind of saturation to create their effect.
Plugin designers understand nonlinearities, but they lead to a specific problem in digital audio. Here’s why.
Saturation creates additional harmonic partials in a sound. These are the overtones that help your brain tell the difference between two different musical timbres.
But adding partials using saturation creates harmonics all over the frequency spectrum.
In fact, some land above the Nyquist frequency—half your sample rate—where they can’t be represented in a digital file at all. Instead of disappearing, they fold back down into the audible range as aliasing: inharmonic frequencies that aren’t musically related to your sound, which translates into negative effects for your mix.
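You can see the fold-back happen with a naive (non-oversampled) waveshaper. This sketch saturates a 5 kHz sine with tanh—a generic soft-clipping curve, not any specific plugin's algorithm—at a 44.1 kHz sample rate:

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs                 # one second: FFT bin k = k Hz
tone = np.sin(2 * np.pi * 5000 * t)    # clean 5 kHz sine

# Naive saturation with no oversampling: tanh waveshaping.
saturated = np.tanh(5 * tone)

spectrum = np.abs(np.fft.rfft(saturated))
# Odd harmonics land at 5, 15, 25, 35 kHz... but Nyquist is 22.05 kHz.
# The 25 kHz partial folds back to 44100 - 25000 = 19100 Hz, and the
# 35 kHz partial to 9100 Hz -- neither is a harmonic of 5 kHz.
for freq in (5000, 15000, 19100, 9100):
    print(f"{freq:>5} Hz: {spectrum[freq]:.1f}")
```

The components at 19,100 Hz and 9,100 Hz are pure aliasing—frequencies with no musical relationship to the 5 kHz tone. Oversampling before the nonlinearity (processing at a higher internal rate, then filtering back down) is the standard way plugins suppress this.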
This is just another of the invisible consequences that comes from adding lots of processing.
Modern plugins use impressive tech like oversampling to get around these limitations most of the time, but the best way to avoid them is to use saturation only at the right times.
If you simply decide that analog = good, you’ll end up piling on plugin after plugin and compounding the effects of this problem.
How to make it better
These problems might make it seem like there’s no way to win against the drawbacks of applying effects.
But we obviously need to use them for basic mixing tasks like reducing dynamic range and adjusting frequency balance.
To fix it you need to weigh the control that your processors give you over the signal against any negative effects they might introduce.
That’s what pro engineers mean when they give advice like “less is more” and “get it right at the source.”
The less processing you can do the better. And getting it right at the source rings true no matter what genre of music you make.
Your best bet is to learn to think about the mix before you even start. That way you can use your tools to your advantage, instead of fighting against them.