Audio normalization is a concept that comes up fairly often for new producers.

It’s a music production process that’s been around since the earliest days of digital audio, but is it still relevant?

It’s not always easy to tell whether to apply normalization or to avoid it completely—and getting it wrong can cause issues in your mix.

In this article I’ll break down what audio normalization is, when to use it and when to steer clear.

Let’s get started.

What is audio normalization?

Audio normalization is a process that increases the level of a recording by a constant amount so that it reaches a target—or norm. Normalization applies the same level increase to the entire duration of an audio file.

Normalization is typically used to scale the level of a track or file to just within its available maximum.

If that sounds complicated, all it means is that normalization turns the whole file up until its loudest point hits the target level.
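In code, peak normalization is just a one-line gain calculation. Here's a minimal sketch in Python with NumPy, assuming float samples in the -1.0 to 1.0 range (`peak_normalize` is a made-up helper for illustration, not any DAW's actual function):

```python
import numpy as np

def peak_normalize(samples, target_dbfs=0.0):
    """Apply one constant gain so the loudest peak lands at
    target_dbfs (0 dBFS = digital full scale, i.e. 1.0)."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)  # dBFS -> linear amplitude
    return samples * (target_linear / peak)

# A quiet signal peaking around 0.25 (roughly -12 dBFS)...
quiet = 0.25 * np.sin(np.linspace(0, 2 * np.pi, 100))
loud = peak_normalize(quiet)
# ...every sample is scaled by the same gain, so the waveform's
# shape and dynamics are untouched; only its level changes.
```

Because the gain is constant, the loud version is just the quiet one multiplied through by a single number.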

When to use audio normalization

Audio normalization might seem a bit old fashioned by modern standards. With all the less invasive ways to increase a track’s gain out there, why would you choose to use normalization?

Some applications come from the early days of digital audio.

Back then many components had limited performance when it came to dynamic range and signal-to-noise ratio.

Normalizing the audio sometimes helped get the best results from primitive AD/DA converters.

Normalization is still a common feature on hardware samplers, where it helps equalize the volume of different samples in memory.

It’s handy in this situation because the dynamic range and signal-to-noise ratio remain the same as they were before.
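You can check that claim numerically: a constant gain multiplies the signal and the noise floor alike, so the ratio between them can't change. A quick sketch with NumPy, using a made-up sine-plus-noise "sample":

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.2 * np.sin(np.linspace(0, 8 * np.pi, 1000))  # quiet "sample"
noise = 0.002 * rng.standard_normal(1000)               # recording noise

def snr_db(sig, noi):
    # Ratio of signal power to noise power, in decibels.
    return 10 * np.log10(np.sum(sig ** 2) / np.sum(noi ** 2))

before = snr_db(signal, noise)

# Normalizing is one constant gain across the whole file,
# so signal and noise are scaled together...
gain = 1.0 / np.max(np.abs(signal + noise))
after = snr_db(gain * signal, gain * noise)
# ...and their ratio is exactly what it was before.
```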

You can use your sampler’s normalize function where needed as you build patches and presets.

Why not to use audio normalization

Normalization isn’t commonly used in many music production situations outside of sampling.

It might seem like a convenient way to bring tracks up to a good volume, but there are several reasons why other methods are a better choice.

1. Normalization is often destructive

DAWs revolutionized music production with their non-destructive workflow.

What does that mean? Think of a strip of reel-to-reel tape—to perform an edit you need to physically slice it with a razor!

After that you can’t just go back to the original. But in your DAW you could simply drag the corners of the region out to restore the file.

Unfortunately there are some operations in the digital domain that are still technically destructive.

Any time you create a new audio file, you commit to the changes you make. Normalization sometimes requires you to create a new version of the file with the gain change applied.

If you do this and discard the original, you’re stuck with the normalized version!

2. Other methods are better

Modern DAWs have many different options for changing the level of audio signals.

Since normalization is a constant gain change, it works the same way as many other types of level adjustments.

Before you reach for the normalize function, make sure there isn’t another method better suited to the job. There are few situations where automation, clip gain or a utility plugin can’t fix your volume issues.

These operations are non-destructive and more flexible than normalizing entire tracks.

3. Normalization can create inter-sample peaks

Many new producers are looking for the easiest way to make their songs loud. It’s a common mistake to think that normalization plays a role.

When it comes to raising the level of an entire track, normalizing is among the worst options. In fact, normalizing an entire track to 0 dB is a recipe for disaster.

The normalize function finds the highest peak in the entire waveform and raises it to the target. With this peak touching the 0 dB maximum, things get unpredictable.

When digital audio gets converted to analog to play through your speakers, the filters that reconstruct the signal smooth out the curve between individual samples in the file.

Sometimes the arc between two points close to the ceiling can exceed the maximum! The result is clipping from inter-sample peaks. It comes out as distracting harshness and distortion in your music.
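Here's a minimal demonstration with NumPy. The FFT zero-padding below is only a rough stand-in for a DAC's reconstruction filter, but it shows the effect: a sine whose samples all sit at or below 0 dBFS after normalizing still reconstructs to a waveform about 3 dB over full scale.

```python
import numpy as np

# 64 samples of a sine at 1/4 the sample rate, phased so every
# sample lands at +/-0.707 and none lands on the true crest.
n = np.arange(64)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

# "Normalize": one constant gain raising the loudest sample to 1.0.
x_norm = x / np.max(np.abs(x))

def upsample(sig, factor):
    # Band-limited interpolation via FFT zero-padding -- a crude
    # stand-in for the smoothing a DAC's reconstruction filter does.
    spectrum = np.fft.rfft(sig)
    padded = np.zeros(len(sig) * factor // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum
    return np.fft.irfft(padded, n=len(sig) * factor) * factor

reconstructed = upsample(x_norm, 8)
overshoot_db = 20 * np.log10(np.max(np.abs(reconstructed)))
# Every stored sample is at or below 0 dBFS, but the curve between
# them peaks about 3 dB over full scale: an inter-sample peak.
```

This is a deliberately worst-case tone, but it shows why a peak meter reading 0 dBFS doesn't guarantee the analog output stays below the ceiling.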

Gain staging—the better choice

Properly controlling the levels inside your DAW is called gain staging.

It means checking the volume of each element you record and making sure not to exceed a healthy level throughout your mix.

The general rule is to keep your tracks’ peaks around -10 dBFS and the body of your waveform at around -18 dBFS.
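As a rough illustration of those two targets with NumPy (the test tone is made up, and a pure sine won't match a real mix's dynamics):

```python
import numpy as np

def peak_dbfs(x):
    # Loudest single sample relative to full scale (1.0).
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    # Average level -- the "body" of the waveform.
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# A test tone peaking at -10 dBFS (10 ** (-10 / 20) is about 0.316).
tone = 10 ** (-10 / 20) * np.sin(np.linspace(0, 20 * np.pi, 2000))
# For a pure sine the RMS sits only 3 dB below the peak (about -13 dBFS);
# real recordings swing far more, which is why peaks near -10 dBFS
# usually put the body of the mix down around -18 dBFS.
```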


Gain staging is most important at your DAW session’s master bus. If you’ve left good headroom throughout your mix, you’ll have plenty of space at the master fader to ensure nothing will clip.

With all this extra room left over, turning your tracks up using the fader or any of the other methods I’ve mentioned should offer plenty of level control without resorting to normalization.

Mastering your mix

If you follow these guidelines for gain staging you might be surprised to hear how quiet your finished bounce seems in comparison to tracks on your streaming platform of choice.

It’s because finished tracks need mastering to get the right levels for consumer playback.

Mastering brings up the overall loudness of a finished mix to exactly the right volume—no inter-sample peaks, no wasted headroom.


Unlike normalization, mastering turns up the volume dynamically so that even quiet passages can be heard clearly.

It takes careful processing and lots of experience to get a good master.

The easiest way to do it right is to hire a professional or try AI-powered mastering online.

Get your levels right

Normalization has its place in music production, but you need to be careful where you use it.

Headroom, gain staging and signal level all influence each other. Understanding that relationship is how you get the most out of your mix and master.

If you’ve made it through this article you’ll have a great starting point for managing your levels and making the right decisions about normalization.