Kim Lajoie's blog

Headroom (and the difference between what we hear and what the equipment hears)

by Kim Lajoie on February 10, 2014

Headroom is not a property of sound – it is a property of the equipment that processes sound.

Headroom is a measurement of how far the peaks of a sound can go above the 0dB reference point before the equipment starts to distort. In digital systems, the headroom is usually exactly 0dB (unless you adjust your meters and gain staging so that your nominal level is lower than 0dBFS). In analogue systems, the headroom usually depends on the quality of the components and the way they have been calibrated. Many analogue systems have a headroom of around 18dB, although this can vary considerably depending on the intended purpose of the equipment.

Here’s the key – audio equipment usually behaves according to the peak level of the sound, but we perceive based on the average level. Sounds with a lower crest factor can be pushed louder than sounds with a high crest factor. Therefore, a lot of the effort in increasing the potential loudness of a sound is focused toward using sounds with a low crest factor and reducing the crest factor of existing sounds.
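To make ‘crest factor’ concrete: it’s the ratio between a sound’s peak level and its RMS (average) level, expressed in dB. Here’s a quick Python sketch (my own illustration, nothing to do with any particular DAW) showing why a square wave can sit louder than a sine wave at the same peak level:

```python
import math

def crest_factor_db(samples):
    """Crest factor: peak level over RMS (average) level, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

n = 1000
# A full-scale sine wave: peak 1.0, RMS 1/sqrt(2)
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
# A full-scale square wave: peak and RMS are both 1.0
square = [1.0 if i < n // 2 else -1.0 for i in range(n)]

print(round(crest_factor_db(sine), 2))    # 3.01
print(round(crest_factor_db(square), 2))  # 0.0
```

Both waves hit the same peak (so the equipment sees them as equally loud), but the square wave’s average level is about 3dB higher – which is roughly how much louder we’d perceive it.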

If you thought that was complex enough, it’s only half the story.

It’s a similar situation with frequency, although it’s much easier to understand. Quite simply, audio equipment treats (or tries to treat) all frequencies as equal. But we don’t perceive all frequencies as equal. Generally, we are more sensitive to upper-mid frequencies (roughly 1kHz – 5kHz). For sounds of equal recorded volume level, sounds that have a greater concentration of energy in the upper midrange will be perceived as louder than sounds with their energy focussed elsewhere.


Energy in music

by Kim Lajoie on January 27, 2014

Energy in music is a relatively intuitive concept to grasp. We can usually identify when a moment in a piece of music has a high energy level or a low energy level. Generally, high energy is a combination of several of these factors:

  • Fast pace (not necessarily tempo!)
  • Dense instrumentation (many instruments playing at once)
  • Instruments playing near the top of their pitch range, or playing across a wide pitch range
  • Louder overall sound
  • Dense or complex rhythms

Obviously, low energy is the opposite – a combination of these factors:

  • Slow pace (not necessarily tempo!)
  • Sparse instrumentation
  • Instruments playing around the bottom or middle of their pitch range
  • Quieter overall sound
  • Sparser and simpler rhythms

Not all these factors need to be present for a moment to have high or low energy, but the more there are, the stronger the effect will be.


The most powerful tool

by Kim Lajoie on January 13, 2014

Gain (volume) is the most important and powerful tool available to the mix engineer. Each audio track is processed through a mixer channel and there are generally two points at the mixer channel where the gain can be adjusted:

  • Input gain – before any effects or other processing is
    applied. Usually this is controlled using ‘input gain’ at the top
    of the mixer channel.
  • Channel fader – after effects or other processing is applied.
    Usually this is controlled using the channel fader at the bottom
    of the mixer channel.

Adjusting the input gain is important because it sets the level of the audio going into the effects chain. When working with analogue equipment, this is crucially important – too much level will result in distorted sound, too little level will result in too much noise. This is somewhat less important when working with digital equipment (especially when mixing entirely within a modern DAW) because they have a much wider dynamic range.

Regardless of whether you’re working with analogue or digital equipment (or a combination of both), it’s important to set the input gain of all channels so that the audio levels going into all the mixer channels are roughly similar. This makes it easier to balance the levels of the channels against each other and makes sure your effects further downstream behave consistently from track to track. The correct audio level depends on the mixer itself. For a lot of analogue mixers, an audio level somewhere between -12dB and 0dB is usually a good place to start. For digital mixers, around -24dB or -18dB is more appropriate. Set the input gain so that the audio is around that level when the channel fader is at its default position (also called ‘unity’) of 0dB.
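The arithmetic behind this is simple enough to sketch in a few lines of Python (illustrative only – the -6dBFS starting point and -18dBFS target here are made-up numbers; your mixer dictates the real target):

```python
import math

def db_to_linear(db):
    """Convert a gain change in dB to a linear multiplier."""
    return 10 ** (db / 20)

def linear_to_db(amplitude):
    """Convert a linear amplitude to dB (relative to full scale)."""
    return 20 * math.log10(amplitude)

# A signal at half amplitude sits about 6dB below full scale
print(round(linear_to_db(0.5), 1))  # -6.0

# Say a channel is peaking around -6dBFS but we want it closer to -18dBFS
current_peak_db = -6.0
target_db = -18.0

gain_change_db = target_db - current_peak_db
gain_multiplier = db_to_linear(gain_change_db)

print(gain_change_db)             # -12.0
print(round(gain_multiplier, 3))  # 0.251
```

In other words, pulling the input gain down 12dB means the signal comes through at about a quarter of its original amplitude.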

The channel fader adjusts the final level of the audio after it has been processed. This is the control that the mix engineer uses to determine which audio tracks will be heard louder than others. While the input gain is largely a technical setting, the channel fader is a much more creative setting. This is where the focus of the mix (and the listener’s ear) is determined. It’s the biggest factor that determines whether a sound is in the foreground or the background. The important thing to understand here is that not everything can be loud, and not everything can be in the foreground.


This is What it Means (part 1)

by Kim Lajoie on January 9, 2014

So, I made this:

This is what it means. This is the raw and honest peek behind the curtain. Eight Obsessive Music artists talk about life, music, hopes and fears. I made this video to remind myself of how it feels to be an original artist. As a Melbourne-based producer and promoter I work with artists from a variety of backgrounds and perspectives on life. Sometimes what makes each one different is surprising. Sometimes it’s what they have in common that’s surprising. For this video series, I asked eight of my favourite artists to talk candidly about music, life, insecurities, excitement and pushing forward through difficult times.

Featured artists in order of appearance:
Jennifer Kingwell
Mel Wilkinson
Jeremy Doolan
Larissa Agosti
Steph Hickey
Mark Joseph
Simon Levick
Brett Cusack

Obsessive Music is at:


Well, that was a pretty big year (musically and personally)

by Kim Lajoie on January 1, 2014

How have you been doing over the last twelve months? Busy? Making music? I hope so.

2013 has been a huge year for me. Probably the biggest.

Progress has (mostly) been pretty steady, so it’s easy to lose sight of how far I’ve come. But then I look back and realise that while I’m focussed on taking each step at a time, I’m actually climbing a mountain. And it’s pretty steep.

Twelve months ago, I was working a 9-5 day job unrelated to music. I was fitting in my music activities on evenings and weekends. I was living in a house that I owned in the suburbs with my girlfriend of four years. I had just started the lease on my new (now current) studio and had begun setting it up for work.

Now, as I sit here writing this, my life looks very different. I’m now working on music pretty much all the time (I quit my day job at the start of 2013). Most of the year I’ve been recording and promoting artists. I’ve also been teaching composition at Monash University on the side. I sold my house in the suburbs and rented near the studio with my girlfriend. And recently broke up with my girlfriend (she wanted to live the suburban dream and I chose music). I’ve also now got a publicist and a second producer on board working for me. I’ve built some great relationships and made inspiring music with my artists.

Of the actual music work I’ve done this year, I’m most proud of (in no particular order) the three videos I produced for Bare Toes Into Soil, the three singles I mastered and promoted for Gosti, the two Community Kitchen compilations I curated and mastered, the ambitious single I produced with Jennifer Kingwell (and the awesome upcoming EP, which we’ve just finished), the several indie EPs I recorded (only one has been released online, the others will be out early next year) and the promotion I did for Iain Archibald‘s regional tour.

And next year’s going to be even bigger.

Let’s see how fast this thing can go.


The two things I do that make almost every artist pleased with my first mix revision

by Kim Lajoie on December 30, 2013

If you’re reading this blog, you probably do some mixing. Chances are, you sometimes mix other people’s music too – whether you recorded it yourself or not. If this is you, you’ve probably experienced the dreaded ‘mix revisions’. You think you’re finished, but then the artist comes back for just one more thing. And another. And another.

It often doesn’t end well. If you pick up the change requests for free, too many revisions will cause you to start harbouring resentment toward the artist. If you charge for each change request, too many revisions will cause the artist to start harbouring resentment toward you. If either of you goes down that road, you’re both going to have a bad time. For the sake of the relationship, you’ve both got a strong interest in getting it right the first time.

These days, almost every artist I work with is happy with the first version of my mix. And the reason for that is nothing to do with plugins or hardware or gear or technology. It’s exactly two things:

1. Talking at length before the mix sessions about musical influences and references.
This is so important. It’s extremely difficult to communicate the complex and subtle musical and sonic aspects of a mix. It’s hard enough when both people are highly experienced and technical. For almost everyone else, it borders on impossible. Add to that the various significant factors that affect the result of the mix, yet are almost entirely impervious to the mix process itself – factors like arrangement, performance and recording. As engineers and producers, we can tell the difference. But not everyone can. And relying on words alone can do more harm than good. Most of the words we use to describe sound are inherently ambiguous at the best of times – even within the engineering profession. Outside it, it can be anyone’s guess what words like warm, sharp, thick or funky even mean.

To combat that, I always ask artists about the music they listen to and the music they’re influenced by. The best artists actually make mixtapes for me. We talk about life and music and sound and emotions and use other artists’ songs as common frames of reference. For many artists I work with, this actually happens before we even record. And it continues throughout the production process. In fact, I almost never work on an artist’s project without them physically with me. Which leads to:

2. Making sure the artist sits next to me while I’m mixing.
Even with the shared reference points, I often disagree with my artists. Most of the time, they’re fairly minor disagreements. But with the artist with me while I work, there’s no chance that I’ll go off on a tangent without being pulled back into line. When I start going in a different direction than the artist intended, we catch it early and have a conversation about it. Sometimes it’s something as simple as hearing her/him explain it and then adjusting. Sometimes it’s a bit more complex and we need to talk about creative direction, emotions and storytelling. We might need to explore different processing approaches. We might even need to try different edits. But by the end of the session, the artist walks away with a mix they’re happy with. Even more importantly, the artist has had a positive experience being listened to and understood.


PS. This is why lately I’ve been turning down opportunities to do mixes for people outside Melbourne. Sorry, but Skype doesn’t cut it. You need to be in the room with me. You need to hear what I’m hearing.

No-one reads a comic strip because it’s drawn well

by Kim Lajoie on December 29, 2013

Seth Godin:

No one goes to a rock concert because the band is in tune. They have to be close enough to not be distracting, but being in tune isn’t the point.

This is something that I think a lot of engineers and producers lose sight of. The production doesn’t have to be technically perfect. Even the performance doesn’t have to be technically perfect. Of course, it has to be good enough that it’s not distracting. But we’re not shooting for technical perfection. We’re shooting for emotional connection.

I’ve often said that technology is best when we don’t notice it’s there. When we can get on with what we’re actually trying to achieve. When we can communicate and connect freely and easily. And that applies to music too. We’re not here to mess around with gear and twiddle knobs and make the waveforms line up perfectly. We’re here to make music. We’re here to tell stories about feelings. The technology exists to serve that goal – no more, no less.


Sometimes it’s better to wait

by Kim Lajoie on December 23, 2013

This post was originally published on Zencha Music.

I recently had an interesting experience recording a song. The artist – who I’d worked with in the past and had seen play live several times – was dragging the song. And not just a little bit. It was dragging a lot. It was a completely different creative direction.

So after recording a couple of takes and listening back together, I pointed it out. And he was genuinely surprised. He hadn’t realised it at all. It was plainly obvious when I played a previous recording of the song – recorded only a few weeks prior. So we got talking about what was happening, and it turned out he had a pretty good idea.

It turned out he’d written the song for a girlfriend from whom he’d recently separated. And while the earlier recording of the song had an optimistic and earnest character, these current takes were much more subdued and bittersweet.

With this knowledge, we tried another take, this time trying to give the song some of its original lift. We got about halfway there, but the artist felt that he couldn’t deliver an authentic performance that came from an inauthentic feeling. And I think that’s true in general.

You can’t deliver an authentic performance from an inauthentic feeling.

If you’re the performer, you have to feel it. If you’re the producer, you have to make sure your performer is feeling it. It’s your job to help them find that feeling and bring it out through the music. And it’s your job to know how close you are to it and to know how to get the performer over the line.

And it’s your job to know when it’s not going to happen that day.

Fortunately the artist in my session wasn’t on a strict timeline. We chose to reconvene at a later date. Sometimes it’s better to prioritise the performance over the schedule.

Sometimes it’s better to wait.


Parallel Compression on the Whole Mix… why?

by Kim Lajoie on December 19, 2013

Well this is interesting:

We use parallel compression on drums. We use it on vocals. We use it on really anything and everything. So why not on the whole mix?


The pros are that you can get a little bit of extra thickness, movement and color in a fairly transparent way.

It looks like a decent list of tips. But, as someone who doesn’t use parallel compression on the mix bus, it doesn’t answer my first question: why? The closest the post gets is the line I quoted above – “a little bit of extra thickness, movement and colour in a fairly transparent way”. That’s pretty vague though. There are so many ways to add thickness, movement and colour to a recording – at every stage from performance, instrument choice, mic technique, level, tone, dynamics, ambience, etc. What does parallel mix bus compression give me that’s different to everything else?

It’s an honest question.

I use parallel compression when I want to blend two versions of the same track with different processing and I want the blend balance to change depending on the dynamics of the recording. I’ve never found myself wanting to do that to a whole mix. When I want the mix to sound different based on its dynamics, it’s usually section-by-section and I can make more effective and focussed changes by working on the arrangement or processing individual tracks.
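For anyone who hasn’t used the technique: parallel compression just means mixing the untouched signal with a heavily compressed copy of itself. A deliberately crude Python sketch (real compressors have attack, release and make-up gain – this only shows the blending idea, with made-up sample values):

```python
def compress(samples, threshold=0.3, ratio=4.0):
    """Crude static compressor: squash anything above the threshold."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_blend(dry, wet_gain=0.5):
    """Sum the dry signal with a compressed copy at a lower level."""
    wet = compress(dry)
    return [d + wet_gain * w for d, w in zip(dry, wet)]

signal = [0.05, -0.2, 0.8, -1.0, 0.1]
blended = parallel_blend(signal)
```

The quiet samples come up a full 50% in level while the loudest peak only grows by about 24% – the low-level detail thickens without the peaks being squashed directly (in practice you’d then trim the output level back down).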

Does anyone use parallel compression on the mix bus? Can you tell me why you use it?



Three EQ techniques that many people use (and why they’re wrong)

by Kim Lajoie on December 16, 2013

This post was originally published on The Pro Audio Files, with a somewhat less inflammatory title.

EQ is a pretty powerful tool. More powerful than almost every other tool in your mixing toolkit (second only to the volume fader). And with great power comes great responsibility.

Also, with great power comes great mistakes.

If you’re reading this, it shouldn’t be any surprise to you that there are plenty of ways people struggle with EQ. Sometimes the sound ends up worse than it started. Sometimes it takes far longer to get the sound than it otherwise should.

And so here are three different techniques people use that can sometimes do more harm than good. I know, I’ve used them too.

High pass everything by default

This is a good one. “High pass everything except kick and bass.” Every time. Cut out unnecessary junk. It’s just rumble and mud down there. Sound familiar?

Well, high passing everything is probably a good idea if you’re mixing a monster track with 60+ channels. If you’ve got that much stuff to jam together, most of the sounds will need to be pretty small.

But if your track needs a bit more life and realism, think before you high pass. Even better, listen before you high pass. Because they’re right – there’s some rumble and mud down there. But there’s also a lot of weight and warmth and vibe down there too.

By all means, don’t be shy about that high pass filter. But realise it’s just a tool. And sometimes it’s not the best tool for the job. Sometimes you need some weight and warmth and vibe – even for tracks that aren’t labelled ‘Kick’ or ‘Bass’.
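If you’re curious what a high pass filter actually does under the hood, here’s a one-pole sketch in Python (illustrative only – this is a gentle 6dB/octave slope, and the filters in your DAW will be steeper and more sophisticated):

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=44100):
    """One-pole high pass filter (roughly 6dB/octave below cutoff)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # Each output sample keeps only the *change* in the input,
        # so steady low-frequency content dies away
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# One second of pure DC offset - the lowest possible 'rumble'
dc = [1.0] * 44100
filtered = high_pass(dc, cutoff_hz=100)
print(round(filtered[-1], 4))  # 0.0 - the filter removes it completely
```

The point being: the filter can’t tell rumble from weight. Anything below the cutoff gets pulled down, vibe and all.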

Cutting your sound to shreds

Cutting is better than boosting, right? Well, sometimes. Do you use the ‘boost and sweep’ method to ‘find bad frequencies’?

Well, here’s the trap – when you’re looking for ‘bad frequencies’, you’ll find them. You’ll find heaps of them. Because when you boost a narrow range and sweep it all over the place, every frequency is going to sound pretty terrible. And then you’ll cut. And cut and cut and cut. How good is it that DAWs and digital mixers have four, five, eight, ten bands of EQ on every channel? Now you can cut all the bad frequencies!

If you know what I’m talking about, you’ll know that your sound very quickly starts to resemble a Worms game. And you can use a lot of different words to describe your sound, but ‘warm’, ‘thick’ and ‘juicy’ won’t be among them.

Don’t cut holes in your sound.

Forgetting to try switching polarity

Here’s one for the recordists out there. Any time you bring out two or more mics at the same time, phase and polarity start getting interesting. Before you reach for EQ, try flipping the polarity on one of the channels (it doesn’t matter which one). Sometimes it’ll get you close to the sound you want faster than any EQ. You can fight and fight and fight with EQ, when a simple polarity flip will do all the work for you.

It won’t always work, but when it does you’ll feel like you just cheated a bit. That’s how easy it is.
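Flipping polarity really is just multiplying one channel by -1. A toy Python sketch (made-up sample values, purely to show the cancellation):

```python
# Two mics on the same source, but one captures the signal inverted
mic_a = [0.5, -0.3, 0.8, -0.6]
mic_b = [-0.5, 0.3, -0.8, 0.6]

# Summed as-is, the two channels cancel out (thin, hollow sound)
summed = [a + b for a, b in zip(mic_a, mic_b)]
print(summed)  # [0.0, 0.0, 0.0, 0.0]

# A polarity flip is just a multiplication by -1...
mic_b_flipped = [-s for s in mic_b]

# ...and now the two channels reinforce each other instead
summed_flipped = [a + b for a, b in zip(mic_a, mic_b_flipped)]
print(summed_flipped)  # [1.0, -0.6, 1.6, -1.2]
```

In real recordings the two mics are never perfect inverted copies of each other, so the cancellation is partial and frequency-dependent – which is exactly why the problem can masquerade as something EQ should fix.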

So don’t stop thinking, and don’t stop listening. Don’t take anything for granted. And don’t forget that the less EQ you use, the more integrity is retained in the sound.

EQ is a powerful tool. So I recommend trying to find ways to use it as little as possible.