Kim Lajoie's blog

Let’s make music together

by Kim Lajoie on June 23, 2014

So, here are some drums.

They’re at 90bpm. Download them and add something cool. Maybe synths, maybe guitars or bass. Maybe weird glitch noises. Doesn’t have to be much. Just one instrument.

Shoot me a link to download your raw track. I’ll mix it with the drums and upload it. Then someone else can add another part.

Could be fun, yeah?

-Kim.

P.S. First person gets to decide the key / chords. :-)

Microshifting

by Kim Lajoie on June 16, 2014

Microshifting is a way of using a pitch shifter to thicken a sound. The pitch shifter is set to shift by a very small amount (usually less than a third of a semitone). Usually the pitch shifter adjusts each side of a stereo sound by a different amount – for example, the left channel might be shifted down by 15 cents and the right channel might be shifted up by 15 cents. Sometimes a very short delay (less than 50ms) is also added to the pitch-shifted signal.

When the stereo pitch-shifted signal is mixed with the original sound, the sound becomes thicker and wider. This is sometimes used on vocals or lead instrumental parts (such as guitars or synths) as a way of making them bigger without using backing harmonies or longer reverb/delays. In a way, it simulates a unison recording (where the same part is played three times and all three takes are layered). Microshifting has a unique sound, however, because the degree of pitch shift and delay is constant, whereas a unison performance will result in constantly-changing pitch and timing differences.
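If you want to experiment with the idea outside a DAW, here’s a minimal Python sketch, assuming librosa is available (the file name, shift amounts and delay time are illustrative, not a recipe):

```python
import numpy as np
import librosa

# Load a mono source (file name is just an example)
dry, sr = librosa.load("vocal.wav", sr=None, mono=True)

# Shift each side by a small, opposite amount.
# n_steps is in semitones, so 15 cents = 0.15.
left = librosa.effects.pitch_shift(dry, sr=sr, n_steps=-0.15)
right = librosa.effects.pitch_shift(dry, sr=sr, n_steps=0.15)

# Optional very short delay (20ms here) on the shifted signal
d = int(0.020 * sr)
left = np.concatenate([np.zeros(d), left])[:len(dry)]
right = np.concatenate([np.zeros(d), right])[:len(dry)]

# Mix the shifted stereo pair underneath the dry signal
wet = 0.5
out = np.stack([dry + wet * left, dry + wet * right])
```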

Microshifting is often used as an alternative to reverb in situations where a sound needs to be more diffuse but without the wash from a reverb tail. Because microshifting has a distinctive sound, it won’t always be appropriate. It’s commonly used in pop music – especially modern energetic pop, which often doesn’t have much reverb. The best way to decide if it’s useful for you is to simply try it.

As a side note, many pitch shifters have a much wider range of control, and also have a feedback feature. This allows them to be used for outrageous special effects.

-Kim.

Pitch Correction Vs Expressive Control

by Kim Lajoie on June 9, 2014

Pitch correction is a funny thing.

Sometimes it can improve a vocal recording. Sometimes it can make it worse. For me, the key to this is in understanding the interplay between pitch and emotion.

For many inexperienced vocalists, pitch correction often improves their recordings. Their poor control of pitch results in performance expression that is inconsistent with the creative direction of the music. In other words, notes sound off-pitch in a bad way. So, pitch correction provides an improvement. It makes the notes sound more like what was intended.

For many experienced vocalists, however, pitch correction is either neutral (and a mild waste of time) or even makes the recording worse. Great vocalists with excellent pitch control will deliberately use pitch deviations in ways that support and enhance the creative direction of the music. In other words, they sing off-pitch deliberately, and it sounds good.

The human voice is not robotic. It’s amazingly fluid and expressive. Quantising to the most-common (i.e. in-tune) pitches makes about as much sense as reducing the dynamic or tonal range of a performance – it might be appropriate for the vocalist or the music, but know that doing so restricts the expressive range of the vocalist’s performance. For vocalists who don’t have the skill to control their performance with sufficient precision, reducing the expressive range of the recorded performance can result in an improvement.

But sufficiently skilled vocalists can make effective use of both extremes of their (pitch, dynamic and tonal) expressive range – the extreme ends of their physical capabilities and the extreme subtleties of small changes.

-Kim.

Saturation – transient sounds vs sustained sounds

by Kim Lajoie on June 2, 2014

Saturation is what happens when audio is turned up too much – so much that the next device in the chain can’t handle it. The result is that the loudest parts of the sound are distorted and the quieter parts of the sound are left unchanged. This dynamic behaviour is similar to a compressor, except it’s much more extreme. Normally audio engineers try to avoid saturation and distortion as much as possible, but in the mix it can be used as a creative effect. The way saturation affects sound depends on the nature of the sound itself.
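To make that dynamic behaviour concrete, here’s a minimal Python sketch of a tanh-style soft clipper (the drive value is arbitrary – this illustrates the general shape, not any particular device’s curve):

```python
import numpy as np

def saturate(x, drive=4.0):
    # tanh is nearly linear for small inputs, so quiet material
    # passes almost unchanged; loud peaks are squashed toward the
    # curve's ceiling. Dividing by drive keeps quiet signals at
    # roughly unity gain.
    return np.tanh(drive * x) / drive

t = np.linspace(0, 2 * np.pi, 100)
quiet = 0.1 * np.sin(t)
hot = 0.9 * np.sin(t)
print(np.abs(saturate(quiet)).max())  # ~0.095 - nearly unchanged
print(np.abs(saturate(hot)).max())    # ~0.25 - peaks heavily reduced
```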

For sounds with strong transients (such as drums and percussion, or other ‘peaky’ sounds), saturation reduces the level of the transient peaks by distorting them. Because the peaks are very short, however, the distortion is sometimes not very noticeable. Instead of sounding like distortion, it sounds like the peaks have become noisier and dirtier. For some kinds of music, this is desirable. The power and impact of the sound is enhanced (even though fidelity suffers).

For sounds with a more steady level (such as organs or strings), saturation is often more noticeable because the sound is constantly being saturated. For these sounds, saturation usually adds brightness and harshness. Used tastefully, this can make a sound more exciting or aggressive. Too much saturation, however, will make the sound lo-fi or distorted.

In a dense mix, it’s usually possible to get away with more saturation because the noise created by the saturation blends in with the background of the mix.

-Kim.

Give yourself an unfair advantage

by Kim Lajoie on May 26, 2014

It’s important to know your strengths and weaknesses. Technically, professionally and personally.

On a technical level, you might consider areas such as musical styles, particular instruments, approaches to production and aesthetic. For example, my strengths include composition, keyboard, guitar, drum programming, mixing and mastering, clean to aggressive aesthetics, etc. My strengths don’t include recording large ensembles (more than 10 players), jazz guitar (though I enjoy listening to it), singing (I prefer to get others to do that), etc.

On a professional level, you might consider areas such as prospecting, client/artist relationships, accounting, project management and strategic planning. For example, my strengths include understanding and empathising with artists, discipline with accounting and administration, and balancing my workload while getting projects done. My strengths don’t include marketing myself (I rely mainly on word of mouth), interior decorating (my studio is more functional than beautiful) and creating cross-industry strategic partnerships. These are things I’m working on.

On a personal level, you might consider areas such as relationships with friends, family and partners, diet and exercise, work/life balance, engagement with non-musical activities and maintenance of your personal living space. For example, my strengths include caring for my personal health, maintaining good relations with my family and keeping my apartment in good condition. My strengths don’t include interior decorating or anything social outside music-related activities (such as gigs). I’m not sure how important that is to me. Probably less than it should be.

Knowing my strengths and weaknesses helps me to make deliberate decisions about how I capitalise on my strengths and how I focus my efforts on improving my weaknesses.

However, knowing your strengths and weaknesses isn’t about avoiding difficult work. Partly, it’s about knowing where you can do your best work. How can you use your strengths to give yourself an unfair advantage? How can you put your best foot forward? If you’re going to push yourself beyond your current capabilities, which direction will put you ahead of the pack?

It’s also about being strategic about managing your weaknesses. Which weaknesses will you ignore because they don’t matter to you? Which weaknesses will you route around or cover up? Which weaknesses will you focus on improving because they’re necessary to your music? Which battles will you fight knowing you have it five times harder than the next person?

You have many paths ahead of you: Which uphill battles will you choose? Where will you give yourself an unfair advantage?

-Kim.

When to use delay instead of reverb

by Kim Lajoie on May 19, 2014

Delay is, in essence, a very simple effect – it delays the audio so that you hear it later. When mixed with the original, you hear two versions of the audio – the original and the delayed version. Delay is often useful when set up on a send, similar to a reverb. Delay can sometimes be used instead of a reverb or in addition to reverb. Delays range from the very simple to very complex, but almost all have these two basic controls:

  • Delay time – This sets the length of time that the audio is delayed. Delay times less than 100ms are short – useful for subtle doubling and thickening of instruments. Delay times between 100ms and 500ms are often heard as discrete echoes and are useful for adding a lush background texture. Delay times longer than 500ms are long – useful for special effects.
  • Feedback – This feeds the delayed signal coming out of the delay back into the delay’s input. This adds more echoes, which makes the delayed sound thicker and causes the sound to take much longer to decay away. It’s somewhat analogous to the reverb time control on reverb processors. (There’s a quick sketch of both controls after this list.)
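Here’s a minimal Python sketch of those two controls working together, assuming a mono floating-point numpy signal (the 350ms time and 50% feedback are just example settings):

```python
import numpy as np

def feedback_delay(x, sr, time_s=0.35, feedback=0.5, mix=0.5):
    d = int(time_s * sr)  # delay time in samples
    wet = np.zeros_like(x)
    for n in range(d, len(x)):
        # Each echo is the input from d samples ago, plus a scaled
        # copy of the previous echo (the feedback path).
        wet[n] = x[n - d] + feedback * wet[n - d]
    # Blend dry and wet, like a send level
    return (1.0 - mix) * x + mix * wet
```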

Delay can sometimes be used as a substitute for reverb when you don’t want to add more diffusion to the mix. If the mix is supposed to be very dry and direct, delays can be a good way of adding depth and space without washing the sound out. Delays can also be useful for adding depth if a mix is already very diffuse (perhaps there’s already plenty of reverb and modulation).

Delays can also be used in addition to reverb. Using a delay→reverb chain (or reverb→delay, there’s no difference) on a send can very easily produce very lush ambience and sonic backdrops. Stereo delays (with a different delay time for left and right) are especially effective here. Use a feedback level of about 50% for extra lushness.

Stereo delays with short delay times (less than 100ms) can be useful for making a sound wider and deeper. For foreground or percussion sounds this can often be distracting, but it works very well for background sustained parts such as synth pads or backing vocals.

-Kim.

Think before you pan

by Kim Lajoie on May 12, 2014

I’ve been thinking a bit about panning and the stereo field lately. I’ve previously dismissed panning as an effective mix tool, yet I use panning in many of my own mixes.

It’s really a question of how we use the stereo field. Panning is one common tool, but it’s far from the only one. I’ve written before about using tools such as chorus, phasers, delay and microshifting to control the stereo field. And of course reverb too.

On reflection, I think there are three reasons to mix wider than mono:

  1. Diffusion. This is about making the sound source less distinct. By using the stereo field to spread a sound away from pure mono, we break down the illusion that the sound is emanating from a single definable location. The reasons to do this are obvious – to make the sound appear bigger or to push it further into the background. Chorus, delay and microshifting are common tools to do this. I also include double tracking and panning in this – common techniques for rhythm guitars and backing vocals.
  2. Creative. This is about using location as a creative tool to surprise or delight the listener. Listen to Vertigo by U2, or anything from Sgt Pepper’s Lonely Hearts Club Band for example. Being only 1-dimensional, the stereo field is quite limited in its opportunities, but it’s available nonetheless. Obviously you should be aware of the environments in which your mix is likely to be played. Some environments are less forgiving of creative panning than others.
  3. Problem solving. This is where people get into trouble by using panning to solve problems such as masking. And this is what I’ve written about in the past. The short version is: I think this is a bad idea. Every mix problem solvable by panning is better solved by other tools or techniques.

Do you agree? How do you use panning?

-Kim.

Mastering for loudness. Don’t do it. Or if you have to, try this…

by Kim Lajoie on May 5, 2014

While mixing is the process of making sure the sounds in a mix are clear and well-balanced, mastering is the process of making sure each song on a release is clear and well-balanced with the other songs on the release.

The tools available to a mastering engineer are similar to those used by a mixing engineer, but are often more subtle and precise. They have to be – they’re used for processing complex audio (the whole mix). Compressors designed for mastering are usually much more gentle; the sound of extreme compression on the whole mix is almost always undesirable. Similarly, EQs designed for mastering are usually a lot more precise; tonal changes to the whole mix usually affect many different individual sounds and can modify the mix balance in complex ways.

Part of the role of a mastering engineer is to make sure the final playback level of the mastered audio is appropriate for the style of music. Acoustic music like classical and folk tends to have a lower level than modern highly-produced music such as rock and dance. For music that can have a lower level, there is greater headroom for peaks; the audio can have a higher crest factor. On the other hand, music that requires a higher level must have lower peaks and a lower crest factor. This means more loudness.
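Crest factor is simply the ratio between the peak level and the average (RMS) level, usually expressed in dB. A quick Python sketch of the measurement:

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB. High = dynamic, peaky audio;
    low = dense, loud audio at the same peak level."""
    peak = np.abs(x).max()
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(peak / rms)

# A pure sine has a crest factor of ~3dB; heavily limited masters
# can sit roughly around 6-8dB, while dynamic acoustic recordings
# are often 15-20dB.
```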

A mastering engineer’s primary tool for increasing loudness is the limiter. Conceptually, this is similar to a compressor with an extremely fast attack and a high ratio. Limiters are often used at the end of the processing chain to ensure that the final audio level never exceeds 0dBFS. In mastering, the limiter’s sole purpose is to reduce the audio’s crest factor while sounding as invisible as possible.
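For illustration only, here’s a naive Python sketch of that concept – a zero-attack peak limiter with a smoothed release. Real mastering limiters use lookahead and far more sophisticated envelopes:

```python
import numpy as np

def limit(x, sr, ceiling=0.98, release_ms=50.0):
    # One-pole release coefficient: how quickly gain recovers
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gain = 1.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        # Gain that would keep this sample exactly at the ceiling
        target = min(1.0, ceiling / max(abs(s), 1e-12))
        if target < gain:
            gain = target  # clamp instantly (zero attack)
        else:
            gain = rel * gain + (1.0 - rel) * target  # recover smoothly
        out[n] = s * gain
    return out
```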

For a good mix of a good composition, the mastering engineer shouldn’t have to apply too much limiting. It certainly shouldn’t be audible.

We start to push the boundaries for audio that has a high crest factor or when the executive producer wants the final audio to be louder than the level normally accepted for the style of music.

For these types of situations, regular mastering limiters can be inadequate. While they’re usually designed to sound as invisible as possible, extreme loudness will require processing that is audible. In these situations, saturation – or even clipping – will be necessary. This often creates a harsh sound as transients are crushed (distorted). Some digital limiters can combine or blend clipping with limiting, to provide greater gain reduction than pure limiters with less harshness than pure clippers.

Because the mixdown contains all the sounds of the mix as a single stereo audio feed, any changes to the audio affect all the sounds that are playing at that time. For example, a spiky snare drum that is crushed in mastering will also cause all the other sounds playing at the same time to be crushed as well – whether they need it or not. This is why this kind of processing in mastering should be a last resort – it’s much better to address these kinds of problems earlier on: in the mix or during composition.

In some situations, multiband limiting is appropriate. This is a crude attempt to contain the audible effects of extreme limiting to a subset of the mix. Using multiband limiting, a spiky snare that requires more limiting than usual won’t result in the bass being simultaneously heavily limited. This approach can sometimes be necessary for addressing problems that would have otherwise been best fixed in the mix.
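As a rough illustration of the idea, here’s a Python sketch of a three-way band split using scipy (the crossover frequencies are arbitrary, and this simple subtractive split is much cruder than a real multiband limiter’s crossover network):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr, lo_xover=200.0, hi_xover=2000.0):
    low = sosfilt(butter(4, lo_xover, 'lowpass', fs=sr, output='sos'), x)
    high = sosfilt(butter(4, hi_xover, 'highpass', fs=sr, output='sos'), x)
    mid = x - low - high  # whatever remains between the crossovers
    return low, mid, high

# Limit each band independently (e.g. with the limit() sketch above),
# then sum. A spiky snare in the mid band now triggers gain reduction
# there without dragging the bass down with it.
```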

-Kim.

What’s your rush of inspiration? [Video]

by Kim Lajoie on May 2, 2014

So, as you probably already know, I’m doing a series of videos with some of the artists I work with. For this video, I asked them about their rush of inspiration. We all get that rush somehow, somewhere. Sometimes it’s in the studio as a mix finally comes together. Sometimes it’s on stage and your audience is feeling what you’re feeling. Sometimes it’s when a song starts to take form.

Anyway, here’s the video:

-Kim.

Acoustic treatment – soundproofing vs absorption

by Kim Lajoie on April 28, 2014

To some people it’s obvious. To many others, it’s a bit more hazy.

Acoustic treatment is not the same as soundproofing.

Not even a little bit. Yet, often I see the terms being used interchangeably. Or one term used when the other’s meaning is intended.

Acoustic treatment is about controlling how sounds behave inside the room. How and where they reflect, which frequencies are absorbed and how effectively, and (if you’re lucky) how the dimensions of the room affect its resonant behaviour. Acoustic treatment is about whether a room sounds lively, echoey, dead, boxy, etc.

Acoustic treatment usually involves controlling the quality of the surfaces – whether they’re hard (reflective) or soft (absorbent) and whether they’re flat (echoey) or curved (diffuse). If you’re lucky, you get to influence the size and shape of the room to control its resonant behaviour.

Soundproofing, on the other hand, is about how much sound gets in or out of the room. It’s about reducing the level of cars or birds or neighbours in your recordings. It’s also about reducing the degree to which your neighbours can hear you.

Soundproofing usually involves making sure all the walls are of a thick and solid construction (i.e. brick or concrete). It also involves stopping all the air gaps where sound can travel in or out of the room.

-Kim.