Kim Lajoie's blog

Click tracks

by Kim Lajoie on September 1, 2014

The debate about click tracks has always raised passionate responses. Are they killing music? Do only really overproduced artists use them? Or are they just like vegetables – really useful, healthy and important but totally bland?

If you’re new to this, a click track is an electronic metronome that helps artists keep time while recording their instruments. The click track lines up with the timing grid in your recording software, so you can see very clearly whether the musicians are playing in time.
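
If you're curious what a click track actually is under the hood, here's a rough sketch of one in Python with numpy (the function name, blip length and decay values are just illustrative choices, not any particular DAW's recipe; it assumes the soundfile package for writing the WAV):

```python
import numpy as np
import soundfile as sf  # assumed available for writing the WAV file

def make_click_track(bpm=120, bars=4, beats_per_bar=4, sr=44100):
    """Generate a bare-bones click: a short 1 kHz blip on every beat."""
    seconds_per_beat = 60.0 / bpm
    out = np.zeros(int(bars * beats_per_bar * seconds_per_beat * sr))

    # A 20 ms sine blip with a fast decay, so it reads as a 'tick'
    blip_len = int(0.02 * sr)
    t = np.arange(blip_len) / sr
    blip = np.sin(2 * np.pi * 1000.0 * t) * np.exp(-t * 200.0)

    for beat in range(bars * beats_per_bar):
        start = int(beat * seconds_per_beat * sr)
        out[start:start + blip_len] += blip[:len(out) - start]
    return out

sf.write("click.wav", make_click_track(bpm=120), 44100)
```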

Recording to a click track can be very helpful if you need to do a lot of post-production editing and overdubbing. If you need to adjust the timing, a click track helps you locate where the notes aren’t aligned with the grid. Click tracks can rescue you from having to spend more time and money on re-recording tracks if you make a lot of mistakes.

Having said that, recording to a click track can seriously kill the mood. Nothing in life, art or music is perfect, so small deviations in tempo shouldn’t really cause you too much grief. If you have good musicians who practice, listen to each other and generally make amazing music together, you won’t be relying on a click track to fix up timing mistakes later on.

-Kim.

Put your sounds into an upside-down triangle.

by Kim Lajoie on August 25, 2014

Think of all the sounds in a mix being contained in a triangle with one point facing the listener.

I usually draw it as an upside-down triangle, with the listener at the bottom.

The louder (closer to the listener) a sound can be, the fewer sounds can fit alongside it. The quieter (further away from the listener) a sound can be, the more sounds can fit alongside it. If all sounds must be equally loud, then they all end up far from the listener.

If one sound is close to the listener and all the other sounds are in the background, the mix can seem stark. If sounds can be spread around the triangle, with a few sounds close by and most sounds further back, the listener will experience an engaging and deep sound stage.

-Kim.

Why I don’t worry about bleed

by Kim Lajoie on August 18, 2014

It’s a fact of recording studio life – bleed happens.

‘Bleed’ is the residual sound picked up by microphones placed around the studio to capture multiple instruments. For example, it happens when a microphone placed next to an acoustic guitar also records sounds from vocalists and other instruments being played close by. Many producers and engineers believe bleed is something to be minimised and removed as much as possible. With pesky bleed in the way, it can be much harder to perform magic tricks like overdubbing and editing later on. Common strategies to reduce bleed include putting up sound barriers between instruments, positioning microphones very close to the instrument, or simply recording instrument tracks one at a time.

These strategies can have some unintended consequences, though. Putting up sound barriers can kill the vibe of a band playing together. Positioning microphones too close to instruments can produce an exaggerated and unnatural sound on playback. And recording your instruments one at a time? Sure, you’ll have no problems with bleed, but if you’re recording a bunch of great musicians you might be killing the vibe unnecessarily. They rehearse and perform together – what do you think will happen when you make them play their parts in isolation, one at a time? If you’re recording musicians who aren’t that great, you probably need to record them separately anyway.

Don’t worry about bleed. Good musicians who know what they’re doing don’t need magic tricks to fix up mistakes. Just play good music and the rest will take care of itself.

-Kim.

How loudness is measured

by Kim Lajoie on August 11, 2014

The meters on your DAW channels or your mic preamps aren’t telling you the whole story.

When sounds are recorded, the microphone captures the continuous vibrations in the air and creates a continuously varying electrical signal that mimics those vibrations. Louder sounds have wider/stronger vibrations. The level meters on your gear usually show you the ‘peak’ level – the strength of the electrical signal created by the microphone, or the strength of the electrical signal that will be turned into air vibrations by your speakers. (Digital meters work much the same way – they just measure the digital signal, which is a numerical representation of an electrical signal.)

The trouble is, the peak level doesn’t exactly represent how loud we perceive the sound.

One of the (several) ways in which our perception differs from ‘reality’ is that we don’t hear extremely short sounds as loudly as longer steady sounds. Some level meters compensate for that (to show us a more accurate representation of how we hear) by slowing down the meter. By making the meter more sluggish, it doesn’t react as strongly to quick changes (short, sharp sounds) but still reacts strongly to longer steady sounds. This is often referred to as the ‘average’ or ‘RMS’ level (RMS stands for ‘Root Mean Square’ – a mathematical way of averaging the signal’s level over time).

The crest factor of a sound is the difference between the peak level and the RMS level. Sounds with a high crest factor typically have a lot of short sharp peaks (e.g. a drum kit). For these sounds, a peak meter would show a high level but an RMS meter would show a much lower level. Sounds with a low crest factor are the opposite – they have fewer or lower peaks, or no peaks at all (e.g. an organ). For these sounds, a peak meter would show a similar level to an RMS meter.
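
If you want to see the difference in numbers, here's a quick sketch in Python with numpy. The 'drum-like' and 'organ-like' test signals are my own illustrative stand-ins, not real recordings:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second of audio

# A 'peaky' signal: one short decaying burst, a bit like a drum hit
drum_like = np.sin(2 * np.pi * 80.0 * t) * np.exp(-t * 30.0)

# A 'steady' signal: a constant-level tone, a bit like an organ note
organ_like = 0.5 * np.sin(2 * np.pi * 220.0 * t)

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

for name, x in [("drum-like", drum_like), ("organ-like", organ_like)]:
    print(f"{name}: peak {peak_db(x):.1f} dB, RMS {rms_db(x):.1f} dB, "
          f"crest factor {peak_db(x) - rms_db(x):.1f} dB")
```

The drum-like burst shows a crest factor of roughly 20 dB, while the steady tone sits around 3 dB – exactly the gap between what a peak meter and an RMS meter would tell you.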

-Kim.

More about Mid/Side EQ

by Kim Lajoie on July 14, 2014

Mid/side processing is a different way of processing two audio channels. Most processors modify a stereo sound by applying the same modification to the right and left channel simultaneously. Some processors can have different settings for the right and left channels. Mid/side processors, however, work on the ‘mid’ and ‘side’ channel instead of left and right.

Two-channel stereo (left/right) audio can be transformed into two-channel mid/side (and back to stereo) without damaging the audio. It’s a completely transparent (and reversible) process.

The mid channel contains all the audio that is common between the left and right channels. This includes mono sounds that are panned centre and the ‘central’ sound in stereo sounds. The side channel contains all the audio that is different between the left and right channels. This generally consists mainly of ambience (either natural room sound or artificial reverb) and any sounds that are hard-panned.

By adjusting the levels of the mid or side channels independently, the stereo width of the audio can be modified in a clean and natural way.
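
Here's a minimal sketch of the transform in Python with numpy. The /2 scaling is one common convention (others use 1/√2); what matters is that the decode matches the encode, so the round trip is lossless:

```python
import numpy as np

def lr_to_ms(left, right):
    """Encode left/right into mid/side."""
    mid = (left + right) / 2.0   # what the two channels share
    side = (left - right) / 2.0  # what differs between them
    return mid, side

def ms_to_lr(mid, side):
    """Decode mid/side back into left/right."""
    return mid + side, mid - side

def stereo_width(left, right, width=1.0):
    """width=0 collapses to mono, width<1 narrows, width>1 widens."""
    mid, side = lr_to_ms(left, right)
    return ms_to_lr(mid, side * width)

# Round trip check: encode then decode returns the original audio exactly
left, right = np.random.randn(1000), np.random.randn(1000)
l2, r2 = ms_to_lr(*lr_to_ms(left, right))
assert np.allclose(left, l2) and np.allclose(right, r2)
```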

Interesting things happen when you start applying EQ adjustments to the mid and side channels independently. This allows the stereo width to be widened or narrowed (or even completely collapsed to mono) in different parts of the frequency spectrum. This is particularly useful for complex stereo audio, such as groups, the mix bus or mastering.

For some practical tips, see this post.

-Kim.

What is sidechain compression?

by Kim Lajoie on June 30, 2014

Sidechain compression is a special variant of regular channel compression. A normal compressor adjusts the output level of the audio based on the input level. Sidechain compression, however, adjusts the output level of the audio based on the level of a different audio channel.

This means that the volume of a channel reacts to the volume of another channel. The audio that the compressor is reacting to is often referred to as the ‘key’ or the ‘sidechain’.
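
To make the mechanics concrete, here's a simplified sketch in Python with numpy (the function names, attack/release times, threshold and ratio are all just illustrative defaults): follow the level of the key signal, and turn the compressed channel down whenever the key rises above a threshold.

```python
import numpy as np

def envelope_follower(key, sr, attack_ms=5.0, release_ms=100.0):
    """Track the level of the key signal, rising quickly and falling slowly."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(key)
    level = 0.0
    for i, sample in enumerate(np.abs(key)):
        coeff = attack if sample > level else release
        level = coeff * level + (1.0 - coeff) * sample
        env[i] = level
    return env

def sidechain_compress(audio, key, sr, threshold=0.1, ratio=4.0):
    """Turn `audio` down whenever `key` (same length) is above the threshold."""
    env = envelope_follower(key, sr)
    gain = np.ones_like(audio)
    over = env > threshold
    # Standard downward compression gain, driven by the key's level
    gain[over] = (threshold / env[over]) ** (1.0 - 1.0 / ratio)
    return audio * gain
```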

There are two common uses for this:

  • Kick drum ducking. This technique uses the kick drum for the sidechain signal. It’s set up so that the compressed channel (usually the bass) is briefly turned down when the kick drum is sounding. It was originally used to make the kick drum bigger – by reducing the level of some other tracks (usually the bassline), the kick punches through the mix with relatively more presence and power. It’s most commonly used to compress the bassline (either bass synth or bass guitar), but is also used to compress synth pads, vocals or even other drum and percussion tracks. It’s become a recognisable and characteristic sound in a lot of electronic dance music.
  • Vocal ducking. This technique uses the main vocal channel as the sidechain audio. It’s set up so that the compressed channel is turned down when the main vocal is sounding. It was originally used in radio broadcast so that the music would be automatically turned down when the announcer or DJ started speaking. It can be useful when mixing a song that contains a prominent foreground part (such as a guitar or vocal harmonies) that should be pushed to the background when the lead vocals come in. Ideally, however, this situation is best avoided by careful composition and arrangement.

In day-to-day mixing, there’s usually not much need to use sidechain compression unless you’re aiming to create a certain effect such as a pumping bassline for a dance song.

-Kim.

Let’s make music together

by Kim Lajoie on June 23, 2014

So, here are some drums.

They’re at 90bpm. Download them and add something cool. Maybe synths, maybe guitars or bass. Maybe weird glitch noises. Doesn’t have to be much. Just one instrument.

Shoot me a link to download your raw track. I’ll mix it with the drums and upload it. Then someone else can add another part.

Could be fun, yeah?

-Kim.

P.S. First person gets to decide the key / chords. :-)

Microshifting

by Kim Lajoie on June 16, 2014

Microshifting is a way of using a pitch shifter to thicken a sound. The pitch shifter is set to shift by a very small amount (usually less than a third of a semitone). Usually the pitch shifter adjusts each side of a stereo sound by a different amount – for example, the left channel might be shifted down by 15 cents and the right channel might be shifted up by 15 cents. Sometimes a very short delay (less than 50ms) is also added to the pitch shifted signal.

When the stereo pitch shifted signal is mixed with the original sound, the sound becomes thicker and wider. This is sometimes used on vocals or lead instrumental parts (such as guitars or synths) as a way of making them bigger without using backing harmonies or longer reverb/delays. In a way, it simulates a unison recording (where the same part is played three times and all three takes are layered). Microshifting has a unique sound, however, because the degree of pitch shift and delay is constant, whereas a unison performance will result in constantly-changing pitch and timing differences.
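
Here's a rough sketch of the idea in Python, using librosa's pitch shifter (assuming you have librosa and numpy installed – any pitch shifter would do, and the cents, delay and wet/dry values are just starting points to tweak by ear):

```python
import numpy as np
import librosa  # assumed available; any pitch shifter would do

def microshift(mono, sr, cents=15.0, delay_ms=12.0, wet=0.5):
    """Thicken a mono signal: pitch-shift copies up and down by a few cents,
    delay them slightly, and pan them to opposite sides of the dry signal."""
    step = cents / 100.0  # librosa's n_steps is in semitones
    up = librosa.effects.pitch_shift(mono, sr=sr, n_steps=step)
    down = librosa.effects.pitch_shift(mono, sr=sr, n_steps=-step)

    # A short pre-delay on the shifted copies (well under 50 ms)
    pad = np.zeros(int(sr * delay_ms / 1000.0))
    up = np.concatenate([pad, up])[:len(mono)]
    down = np.concatenate([pad, down])[:len(mono)]

    left = (1.0 - wet) * mono + wet * down   # left side shifted down
    right = (1.0 - wet) * mono + wet * up    # right side shifted up
    return np.stack([left, right])           # stereo output, shape (2, n)
```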

Microshifting is often used as an alternative to reverb in situations where a sound needs to be more diffuse but without the wash from a reverb tail. Because microshifting has a distinctive sound, it won’t always be appropriate. It’s commonly used in pop music – especially modern energetic pop, which often doesn’t have much reverb. The best way to decide if it’s useful for you is to simply try it.

As a side note, many pitch shifters have a much wider range of control, and also have a feedback feature. This allows them to be used for outrageous special effects.

-Kim.

Pitch Correction Vs Expressive Control

by Kim Lajoie on June 9, 2014

Pitch correction is a funny thing.

Sometimes it can improve a vocal recording. Sometimes it can make it worse. For me, the key to this is in understanding the interplay between pitch and emotion.

For many inexperienced vocalists, pitch correction often improves their recordings. Their poor control of pitch results in performance expression that is inconsistent with the creative direction of the music. In other words, notes sound off-pitch in a bad way. So, pitch correction provides an improvement. It makes the notes sound more like what was intended.

For many experienced vocalists, however, pitch correction is either neutral (and a mild waste of time) or even makes the recording worse. Great vocalists with excellent pitch control will deliberately use pitch deviations in ways that support and enhance the creative direction of the music. In other words, they sing off-pitch deliberately, and it sounds good.

The human voice is not robotic. It’s amazingly fluid and expressive. Quantising to the most common (i.e. in-tune) pitches makes about as much sense as reducing the dynamic or tonal range of a performance – it might be appropriate for the vocalist or the music, but know that doing so restricts the expressive range of the vocalist’s performance. For vocalists who don’t have the skill to control their performance with sufficient precision, reducing the expressive range of the recorded performance can result in an improvement.
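
A toy model makes the trade-off clear. Here's a sketch in Python (the function and its ‘strength’ knob are my own illustration, loosely analogous to the ‘correction amount’ control in real pitch correction tools):

```python
import numpy as np

def correct_pitch(freq_hz, strength=1.0, ref=440.0):
    """Pull a detected frequency toward the nearest equal-tempered note.
    strength=1.0 snaps fully (robotic); lower values keep some deviation."""
    semitones = 12.0 * np.log2(freq_hz / ref)  # distance from A440 in semitones
    nearest = np.round(semitones)              # the closest in-tune pitch
    corrected = semitones + strength * (nearest - semitones)
    return ref * 2.0 ** (corrected / 12.0)

# A note sung about 30 cents sharp of A440:
print(correct_pitch(447.8, strength=1.0))  # ~440.0 Hz, fully corrected
print(correct_pitch(447.8, strength=0.5))  # ~443.9 Hz, half the deviation kept
```

Whatever expressive information lived in that 30-cent deviation is exactly what full-strength correction throws away.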

But sufficiently skilled vocalists can make effective use of both extremes of their (pitch, dynamic and tonal) expressive range – the extreme ends of their physical capabilities and the extreme subtleties of small changes.

-Kim.

Saturation – transient sounds vs sustained sounds

by Kim Lajoie on June 2, 2014

Saturation is what happens when audio is turned up too much – so much that the next device in the chain can’t handle it. The result is that the loudest parts of the sound are distorted and the quieter parts of the sound are left unchanged. This dynamic behaviour is similar to a compressor, except it’s much more extreme. Normally audio engineers try to avoid saturation and distortion as much as possible, but in the mix it can be used as a creative effect. The way saturation affects sound depends on the nature of the sound itself.

For sounds with strong transients (such as drums and percussion, or other ‘peaky’ sounds), saturation reduces the level of the transient peaks by distorting them. Because the peaks are very short, however, the distortion is sometimes not very noticeable. Instead of sounding like distortion, it sounds like the peaks have become noisier and dirtier. For some kinds of music, this is desirable. The power and impact of the sound is enhanced (even though fidelity suffers).

For sounds with a more steady level (such as organs or strings), saturation is often more noticeable because the sound is constantly being saturated. For these sounds, saturation usually adds brightness and harshness. Used tastefully, this can make a sound more exciting or aggressive. Too much saturation, however, will make the sound lo-fi or distorted.
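
If you want to experiment with this, here's a small sketch in Python with numpy, using tanh as a stand-in for a saturating device (one common soft-clipping model; the drive amount and test signals are just illustrative):

```python
import numpy as np

def saturate(x, drive=4.0):
    """Soft-clip saturation: loud parts are squashed and distorted,
    quiet parts pass through almost unchanged."""
    return np.tanh(drive * x) / np.tanh(drive)

def crest(s):
    """Peak level divided by RMS level (linear, not dB)."""
    return np.max(np.abs(s)) / np.sqrt(np.mean(s ** 2))

sr = 44100
t = np.arange(sr) / sr

# Transient material: a short decaying burst, like a drum hit
drum = np.sin(2 * np.pi * 80.0 * t) * np.exp(-t * 30.0)
# Sustained material: a steady tone, like an organ note
organ = 0.8 * np.sin(2 * np.pi * 220.0 * t)

for name, x in [("drum", drum), ("organ", organ)]:
    print(f"{name}: crest factor {crest(x):.1f} -> {crest(saturate(x)):.1f}")
```

The drum's crest factor drops dramatically (the peaks get squashed), while the steady tone spends its whole duration being reshaped, which is why the distortion is so much more audible on sustained sounds.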

In a dense mix, it’s usually possible to get away with more saturation because the noise created by the saturation blends in with the background of the mix.

-Kim.