Kim Lajoie's blog

More about Mid/Side EQ

by Kim Lajoie on July 14, 2014

Mid/side processing is a different way of processing two audio channels. Most processors modify a stereo sound by applying the same modification to the right and left channels simultaneously. Some processors can have different settings for the right and left channels. Mid/side processors, however, work on the ‘mid’ and ‘side’ channels instead of left and right.

Two-channel stereo (left/right) audio can be transformed into two-channel mid/side (and back to stereo) without damaging the audio. It’s a completely transparent (and reversible) process.

The mid channel contains all the audio that is common between the left and right channels. This includes mono sounds that are panned centre and the ‘central’ sound in stereo sounds. The side channel contains all the audio that is different between the left and right channels. This generally consists mainly of ambience (either natural room sound or artificial reverb) and any sounds that are hard-panned.

By adjusting the levels of the mid or side channels independently, the stereo width of the audio can be modified in a clean and natural way.
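
As a rough illustration of the maths behind this, here’s a minimal sketch in Python (using NumPy; the audio buffers and the 3dB width boost are just placeholder values) showing the encode, a simple width adjustment and the decode:

```python
import numpy as np

def stereo_to_mid_side(left, right):
    """Encode left/right into mid/side (the halving keeps levels consistent)."""
    mid = (left + right) * 0.5
    side = (left - right) * 0.5
    return mid, side

def mid_side_to_stereo(mid, side):
    """Decode mid/side back to left/right -- the exact inverse of the encode."""
    return mid + side, mid - side

# Example: widen the stereo image by raising the side channel 3dB.
left = np.random.randn(44100)    # placeholder audio; substitute real buffers
right = np.random.randn(44100)

mid, side = stereo_to_mid_side(left, right)
side *= 10 ** (3.0 / 20.0)       # +3dB on the side channel = wider image
wide_left, wide_right = mid_side_to_stereo(mid, side)
```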

Interesting things happen when you start applying EQ adjustments to the mid and side channels independently. This allows the stereo width to be widened or narrowed (or even completely collapsed to mono) in different parts of the frequency spectrum. This is particularly useful for complex stereo audio, such as groups, the mix bus or mastering.

For some practical tips, see this post.

-Kim.

What is sidechain compression?

by Kim Lajoie on June 30, 2014

Sidechain compression is a special variant of regular channel compression. A normal compressor adjusts the output level of the audio based on the input level. Sidechain compression, however, adjusts the output level of the audio based on the level of a different audio channel.

This means that the volume of a channel reacts to the volume of another channel. The audio that the compressor is reacting to is often referred to as the ‘key’ or the ‘sidechain’.
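
To make the idea concrete, here’s a minimal sketch in Python (NumPy only; the threshold, ratio and timing values are purely illustrative) of a ducking-style sidechain compressor: the gain reduction is computed from the key signal’s envelope and applied to the main channel.

```python
import numpy as np

def sidechain_compress(audio, key, threshold_db=-20.0, ratio=4.0,
                       attack_ms=5.0, release_ms=100.0, sr=44100):
    """Duck `audio` based on the level of `key` (both mono float arrays)."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))

    env = 0.0
    gain = np.ones(len(audio))
    for i, k in enumerate(np.abs(key)):
        # One-pole envelope follower on the key (sidechain) signal.
        coeff = attack if k > env else release
        env = coeff * env + (1.0 - coeff) * k

        level_db = 20.0 * np.log10(max(env, 1e-9))
        if level_db > threshold_db:
            # Standard downward compression curve, applied to the *other* channel.
            over_db = level_db - threshold_db
            gain[i] = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)

    return audio * gain

# Hypothetical kick-ducking setup: the bass is compressed, the kick is the key.
# ducked_bass = sidechain_compress(bass, kick, threshold_db=-30.0, ratio=8.0)
```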

There are two common uses for this:

  • Kick drum ducking. This technique uses the kick drum for the sidechain signal. It’s set up so that the compressed channel (usually the bass) is briefly turned down when the kick drum is sounding. It was originally used to make the kick drum bigger – by reducing the level of some other tracks (usually the bassline), the kick punches through the mix with relatively more presence and power. It’s most commonly used to compress the bassline (either bass synth or bass guitar), but is also used to compress synth pads, vocals or even other drum and percussion tracks. It’s become a recognisable and characteristic sound in a lot of electronic dance music.
  • Vocal ducking. This technique uses the main vocal channel as the sidechain audio. It’s set up so that the compressed channel is turned down when the main vocal is sounding. It was originally used in radio broadcast so that the music would be automatically turned down when the announcer or DJ started speaking. It can be useful when mixing a song that contains a prominent foreground part (such as a guitar or vocal harmonies) that should be pushed to the background when the lead vocals come in. Ideally, however, this situation is best avoided by careful composition and arrangement.

In day-to-day mixing, there’s usually not much need to use sidechain compression unless you’re aiming to create a certain effect such as a pumping bassline for a dance song.

-Kim.

Let’s make music together

by Kim Lajoie on June 23, 2014

So, here are some drums.

They’re at 90bpm. Download them and add something cool. Maybe synths, maybe guitars or bass. Maybe weird glitch noises. Doesn’t have to be much. Just one instrument.

Shoot me a link to download your raw track. I’ll mix it with the drums and upload it. Then someone else can add another part.

Could be fun, yeah?

-Kim.

P.S. First person gets to decide the key / chords. :-)

Microshifting

by Kim Lajoie on June 16, 2014

Microshifting is a way of using a pitch shifter to thicken a sound. The pitch shifter is set to shift by a very small amount (usually less than a third of a semitone). Usually the pitch shifter adjusts each side of a stereo sound by a different amount – for example, the left channel might be shifted down by 15 cents and the right channel might be shifted up by 15 cents. Sometimes a very short delay (less than 50ms) is also added to the pitch shifted signal.
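
For a rough idea of how this could be wired up, here’s a minimal sketch in Python (NumPy; the ±15 cent shifts, the 10ms delay and the naive resampling approach are just illustrative – a real pitch shifter works differently and preserves duration):

```python
import numpy as np

def micro_shift(mono, cents, delay_ms=10.0, sr=44100):
    """Shift pitch by a few cents via naive resampling, then add a short delay.
    (Resampling also changes the clip length slightly; at +/-15 cents the
    difference is negligible for a sketch like this.)"""
    ratio = 2.0 ** (cents / 1200.0)                   # playback-rate ratio for N cents
    positions = np.arange(0.0, len(mono) - 1, ratio)  # read the buffer faster or slower
    shifted = np.interp(positions, np.arange(len(mono)), mono)
    pre_delay = np.zeros(int(sr * delay_ms / 1000.0))
    return np.concatenate([pre_delay, shifted])

def microshift_widener(mono, amount=0.5, sr=44100):
    """Dry signal in the middle, +/-15 cent shifted copies hard left and right."""
    up = micro_shift(mono, +15.0, sr=sr)
    down = micro_shift(mono, -15.0, sr=sr)
    n = min(len(mono), len(up), len(down))
    left = mono[:n] + amount * down[:n]
    right = mono[:n] + amount * up[:n]
    return left, right

# left, right = microshift_widener(lead_vocal)   # lead_vocal: mono float array
```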

When the stereo pitch shifted signal is mixed with the original sound, the sound becomes thicker and wider. This is sometimes used on vocals or lead instrumental parts (such as guitars or synths) as a way of making them bigger without using backing harmonies or longer reverb/delays. In a way, it simulates a unison recording (where the same part is played three times and all three takes are layered). Microshifting has a unique sound, however, because the degree of pitch shift and delay is constant, whereas a unison performance will result in constantly-changing pitch and timing differences.

Microshifting is often used as an alternative to reverb in situations where a sound needs to be more diffuse but without the wash from a reverb tail. Because microshifting has a distinctive sound, it won’t always be appropriate. It’s commonly used in pop music – especially modern energetic pop, which often does not have much reverb. The best way to decide if it’s useful for you is to simply try it.

As a side note, many pitch shifters have a much wider range of control, and also have a feedback feature. This allows them to be used for outrageous special effects.

-Kim.

Pitch Correction Vs Expressive Control

by Kim Lajoie on June 9, 2014

Pitch correction is a funny thing.

Sometimes it can improve a vocal recording. Sometimes it can make it worse. For me, the key to this is in understanding the interplay between pitch and emotion.

For many inexperienced vocalists, pitch correction often improves their recordings. Their poor control of pitch results in performance expression that is inconsistent with the creative direction of the music. In other words, notes sound off-pitch in a bad way. So, pitch correction provides an improvement. It makes the notes sound more like what was intended.

For many experienced vocalists, however, pitch correction is either neutral (and a mild waste of time) or even makes the recording worse. Great vocalists with excellent pitch control will deliberately use pitch deviations in ways that support and enhance the creative direction of the music. In other words, they sing off-pitch deliberately, and it sounds good.

The human voice is not robotic. It’s amazingly fluid and expressive. Quantising to the most-common (i.e. in-tune) pitches makes about as much sense as reducing the dynamic or tonal range of a performance – it might be appropriate for the vocalist or the music, but know that doing so restricts the expressive range of the vocalist’s performance. For vocalists that don’t have the skill to control their performance with sufficient precision, reducing the expressive range of the recorded performance can result in an improvement.
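
To illustrate what ‘quantising to in-tune pitches’ means numerically, here’s a tiny hypothetical sketch in Python (assuming equal temperament and A4 = 440Hz; real pitch correctors work on the audio itself, not just a detected frequency, and usually offer a correction strength rather than a hard snap):

```python
import math

def quantise_pitch(freq_hz, strength=1.0, a4=440.0):
    """Pull a detected frequency toward the nearest equal-tempered note.
    strength=1.0 snaps hard (fully 'in tune'); lower values keep more of
    the performer's original deviation."""
    midi = 69.0 + 12.0 * math.log2(freq_hz / a4)      # continuous MIDI pitch
    nearest = round(midi)                              # nearest semitone
    corrected = midi + strength * (nearest - midi)     # blend toward the grid
    return a4 * 2.0 ** ((corrected - 69.0) / 12.0)     # back to Hz

# A note sung about 30 cents sharp of A4:
print(quantise_pitch(447.7))        # snaps to ~440 Hz
print(quantise_pitch(447.7, 0.5))   # half-corrected, ~443.8 Hz
```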

But sufficiently skilled vocalists can make effective use of both kinds of extremes of their (pitch, dynamic and tonal) expressive range – the extreme ends of their physical capabilities and the extreme subtleties of small changes.

-Kim.

Saturation – transient sounds vs sustained sounds

by Kim Lajoie on June 2, 2014

Saturation is what happens when audio is turned up too much – so much that the next device in the chain can’t handle it. The result is that the loudest parts of the sound are distorted and the quieter parts of the sound are left unchanged. This dynamic behaviour is similar to a compressor, except it’s much more extreme. Normally audio engineers try to avoid saturation and distortion as much as possible, but in the mix it can be used as a creative effect. The way saturation affects sound depends on the nature of the sound itself.
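
As a rough illustration, here’s a minimal soft-clipping sketch in Python (NumPy; the tanh curve and the drive value are just one arbitrary choice of saturation shape). Quiet samples come out almost unchanged, while the loudest peaks are progressively squashed:

```python
import numpy as np

def saturate(audio, drive=4.0):
    """Push the signal `drive` times harder into a tanh curve, then scale back.
    Quiet samples pass through nearly unchanged; the loudest peaks are squashed."""
    return np.tanh(audio * drive) / drive

# A peaky transient is flattened far more than the quiet material around it:
snare_hit = np.array([0.05, 0.9, 0.6, 0.2, 0.05])
print(saturate(snare_hit))   # the 0.9 and 0.6 peaks end up at roughly the same level
```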

For sounds with strong transients (such as drums and percussion, or other ‘peaky’ sounds), saturation reduces the level of the transient peaks by distorting them. Because the peaks are very short, however, the distortion is sometimes not very noticeable. Instead of sounding like distortion, it sounds like the peaks have become noisier and dirtier. For some kinds of music, this is desirable. The power and impact of the sound is enhanced (even though fidelity suffers).

For sounds with a more steady level (such as organs or strings), saturation is often more noticeable because the sound is constantly being saturated. For these sounds, saturation usually adds brightness and harshness. Used tastefully, this can make a sound more exciting or aggressive. Too much saturation, however, will make the sound lo-fi or distorted.

In a dense mix, it’s usually possible to get away with more saturation because the noise created by the saturation blends in with the background of the mix.

-Kim.

Give yourself an unfair advantage

by Kim Lajoie on May 26, 2014

It’s important to know your strengths and weaknesses. Technically, professionally and personally.

On a technical level, you might consider areas such as musical styles, particular instruments, approaches to production and aesthetic. For example, my strengths include composition, keyboard, guitar, drum programming, mixing and mastering, clean to aggressive aesthetics, etc. My strengths don’t include recording large (>10) ensembles, jazz guitar (though I enjoy listening to it), singing (I prefer to get others to do that), etc.

On a professional level, you might consider areas such as prospecting, client/artist relationships, accounting, project management and strategic planning. For example, my strengths include understanding and empathising with artists, discipline with accounting and administration and balancing my workload while getting projects done. My strengths don’t include marketing myself (I rely mainly on word of mouth), interior decorating (my studio is more functional than beautiful) and creating cross-industry strategic partnerships. These are things I’m working on.

On a personal level, you might consider areas such as relationships with friends, family and partners, diet and exercise, work/life balance, engagement with non-musical activities and maintenance of your personal living space. For example, my strengths include caring for my personal health, maintaining good relations with my family and keeping my apartment in good condition. My strengths don’t include interior decorating or anything social outside music-related activities (such as gigs). I’m not sure how important that is to me. Probably less than it should be.

Knowing my strengths and weaknesses helps me to make deliberate decisions about how I capitalise on my strengths and how I focus my efforts on improving my weaknesses.

However, knowing your strengths and weaknesses isn’t about avoiding difficult work. Partly, it’s about knowing where you can do your best work. How can you use your strengths to give yourself an unfair advantage? How can you put your best foot forward? If you’re going to push yourself beyond your current capabilities, which direction will put you ahead of the pack?

It’s also about being strategic about managing your weaknesses. Which weaknesses will you ignore because they don’t matter to you? Which weaknesses will you route around or cover up? Which weaknesses will you focus on improving because they’re necessary to your music? Which battles will you fight knowing you have it five times harder than the next person?

You have many paths ahead of you: Which uphill battles will you choose? Where will you give yourself an unfair advantage?

-Kim.

When to use delay instead of reverb

by Kim Lajoie on May 19, 2014

Delay is, in essence, a very simple effect – it delays the audio so that you hear it later. When mixed with the original, you hear two versions of the audio – the original and the delayed version. Delay is often useful when set up on a send, similar to a reverb. Delay can sometimes be used instead of a reverb or in addition to reverb. Delays range from the very simple to very complex, but almost all have these two basic controls:

  • Delay time – This sets the length of time that the audio is delayed. Delay times less than 100ms are short – useful for subtle doubling and thickening of instruments. Delay times between 100ms and 500ms are often heard as discrete echoes and are useful for adding a lush background texture. Delay times longer than 500ms are long – useful for special effects.
  • Feedback – This feeds the delayed signal coming out of the delay back into the delay’s input. This adds more echoes, which makes the delayed sound thicker and causes the sound to take much longer to decay away. It’s somewhat analogous to the reverb time control on reverb processors.

Delay can sometimes be used as a substitute for reverb when you don’t want to add more diffusion to the mix. If the mix is supposed to be very dry and direct, delays can be a good way of adding depth and space without washing the sound out. Delays can also be useful for adding depth if a mix is already very diffuse (perhaps there’s already plenty of reverb and modulation).

Delays can also be used in addition to reverb. Using a delay→reverb chain (or reverb→delay, there’s no difference) on a send can very easily produce very lush ambience and sonic backdrops. Stereo delays (with a different delay time for left and right) are especially effective here. Use a feedback level of about 50% for extra lushness.
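
Here’s a minimal sketch in Python (NumPy) of just the stereo delay part of such a chain – the delay times, feedback and send level are example values only:

```python
import numpy as np

def feedback_delay(mono, delay_ms, feedback=0.5, sr=44100):
    """Wet-only feedback delay: y[n] = x[n - d] + feedback * y[n - d]."""
    d = max(1, int(sr * delay_ms / 1000.0))
    x = np.concatenate([mono, np.zeros(d * 10)])   # room for the echoes to decay
    y = np.zeros_like(x)
    for n in range(d, len(x)):
        y[n] = x[n - d] + feedback * y[n - d]
    return y

def stereo_delay_send(mono, left_ms=310.0, right_ms=430.0,
                      feedback=0.5, send_level=0.3, sr=44100):
    """Different delay times left and right, mixed back under the dry signal."""
    wet_l = feedback_delay(mono, left_ms, feedback, sr)
    wet_r = feedback_delay(mono, right_ms, feedback, sr)
    n = max(len(wet_l), len(wet_r))
    dry = np.pad(mono, (0, n - len(mono)))
    left = dry + send_level * np.pad(wet_l, (0, n - len(wet_l)))
    right = dry + send_level * np.pad(wet_r, (0, n - len(wet_r)))
    return left, right
```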

Stereo delays with short delay times (less than 100ms) can be useful for making a sound wider and deeper. For foreground or percussion sounds this can often be distracting, but it works very well for background sustained parts such as synth pads or backing vocals.

-Kim.

Think before you pan

by Kim Lajoie on May 12, 2014

I’ve been thinking a bit about panning and stereo field lately. I’ve previously dismissed panning as an effective mix tool, yet I myself use panning for many mixes.

It’s really a question of how we use the stereo field. Panning is one common tool, but it’s far from the only one. I’ve written before about using tools such as chorus, phasers, delay and micro shifting to control the stereo field. And of course reverb too.

On reflection, I think there are three reasons to mix wider than mono:

  1. Diffusion. This is about making the sound source less distinct. By using the stereo field to spread a sound away from pure mono, we break down the illusion that the sound is emanating from a single definable location. The reasons to do this are obvious – to make the sound appear bigger or to push it further into the background. Chorus, delay and micro shifting are common tools to do this. I also include double tracking and panning in this – common techniques for rhythm guitars and backing vocals.
  2. Creative. This is about using location as a creative tool to surprise or delight the listener. Listen to Vertigo by U2, or anything from Sgt Pepper’s Lonely Hearts Club Band for example. Being only 1-dimensional, the stereo field is quite limited in its opportunities, but it’s available nonetheless. Obviously you should be aware of the environments in which your mix is likely to be played. Some environments are less forgiving of creative panning than others.
  3. Problem solving. This is where people get into trouble by using panning to solve problems such as masking. And this is what I’ve written about in the past. The short version is: I think this is a bad idea. Every mix problem solvable by panning is better solved by other tools or techniques.

Do you agree? How do you use panning?

-Kim.

Mastering for loudness. Don’t do it. Or if you have to, try this…

by Kim Lajoie on May 5, 2014

While mixing is the process of making sure the sounds in a mix are clear and well-balanced, mastering is the process of making sure each song on a release is clear and well-balanced with the other songs on the release.

The tools available to a mastering engineer are similar to those used by a mixing engineer, but are often more subtle and precise. They have to be – they’re used for processing complex audio (the whole mix). Compressors designed for mastering are usually much more gentle; the sound of extreme compression on the whole mix is almost always undesirable. Similarly, EQ designed for mastering is usually a lot more precise; tonal changes to the whole mix usually affect many different individual sounds and can modify the mix balance in complex ways.

Part of the role of a mastering engineer is to make sure the final playback level of the mastered audio is appropriate for the style of music. Acoustic music like classical and folk tends to have a lower level than modern highly-produced music such as rock and dance. For music that can have a lower level, there is greater headroom for peaks; the audio can have a higher crest factor. On the other hand, music that requires a higher level must have lower peaks and a lower crest factor. This means more loudness.
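
Crest factor is simply the ratio of peak level to RMS (average) level, usually expressed in dB. A quick measurement sketch in Python (NumPy; the test signals are just for illustration):

```python
import numpy as np

def crest_factor_db(audio):
    """Crest factor: peak level relative to RMS level, in dB.
    Dynamic, 'peaky' material has a high crest factor; heavily
    limited material has a low one."""
    peak = np.max(np.abs(audio))
    rms = np.sqrt(np.mean(audio ** 2))
    return 20.0 * np.log10(peak / rms)

# A sine wave has a crest factor of ~3dB; a clipped, square-ish wave is lower.
t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 100 * t)
print(crest_factor_db(sine))                       # ~3.01 dB
print(crest_factor_db(np.clip(3 * sine, -1, 1)))   # well under 1 dB
```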

A mastering engineer’s primary tool for increasing loudness is the limiter. Conceptually, this is similar to a compressor with an extremely fast attack and a high ratio. Limiters are often used as the last stage in the processing chain to ensure that the final audio level never exceeds 0dBFS. In mastering, the limiter’s sole purpose is to reduce the audio’s crest factor while sounding as invisible as possible.
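
To show the concept (not any particular product’s implementation), here’s a very crude peak limiter sketch in Python (NumPy; instant attack, simple exponential release, and the ceiling and release values are placeholders – real mastering limiters use lookahead and much more careful gain smoothing):

```python
import numpy as np

def limit(audio, ceiling=0.98, release_ms=50.0, sr=44100):
    """Crude peak limiter: instant attack, exponential release.
    Gain is reduced just enough to keep the output under `ceiling`."""
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(audio)
    for i, x in enumerate(audio):
        peak = abs(x)
        env = peak if peak > env else release * env + (1.0 - release) * peak
        gain = min(1.0, ceiling / max(env, 1e-9))
        out[i] = x * gain
    return out

# Make-up gain before the limiter trades crest factor for loudness:
# louder = limit(mixdown * 10 ** (6.0 / 20.0))   # push 6dB into the limiter
```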

For a good mix of a good composition, the mastering engineer shouldn’t have to apply too much limiting. It certainly shouldn’t be audible.

We start to push the boundaries for audio that has a high crest factor or when the executive producer wants the final audio to be louder than the level normally accepted for the style of music.

For these types of situations, regular mastering limiters can be inadequate. While they’re usually designed to sound as invisible as possible, extreme loudness will require processing that is audible. In these cases, saturation – or even clipping – will be necessary. This often creates a harsh sound as transients are crushed (distorted). Some digital limiters can combine or blend clipping with limiting, to provide greater gain reduction than pure limiters with less harshness than pure clippers.

Because the mixdown contains all the sounds of the mix as a single stereo audio feed, any changes to the audio affect all the sounds that are playing at that time. For example, a spiky snare drum that is crushed in mastering will also cause all the other sounds playing at the same time to be crushed as well – whether they need it or not. This is why this kind of processing in mastering should be a last resort – it’s much better to address these kinds of problems earlier on: in the mix or during composition.

In some situations, multiband limiting is appropriate. This is a crude attempt to contain the audible effects of extreme limiting to a subset of the mix. Using multiband limiting, a spiky snare that requires more limiting than usual won’t result in the bass being simultaneously heavily limited. This approach can sometimes be necessary for addressing problems that would have otherwise been best fixed in the mix.

-Kim.