Kim Lajoie's blog

When to use delay instead of reverb

by Kim Lajoie on May 19, 2014

Delay is, in essence, a very simple effect – it delays the audio so that you hear it later. When mixed with the original, you hear two versions of the audio – the original and the delayed version. Delay is often useful when set up on a send, similar to a reverb. It can sometimes be used instead of a reverb or in addition to reverb. Delays range from the very simple to the very complex, but almost all have these two basic controls:

  • Delay time – This sets the length of time that the audio is delayed. Delay times less than 100ms are short – useful for subtle doubling and thickening of instruments. Delay times between 100ms and 500ms are often heard as discrete echoes and are useful in adding a lush background texture. Delay times longer than 500ms are long – useful for special effects.
  • Feedback – This feeds the delayed signal coming out of the delay back into the delay’s input. This adds more echoes, which makes the delayed sound thicker and causes the sound to take much longer to decay away. It’s somewhat analogous to the reverb time control on reverb processors. Both controls are sketched in code below.
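
To make these two controls concrete, here’s a minimal sketch of a feedback delay in Python/NumPy. The function and parameter names (feedback_delay, delay_ms, mix) are my own, for illustration only – they don’t refer to any particular plugin.

```python
import numpy as np

def feedback_delay(x, sr, delay_ms=300.0, feedback=0.5, mix=0.5):
    """Minimal feedback delay sketch. x is a mono float array, sr the
    sample rate. delay_ms and feedback are the two controls described
    above; mix sets the wet/dry balance. All names are illustrative."""
    delay_samples = max(1, int(sr * delay_ms / 1000.0))
    tail = delay_samples * 8               # room for the echoes to die away
    y = np.zeros(len(x) + tail)
    buf = np.zeros(delay_samples)          # circular delay buffer
    idx = 0
    for n in range(len(y)):
        dry = x[n] if n < len(x) else 0.0
        delayed = buf[idx]                   # audio from delay_ms ago
        buf[idx] = dry + delayed * feedback  # feedback: echoes of echoes
        idx = (idx + 1) % delay_samples
        y[n] = dry * (1.0 - mix) + delayed * mix
    return y
```

For the stereo delays mentioned further down, you’d run two instances of this with a different delay_ms for the left and right channels.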

Delay can sometimes be used as a substitute for reverb when you don’t want to add more diffusion to the mix. If the mix is supposed to be very dry and direct, delays can be a good way of adding depth and space without washing the sound out. Delays can also be useful for adding depth if a mix is already very diffuse (perhaps there’s already plenty of reverb and modulation).

Delays can also be used in addition to reverb. Using a delay-into-reverb chain (or reverb-into-delay – there’s no difference) on a send can very easily produce very lush ambience and sonic backdrops. Stereo delays (with a different delay time for left and right) are especially effective here. Use a feedback level of about 50% for extra lushness.

Stereo delays with short delay times (less than 100ms) can be useful for making a sound wider and deeper. For foreground or percussion sounds this can often be distracting, but it works very well for background sustained parts such as synth pads or backing vocals.
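
As a rough illustration of that widening trick, reusing the feedback_delay sketch above (the delay times are arbitrary – the point is only that the two sides differ):

```python
import numpy as np

def stereo_widen(x, sr, left_ms=11.0, right_ms=17.0):
    """Widen a mono source with two short delays, one per side.
    Illustrative values only; no feedback, subtle wet mix."""
    left = feedback_delay(x, sr, delay_ms=left_ms, feedback=0.0, mix=0.3)
    right = feedback_delay(x, sr, delay_ms=right_ms, feedback=0.0, mix=0.3)
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])         # (2, n) stereo pair
```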

-Kim.

Think before you pan

by Kim Lajoie on May 12, 2014

I’ve been thinking a bit about panning and the stereo field lately. I’ve previously dismissed panning as an effective mix tool, yet I use panning myself in many mixes.

It’s really a question of how we use the stereo field. Panning is one common tool, but it’s far from the only one. I’ve written before about using tools such as chorus, phasers, delay and micro shifting to control the stereo field. And of course reverb too.

On reflection, I think there are three reasons to mix wider than mono:

  1. Diffusion. This is about making the sound source less distinct. By using the stereo field to spread a sound away from pure mono, we break down the illusion that the sound is emanating from a single definable location. The reasons to do this are obvious – to make the sound appear bigger or to push it further in the background. Chorus, delay and micro shifting are common tools to do this. I also include double tracking and panning in this – common techniques for rhythm guitars and backing vocals.
  2. Creative. This is about using location as a creative tool to surprise or delight the listener. Listen to Vertigo by U2, or anything from Sgt Pepper’s Lonely Hearts Club Band for example. Being only 1-dimensional, the stereo field is quite limited in its opportunities, but it’s available nonetheless. Obviously you should be aware of the environments in which your mix is likely to be played. Some environments are less forgiving of creative panning than others.
  3. Problem solving. This is where people get into trouble by using panning to solve problems such as masking. And this is what I’ve written about in the past. The short version is: I think this is a bad idea. Every mix problem solvable by panning is better solved by other tools or techniques.

Do you agree? How do you use panning?

-Kim.

Mastering for loudness. Don’t do it. Or if you have to, try this…

by Kim Lajoie on May 5, 2014

While mixing is the process of making sure the sounds in a mix are clear and well-balanced, mastering is the process of making sure each song on a release is clear and well-balanced with the other songs on the release.

The tools available to a mastering engineer are similar to those used by a mixing engineer, but are often more subtle and precise. They have to be – they’re used for processing complex audio (the whole mix). Compressors designed for mastering are usually much more gentle; the sound of extreme compression on the whole mix is almost always undesirable. Similarly, EQs designed for mastering are usually a lot more precise; tonal changes to the whole mix usually affect many different individual sounds and can modify the mix balance in complex ways.

Part of the role of a mastering engineer is to make sure the final playback level of the mastered audio is appropriate for the style of music. Acoustic music like classical and folk tends to have a lower level than modern highly-produced music such as rock and dance. For music that can have a lower level, there is greater headroom for peaks; the audio can have a higher crest factor. On the other hand, music that requires a higher level must have lower peaks and a lower crest factor. This means more loudness.

A mastering engineer’s primary tool for increasing loudness is the limiter. Conceptually, this is similar to a compressor with an extremely fast attack and a high ratio. Limiters are often used at the last stage in the processing chain to ensure that the final audio level never exceeds 0dBFS. In mastering, the limiter’s sole purpose is to reduce the audio’s crest factor while sounding as invisible as possible.
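
To make ‘crest factor’ and the limiter’s job concrete, here’s a crude sketch in Python/NumPy. It only illustrates the concept – real mastering limiters use lookahead and far more sophisticated envelopes – and all names and values are mine, not any product’s.

```python
import numpy as np

def crest_factor_db(x):
    """Crest factor: peak level relative to RMS level, in dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

def brickwall_limit(x, ceiling=0.98, release_ms=50.0, sr=44100):
    """Very crude peak limiter: instant attack, exponential release.
    Keeps the output below 'ceiling' while leaving the average level
    largely intact, which is exactly what reduces crest factor."""
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gain = 1.0
    y = np.empty_like(x)
    for n, s in enumerate(x):
        target = min(1.0, ceiling / max(abs(s), 1e-12))
        # Clamp down instantly on a peak; recover gradually afterwards
        gain = target if target < gain else coeff * gain + (1 - coeff) * target
        y[n] = s * gain
    return y
```

Driving the input harder into the limiter (e.g. brickwall_limit(x * 2.0)) trades crest factor for loudness: the peaks stay at the ceiling while the average level rises.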

For a good mix of a good composition, the mastering engineer shouldn’t have to apply too much limiting. It certainly shouldn’t be audible.

We start to push the boundaries for audio that has a high crest factor or when the executive producer wants the final audio to be louder than the level normally accepted for the style of music.

For these types of situations, regular mastering limiters can be inadequate. While they’re usually designed to sound as invisible as possible, extreme loudness requires processing that is audible. In these situations, saturation – or even clipping – will be necessary. This often creates a harsh sound as transients are crushed (distorted). Some digital limiters can combine or blend clipping with limiting, to provide greater gain reduction than pure limiters with less harshness than pure clippers.
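
One way such a combination might work – and this is an assumption about the general idea, not a description of any particular limiter – is to clip only the tallest transient peaks first, so the limiter stage has less work to do:

```python
import numpy as np

def clip_then_limit(x, clip_ceiling=1.1, limit_ceiling=0.98, sr=44100):
    """Hypothetical clip-plus-limit stage. Hard clipping absorbs the
    sharpest transient peaks (at the cost of some distortion); the
    limiter (brickwall_limit from the earlier sketch) handles the rest."""
    clipped = np.clip(x, -clip_ceiling, clip_ceiling)
    return brickwall_limit(clipped, ceiling=limit_ceiling, sr=sr)
```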

Because the mixdown contains all the sounds of the mix as a single stereo audio feed, any changes to the audio affect all the sounds that are playing at that time. For example, crushing a spiky snare drum in mastering will also crush all the other sounds playing at the same time – whether they need it or not. This is why this kind of processing in mastering should be a last resort – it’s much better to address these kinds of problems earlier on: in the mix or during composition.

In some situations, multiband limiting is appropriate. This is a crude attempt to contain the audible effects of extreme limiting to a subset of the mix. Using multiband limiting, a spiky snare that requires more limiting than usual won’t result in the bass being simultaneously heavily limited. This approach can sometimes be necessary for addressing problems that would have otherwise been best fixed in the mix.
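
As a rough sketch of the idea – two bands only, with an arbitrary crossover frequency; real multiband processors use phase-matched crossover filters rather than the plain Butterworth pair used here:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def multiband_limit(x, sr=44100, crossover_hz=200.0):
    """Two-band limiting sketch: split at a crossover, limit each band
    separately (brickwall_limit from the earlier sketch), then sum."""
    lo_sos = butter(4, crossover_hz, btype="lowpass", fs=sr, output="sos")
    hi_sos = butter(4, crossover_hz, btype="highpass", fs=sr, output="sos")
    low = sosfilt(lo_sos, x)
    high = sosfilt(hi_sos, x)
    # A spiky snare in the high band no longer drags the bass down with it
    return brickwall_limit(low, sr=sr) + brickwall_limit(high, sr=sr)
```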

-Kim.

What’s your rush of inspiration? [Video]

by Kim Lajoie on May 2, 2014

So, as you probably already know, I’m doing a series of videos with some of the artists I work with. For this video, I asked them about their rush of inspiration. We all get that rush somehow, somewhere. Sometimes it’s in the studio as a mix finally comes together. Sometimes it’s on stage and your audience is feeling what you’re feeling. Sometimes it’s when a song starts to take form.

Anyway, here’s the video:

-Kim.

Acoustic treatment – soundproofing vs absorption

by Kim Lajoie on April 28, 2014

To some people it’s obvious. To many others, it’s a bit more hazy.

Acoustic treatment is not the same as soundproofing.

Not even a little bit. Yet, often I see the terms being used interchangeably. Or one term used when the other’s meaning is intended.

Acoustic treatment is about controlling how sounds behave inside the room. How and where they reflect, which frequencies are absorbed and how effectively, and (if you’re lucky) how the dimensions of the room affect its resonant behaviour. Acoustic treatment is about whether a room sounds lively, echoey, dead, boxy, etc.

Acoustic treatment usually involves controlling the quality of the surfaces – whether they’re hard (reflective) or soft (absorbent) and whether they’re flat (echoey) or curved (diffuse). If you’re lucky, you get to influence the size and shape of the room to control its resonant behaviour.

Soundproofing, on the other hand, is about how much sound gets in or out of the room. It’s about reducing the level of cars or birds or neighbours in your recordings. It’s also about reducing the degree to which your neighbours can hear you.

Soundproofing usually involves making sure all the walls are of a thick and solid construction (e.g. brick or concrete). It also involves sealing all the air gaps where sound can travel in or out of the room.

-Kim.

Expressing joy in music

by Kim Lajoie on April 21, 2014

‘Joy’ in music can refer to feelings of love or hope. This group of emotions is generally characterised by positive, uplifting feelings. In order to convey these positive feelings, focus on stable musical material, with a tonality that is predominantly major and consonant. High energy is often useful too, but not always necessary.

The stability will provide a sense of comfort and dependability for the listener. A consonant tonality performs a similar role. Both the stability and consonance will allow the major tonality to come through clearly. The energy level will depend on the overall contour of the song, but a high energy level can also assist in drawing the listener’s attention. A high energy level will also indicate to the listener that a particular section of music is particularly important (which also helps make it more memorable).

-Kim.

Video: Performance vs cleanliness

by Kim Lajoie on April 14, 2014

Well, this was an interesting challenge. Hand-held SM57 for vocals. Trying not to make it sound like trash. There’s a lot of suck at around 7-10kHz. Took it down with EQ and added back some air on top. Used a de-esser to bring the dynamics back into check. Couldn’t do much about the plosives though, guess that’s what happens when you don’t use any foam or anything. The 57 is a pretty trashy mic (if I’m being generous I’ll say it has ‘bite’). But it can be made to work.

Anyway, what’s interesting to me is the bigger story. My experience of making it work reminded me a lot of when I had much cheaper gear (and a lot less of it). In other words, cheaper (or more limited) gear can still get you a result that doesn’t suck, but you’ll work harder for it.

I’ve got much nicer mics, but for this recording it was important for the vocalist to hold the microphone in her hand to deliver a compelling performance (for both audio and video). And I’ll take a compelling performance over a cleaner sound every time.

-Kim.

Using reverb in the mix

by Kim Lajoie on April 7, 2014

Reverb is a tool that’s easily recognised and often overused. Reverb is one of the best tools for enhancing the sense of space and depth in a mix. It works by adding a wash of sound – called the tail – directly after the original sound. This tail usually simulates the kind of sound heard in a large hall. There are many different kinds of reverb – ranging from simulations of small and large physical spaces, to electromechanical reverbs (such as springs and plates), to fantasy reverbs (such as gated reverbs and reverse reverbs).

Reverb is most commonly used on a ‘send’. This is a special kind of mixer channel. Instead of receiving its input audio from the multitrack recording, it receives its signal from the other mixer channels. The amount (level) from each channel is controlled by the ‘send amount’ for each channel. This is a good way to use reverb because it allows for one reverb processor to add its reverberation tail to many channels (sends often aren’t appropriate for channel EQ or compression).
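
Here’s a minimal sketch of the send-bus idea in Python/NumPy, using a simple convolution with an impulse response to stand in for the reverb processor. All names (tracks, send_amounts, ir) are hypothetical:

```python
import numpy as np
from scipy.signal import fftconvolve

def mix_with_reverb_send(tracks, send_amounts, ir):
    """One reverb serves every channel. 'tracks' is a list of
    equal-length mono arrays, 'send_amounts' the per-channel send
    levels (0.0 to 1.0), 'ir' a reverb impulse response."""
    dry = np.sum(tracks, axis=0)                   # the mix itself
    send_bus = sum(a * t for a, t in zip(send_amounts, tracks))
    wet = fftconvolve(send_bus, ir)                # shared reverb tail
    out = np.concatenate([dry, np.zeros(len(wet) - len(dry))])
    return out + wet
```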

If you’re getting started with reverb, start with a simple hall reverb. Set it up on a send bus, and choose a basic reverb preset (usually the default start-up preset will be a good way to begin). Then send a little bit from each channel to the reverb. A good rule of thumb is to add just enough reverb that you can hear it. Background sounds will normally need more reverb than foreground sounds, and sustained sounds will usually need more reverb than percussive sounds.

Mute the reverb channel and compare the mix with and without reverb. It should sound like the same mix, with the reverb adding subtle space and depth. If the reverb is overpowering, simply reduce the send levels of the more prominent instruments.

If you want to customise the sound of the reverb, you can tailor it to the sound of the mix you’re working on. Each reverb is different, but there are often some common controls:

  • Length (Time) – This is the most obvious control. It allows you to change the length of the reverb tail. Longer reverbs work better for music that’s slow, sparse or abstract. Shorter reverbs are the opposite – they work better for music that is fast, dense or acoustic. Too short and the reverb won’t have much effect. Too long and it’ll make the mix messy and indistinct.
  • Size – Size often works with length. While length adjusts how long the reverb tail is heard, size changes the apparent depth of the reverb. It works similarly to the size of a physical space – a small room will sound tight and intimate and a larger room or hall will sound deep and spacious. Like length, the right setting will depend on the music. Too small and the reverb won’t have much effect, too large and it’ll sound indistinct.
  • High frequency (HF) damping – This affects the way the high frequencies are processed by the reverb. HF damping reduces the high frequencies being reverberated. Low levels of HF damping will make the reverb sound very ‘live’ – like an empty hall with a lot of hard surfaces. High levels of HF damping will make the reverb sound warmer. Too little HF damping will make the reverb sound airy and obvious. Too much HF damping will make the reverb sound dead or ‘damp’. As with the other controls, the best setting will often be somewhere in the middle, depending on the sound of the mix.
  • Pre-delay – This control inserts a delay before the reverb, so the tail starts later after the original sound (see the sketch below). It can be used to increase the apparent size of the reverb. Because pre-delay separates the reverb from the original sound, it can also add clarity to a particularly reverberant mix. This is most useful for vocal-heavy mixes because it allows the vocal to be quite reverberant without reducing its intelligibility.
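
Pre-delay is simple enough to sketch directly – it amounts to prepending silence to a reverb impulse response (hypothetical names again, usable with the send-bus sketch above; 30ms is an arbitrary illustrative value):

```python
import numpy as np

def apply_predelay(ir, sr, predelay_ms=30.0):
    """Prepend silence to a reverb impulse response so the tail
    starts later after the original sound."""
    gap = np.zeros(int(sr * predelay_ms / 1000.0))
    return np.concatenate([gap, ir])
```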

When adjusting reverb parameters, it’s often helpful to solo a single sound. Usually the lead vocal or a sparse drum/percussion part will let you hear the reverb most clearly.

-Kim.

How To Know If You’re Doing A Good Job Mastering

by Kim Lajoie on March 31, 2014

Mastering is often seen as a dark and mysterious art. This is particularly true among junior producers and engineers who want to learn how to do it themselves. There’s a lot of different advice floating around these internets, some of it conflicting. It can be difficult to know if you’re taking the right approach. It can be difficult to know how you can improve.

Short of hiring a teacher or mentor, the best thing to do is be clear about what you’re trying to achieve. And that means understanding the purpose of mastering.

I’ve written quite a lot about mastering here on this blog. Put simply, mastering is the process that takes a stereo mixdown that sounds great in the studio and turns it into a stereo audio file that’s appropriate for distribution.

So the question is: how do you know if a stereo audio file is appropriate for distribution?

I approach this in two parts: characteristics of the audio and characteristics of the format.

For the audio to be appropriate for distribution, the two primary factors to consider are tone and level. Fortunately it’s fairly easy to know what to aim for – simply listen to other commercial recordings (in your acoustically treated, calibrated monitoring environment). To adjust your mixdown so that the audio is more appropriate for distribution, your principal tools will be a good equaliser for adjusting tone and a good limiter for controlling crest factor (which gives you freedom in adjusting level).
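
If you want a rough numeric sanity check alongside your ears, a sketch like this compares average levels against a reference recording. It’s a crude stand-in for the listening comparison described above, not a substitute for it, and the names are hypothetical:

```python
import numpy as np

def match_rms(master, reference):
    """Scale 'master' so its RMS level matches 'reference'. RMS is
    only a crude loudness proxy - trust your monitoring over numbers."""
    gain = np.sqrt(np.mean(reference ** 2) / np.mean(master ** 2))
    return master * gain
```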

For the format to be appropriate for distribution, you need to know how the release will be distributed. For CD duplication, you’ll probably need to author a master disc. For replication, you might need a DDP image. For online distribution, a linear CD-resolution audio file might be sufficient. Or a higher-than-CD-resolution file might be more appropriate. To create these formats you’ll need appropriate authoring tools. Professional CD authoring software is probably necessary if you want to master for CD. For online distribution, a render from your audio software (at the correct resolution and format) might be sufficient. Apple’s mastering tools for their ‘Mastered For iTunes’ program might also be relevant to your interests.

-Kim.

Using EQ for a louder mix

by Kim Lajoie on March 24, 2014

It is particularly in adjusting the tone and dynamics of each sound that the mix engineer controls the loudness of the mix. As you already know, sounds with a lot of upper midrange energy and with relatively flat dynamics have the most loudness. But unlike the composer’s freedom of choosing which notes actually make up the piece of music, the mix engineer’s tools can only modify the sound of the notes that have already been recorded. Fortunately, those tools are varied and powerful.

The most powerful sound-shaping tool available to mix engineers is EQ. This tool alone can make any sound bright or dull, strident or subdued, thin or heavy. And with such a powerful tool comes great responsibility. A mix engineer, like a composer, could quite easily make a loud mix by making every sound brash and strident. Of course, this wouldn’t be very pleasant to listen to.

A more appropriate use of EQ is to make sure each sound has a distinctive character and role in the mix. EQ can make for a louder mix by making sure that each area of the mix – the bottom, the low mids, the upper mids and the top – is clear and focused. Think about which sounds will dominate in those areas and make sure other sounds aren’t competing. This will make it much easier to make a mix loud. On the other hand, a mix that is muddy and indistinct will fight every step of the way to loudness.

You’ll probably also find that the higher the frequencies, the more room there is in the mix. The upper mids in a mix can often accommodate a few distinct prominent sounds. The very top of a mix often needs almost no carving at all. By contrast, the bass region can often only fit one or two different sounds, and the subs can barely fit one. This is why common mixing advice includes high pass filters and lower-mid cuts to increase clarity and space in the lower ranges.
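
As a minimal illustration of that advice – a high pass filter on a non-bass sound to keep it out of the bass region (the corner frequency is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def clear_low_end(x, sr=44100, highpass_hz=100.0):
    """High-pass a non-bass sound so it stops competing with the
    bass and kick for space in the lower ranges."""
    sos = butter(2, highpass_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, x)
```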

-Kim.