Kim Lajoie's blog

Tonality in composition

by Kim Lajoie on October 20, 2014

Tonality refers to the harmonic language used in the music. This is about the way notes are chosen and how they’re combined. Tonality is a complex topic, but a good way to approach it is to look at two ways to express tonality – major/minor and consonant/dissonant.

(The following explanations are deliberately simplistic – intended only as a quick introduction, not a comprehensive discussion of music theory.)

Major tonality is most strongly expressed as the major-third interval from the tonic. For example, if your song is in the key of C, the major-third from C is the note E-natural (a white note, with no sharps or flats). So, using a lot of E-natural notes will give your song a strong major feel. If your song is in a different key, the note relationships remain the same. So, if your song’s tonic is F#, the major-third will be the note A#. While the major third is the strongest way to express a major tonality, the major-sixth and major-seventh from the tonic also contribute to a major tonality.

Similarly, a minor tonality is most strongly expressed as the minor-third interval from the tonic. For example, if your song is in the key of G minor, the minor-third from G is B-flat. So, using a lot of B-flats in your song will give you a strong minor feel.
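To make the interval arithmetic concrete, here’s a small Python sketch. It spells every note with sharps for simplicity (so B-flat shows up as its enharmonic equivalent A#), and the helper function is purely illustrative – not part of any music theory library:

```python
# Semitone distances from the tonic for the intervals mentioned above.
INTERVALS = {
    "minor third": 3,
    "major third": 4,
    "major sixth": 9,
    "major seventh": 11,
}

# Sharp-only spelling of the twelve notes.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_at_interval(tonic, interval_name):
    """Return the note a given interval above the tonic."""
    start = NOTES.index(tonic)
    return NOTES[(start + INTERVALS[interval_name]) % 12]

print(note_at_interval("C", "major third"))   # E  -> strong major feel in C
print(note_at_interval("F#", "major third"))  # A# -> major feel in F#
print(note_at_interval("G", "minor third"))   # A# (enharmonically B-flat) -> minor feel in G minor
```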

Exclusive use of major or minor tonalities can create too stark an effect – like using too much of a single colour. Often, it makes sense to combine major and minor tonalities in varying degrees throughout a song. A more balanced sound can be achieved by using some major chords and some minor chords – even having some song sections predominantly major and other song sections predominantly minor.

Consonant tonality sounds like the harmonic and melodic content is clear and unambiguous. An extreme form of consonance is a musical part where all pitched instruments are playing the same note. Octaves, fifths, fourths and thirds are all quite consonant.

Unlike consonance, dissonant tonality usually sounds crowded and ambiguous. This is usually caused by harmonic combinations that are complex and even clashing. Minor seconds, tritones, and major sevenths can often be combined to create dissonant tonalities.

Like major and minor tonalities, exclusive use of consonance or dissonance can sound too stark. Having some sections that are more consonant and other sections more dissonant is a great way to give your song a subtle sense of ebb and flow.

-Kim.

A basic primer on compression

by Kim Lajoie on October 6, 2014

Compression is a very important tool for a mix engineer. Unlike volume and EQ, however, compression can sometimes be difficult to hear. Where EQ adjusts the tone of the sound, compression adjusts the dynamics.

The simplest way to understand compression is as a process that automatically turns the volume down when the input sound gets too loud (and then turns it back up when the input sound gets quieter again). Basically, compression makes loud sounds quieter.

Typically, compressors will have four main controls:

  • Threshold – This is the sound level which is considered ‘too loud’. When the input sound gets louder than this, it is turned down. When the input sound later drops below this level, it’s turned back up. The lower the threshold, the more compression will occur.
  • Ratio – This is the amount by which the sound is turned down. It’s usually expressed as a ratio (e.g. 2:1) but you don’t need to understand the maths in order to use this. Quite simply, lower ratios (such as 2:1) mean the volume isn’t turned down much and higher ratios (such as 20:1) mean the volume is turned down a lot.
  • Attack – This is the speed at which the volume is turned down. Normally this should be pretty fast (low numbers). If the attack is too fast, however, sometimes the sound can become too soft or even distorted. A slower attack can make the compression more gentle, but if the attack is too slow the compression will be ineffective.
  • Release – This is the speed at which the volume is turned back up when the input sound level drops back below the threshold. Lower values (fast release) will make the compression more audible. High values (slow release) will make the compression smoother. Very high values will make the compression almost inaudible.
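To make these four controls concrete, here’s a minimal sketch of a compressor’s gain computation in Python. This isn’t how any particular compressor or plugin is implemented – the simple envelope follower and the default parameter values are just illustrative assumptions:

```python
import numpy as np

def compress(signal, sample_rate, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    """Very simplified compressor: turns the signal down when its level
    rises above the threshold, then lets it back up as the level falls."""
    # Smoothing coefficients derived from the attack and release times.
    attack_coeff = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coeff = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))

    envelope = 0.0
    out = np.zeros_like(signal)
    for i, x in enumerate(signal):
        level = abs(x)
        # Envelope follower: rises at the attack speed, falls at the release speed.
        coeff = attack_coeff if level > envelope else release_coeff
        envelope = coeff * envelope + (1.0 - coeff) * level

        level_db = 20.0 * np.log10(max(envelope, 1e-9))
        # Above the threshold, the amount of gain reduction depends on the ratio:
        # with a 2:1 ratio, a signal 10 dB over the threshold is pulled down by 5 dB.
        over_db = max(level_db - threshold_db, 0.0)
        gain_reduction_db = over_db * (1.0 - 1.0 / ratio)
        out[i] = x * 10.0 ** (-gain_reduction_db / 20.0)
    return out
```

Lowering the threshold or raising the ratio in this sketch increases the gain reduction, and the attack/release coefficients control how quickly that reduction follows the input – the same behaviour the four controls above describe.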

Compressors can be very versatile tools, and some have a distinctive sound (behaviour) of their own. As a starting point, try these approaches:

  • First, not all sounds need compression. Try compression, but don’t be afraid to go without if it’s not actually improving the sound.
  • For smooth compression on melodic instruments (such as vocals or other acoustic instruments), start with a low ratio and a threshold set so that the compressor is active most of the time. Set the attack as fast as you can without distortion and set the release to a medium speed. To make the compression stronger and tighter, raise the ratio. To make the compression smoother and gentler, increase the release time.
  • For tight control, use a high ratio and a low threshold (similar to above – so that the compressor is active most of the time). Use the fastest attack and release times you can get away with (without getting distortion or other strange sounds).
  • For punchy drums, use a longer attack time and medium release time. Make sure the threshold is set high enough that the drum hits well above the threshold but quickly drops below it. Higher ratios produce more extreme effects. Longer attack times will add more of the initial ‘thwack’ (the transient). The release time will have to be tuned by ear until it works with the length of the drum decay.

-Kim.

Considerations when choosing sounds for loudness

by Kim Lajoie on September 22, 2014

At its simplest, composition is the process of choosing sounds and arranging them in time. This process might vary depending on what kind of music you’re making, what instruments you’re using, how many people are involved, etc… but the fundamentals of composition are the same for everyone.

When choosing sounds for loudness, you have to understand what kinds of sounds and instruments sound loud. When arranging sounds for loudness, you’ll have to understand how to combine sounds in ways that maximise the desired effect. As discussed earlier, there are two fundamental attributes of sound relevant to the way we perceive loudness – length and frequency.

For sounds of equal recorded volume level, longer sounds are generally perceived as louder than short sounds. The effect isn’t linear, though – it only applies to very short sounds (less than about 500ms). Beyond about 500ms, additional length doesn’t make a sound seem any louder. You know this yourself – if you have a snare drum and an organ in your song and they’re both hitting the same peak level on the meters, the organ will sound much louder than the snare drum. That’s because the snare drum is very short and the organ notes are much longer. The effect only works for short sounds though – an organ note that lasts four beats will sound just as loud as an organ note that lasts eight beats.

It’s a similar story for frequency. Again, you know this from experience. If you have an instrument where all notes hit the same levels on the meters (such as an organ or a synth with an open filter), you’ll know that in the mid to upper-mid range (e.g. around middle C and above), these notes sound louder than notes in the bass (e.g. a couple of octaves below middle C).

-Kim.

Rate of change

by Kim Lajoie on September 8, 2014

Rate of change can be understood along the continuum between sudden change and gradual change. Rate of change in music refers to the way the music moves from one section to another. More broadly, it refers to the breadth and depth of the changes in a piece of music.

Sudden change is what happens when there is a large amount of change in a short space of time. This change can be across any musical parameters: Pitch, harmony, rhythm, density, texture, volume, etc. The bigger the change and the shorter the transition, the more sudden the effect is.

Gradual change is the opposite. This is what happens when there is a small change or a long transition. Any sudden change can be made softer by either reducing the difference between the ‘before’ and ‘after’ sections or by making the transition time longer.

-Kim.

Click tracks

by Kim Lajoie on September 1, 2014

The debate about click tracks has always raised passionate responses. Are they killing music? Do only really overproduced artists use them? Or are they just like vegetables – really useful, healthy and important but totally bland?

If you’re new to this, a click track is an electronic metronome that helps artists keep time while recording their instruments. The click track lines up with the timing grid in your recording software, so you can see very clearly whether the musicians are playing in time.

Recording to a click track can be very helpful if you need to do a lot of post-production editing and overdubbing. If you need to adjust the timing, a click track helps you locate where the notes aren’t aligning with the grid. Click tracks can also rescue you from having to spend more time and money re-recording tracks if you make a lot of mistakes.

Having said that, recording to a click track can seriously kill the mood. Nothing in life, art or music is perfect, so small deviations in tempo shouldn’t really cause you too much grief. If you have good musicians who practice, listen to each other and generally make amazing music together, you won’t be relying on a click track to fix up timing mistakes later on.

-Kim.

Put your sounds into an upside-down triangle.

by Kim Lajoie on August 25, 2014

Think of all the sounds in a mix being contained in a triangle with one point facing the listener.

I usually draw it as an upside-down triangle, with the listener at the bottom.

The louder (closer to the listener) a sound can be, the fewer sounds can fit alongside it. The quieter (further away from the listener) a sound can be, the more sounds can fit alongside it. If all sounds must be equally loud, then they all end up far from the listener.

If one sound is close to the listener and all the other sounds are in the background, the mix can seem stark. If sounds can be spread around the triangle, with a few sounds close by and most sounds further back, the listener will experience an engaging and deep sound stage.

-Kim.

Why I don’t worry about bleed

by Kim Lajoie on August 18, 2014

It’s a fact of recording studio life – bleed happens.

‘Bleed’ is the residual sound picked up by microphones placed around the studio to capture multiple instruments. For example, it happens when a microphone placed next to an acoustic guitar also records sounds from vocalists and other instruments being played close by. Many producers and engineers believe bleed is something to be minimised and removed as much as possible. With pesky bleed in the way, it can be much harder to perform magic tricks like overdubbing and editing later on. Common strategies to reduce bleed include putting up sound barriers between instruments, positioning microphones very close to the instrument or simply recording instrument tracks one at a time.

These strategies can have some unintended consequences though. Putting up sound barriers can kill the vibe of a band playing together. Positioning microphones too close to instruments can also produce an exaggerated and unnatural sound on playback. And recording your instruments one at a time? Sure, you’ll have no problems with bleed, but if you’re recording a bunch of great musicians you might be killing the vibe unnecessarily. They rehearse and perform together – what do you think will happen when you make them play their parts one at a time? If you’re recording musicians who aren’t that great, you probably need to record them separately anyway.

Don’t worry about bleed. Good musicians who know what they’re doing don’t need magic tricks to fix up mistakes. Just play good music and the rest will take care of itself.

-Kim.

How loudness is measured

by Kim Lajoie on August 11, 2014

The meters on your DAW channels or your mic preamps aren’t telling you the whole story.

When sounds are recorded, the microphone captures the continuous vibrations in the air and creates a continuously varying electrical signal that mimics those vibrations. Louder sounds have wider/stronger vibrations. The level meters on your gear usually show you the ‘peak’ level – the strength of the electrical signal created by the microphone, or the strength of the electrical signal that will be turned into air vibrations by your speakers. (Digital meters work much the same way – they just measure the digital signal, which is a numerical representation of an electrical signal.)

The trouble is, the peak level doesn’t exactly represent how loud we perceive the sound.

One of the (several) ways in which our perception differs from ‘reality’ is that we don’t hear extremely short sounds as loudly as longer steady sounds. Some level meters compensate for that (to show us a more accurate representation of how we hear) by slowing down the meter. By making the meter more sluggish, it doesn’t react as strongly to quick changes (short, sharp sounds) but still reacts strongly to longer steady sounds. This is often referred to as the ‘average’ or ‘RMS’ level (RMS stands for ‘Root Mean Square’ – a mathematical way of averaging the signal level over time).

The crest factor of a sound is the difference between the peak level and the RMS level. Sounds with a high crest factor typically have a lot of short sharp peaks (e.g. a drum kit). For these sounds, a peak meter would show a high level but an RMS meter would show a much lower level. Sounds with a low crest factor are the opposite – they have fewer or lower peaks, or no peaks at all (e.g. an organ). For these sounds, a peak meter would show a similar level to an RMS meter.
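As a rough sketch of the difference, here’s how peak level, RMS level and crest factor could be computed for a block of samples in Python. This is only an illustration of the maths, not how any particular meter is calibrated:

```python
import numpy as np

def peak_rms_crest(samples):
    """Return (peak dB, RMS dB, crest factor dB) for samples in the range -1..1."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    peak_db = 20.0 * np.log10(max(peak, 1e-9))
    rms_db = 20.0 * np.log10(max(rms, 1e-9))
    return peak_db, rms_db, peak_db - rms_db  # crest factor = peak minus RMS

# A short, sharp spike (drum-like) has a high crest factor;
# a steady tone (organ-like) has a low one.
sr = 44100
t = np.arange(sr) / sr
click = np.zeros(sr); click[:50] = 1.0          # brief transient
tone = 0.5 * np.sin(2 * np.pi * 440 * t)        # steady sine tone
print(peak_rms_crest(click))  # large gap between peak and RMS
print(peak_rms_crest(tone))   # peak and RMS only about 3 dB apart
```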

-Kim.

More about Mid/Side EQ

by Kim Lajoie on July 14, 2014

Mid/side processing is a different way of processing two audio channels. Most processors modify a stereo sound by applying the same modification to the right and left channel simultaneously. Some processors can have different settings for the right and left channels. Mid/side processors, however, work on the ‘mid’ and ‘side’ channel instead of left and right.

Two-channel stereo (left/right) audio can be transformed into two-channel mid/side (and back to stereo) without damaging the audio. It’s a completely transparent (and reversible) process.

The mid channel contains all the audio that is common between the left and right channels. This includes mono sounds that are panned centre and the ‘central’ sound in stereo sounds. The side channel contains all the audio that is different between the left and right channels. This consists mainly of ambience (either natural room sound or artificial reverb) and any sounds that are hard-panned.
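The transform itself is simple arithmetic. Here’s a minimal sketch in Python, using the common convention of halving the sum and difference so the round trip comes back at unity gain (some tools scale it differently):

```python
import numpy as np

def lr_to_ms(left, right):
    """Encode left/right into mid/side."""
    mid = (left + right) / 2.0    # what the two channels have in common
    side = (left - right) / 2.0   # what differs between them
    return mid, side

def ms_to_lr(mid, side):
    """Decode mid/side back to left/right -- exactly reverses lr_to_ms."""
    return mid + side, mid - side

# Round-trip check: the process is transparent and reversible.
left = np.array([0.2, 0.5, -0.3])
right = np.array([0.1, 0.4, 0.3])
mid, side = lr_to_ms(left, right)
l2, r2 = ms_to_lr(mid, side)
assert np.allclose(left, l2) and np.allclose(right, r2)
```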

By adjusting the levels of the mid or side channels independently, the stereo width of the audio can be modified in a clean and natural way.

Interesting things happen when you start applying EQ adjustments to the mid and side channels independently. This allows the stereo width to be widened or narrowed (or even completely collapsed to mono) in different parts of the frequency spectrum. This is particularly useful for complex stereo audio, such as groups, the mix bus or mastering.

For some practical tips, see this post.

-Kim.

What is sidechain compression?

by Kim Lajoie on June 30, 2014

Sidechain compression is a special variant of regular channel compression. A normal compressor adjusts the output level of the audio based on the input level. Sidechain compression, however, adjusts the output level of the audio based on the level of a different audio channel.

This means that the volume of a channel reacts to the volume of another channel. The audio that the compressor is reacting to is often referred to as the ‘key’ or the ‘sidechain’.

There are two common uses for this:

  • Kick drum ducking. This technique uses the kick drum for the sidechain signal. It’s set up so that the compressed channel (usually the bass) is briefly turned down when the kick drum is sounding. It was originally used to make the kick drum bigger – by reducing the level of some other tracks (usually the bassline), the kick punches through the mix with relatively more presence and power. It’s most commonly used to compress the bassline (either bass synth or bass guitar), but is also used to compress synth pads, vocals or even other drum and percussion tracks. It’s become a recognisable and characteristic sound in a lot of electronic dance music.
  • Vocal ducking. This technique uses the main vocal channel as the sidechain audio. It’s set up so that the compressed channel is turned down when the main vocal is sounding. It was originally used in radio broadcast so that the music would be automatically turned down when the announcer or DJ started speaking. It can be useful when mixing a song that contains a prominent foreground part (such as a guitar or vocal harmonies) that should be pushed to the background when the lead vocals come in. Ideally, however, this situation is best avoided by careful composition and arrangement.
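As a rough illustration of the kick-drum ducking described above, here’s a minimal Python sketch that follows the level of the kick (the ‘key’) and turns the bass down whenever the kick is sounding. Real sidechain compressors are more sophisticated, and the parameter values here are just placeholders:

```python
import numpy as np

def duck(bass, kick, sample_rate, threshold_db=-20.0, ratio=8.0, release_ms=120.0):
    """Turn the bass down whenever the kick (the sidechain 'key') is above the threshold."""
    release_coeff = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    envelope = 0.0
    out = np.zeros_like(bass)
    for i in range(len(bass)):
        # Follow the kick's level: jump up instantly, fall back at the release speed.
        envelope = max(abs(kick[i]), release_coeff * envelope)
        level_db = 20.0 * np.log10(max(envelope, 1e-9))
        # How far the kick is above the threshold sets how much the bass is turned down.
        over_db = max(level_db - threshold_db, 0.0)
        gain_reduction_db = over_db * (1.0 - 1.0 / ratio)
        out[i] = bass[i] * 10.0 ** (-gain_reduction_db / 20.0)
    return out
```

With a short release this produces the characteristic ‘pumping’ bassline; lengthening the release makes the ducking smoother and less obvious.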

In day-to-day mixing, there’s usually not much need to use sidechain compression unless you’re aiming to create a certain effect such as a pumping bassline for a dance song.

-Kim.