Kim Lajoie's blog

Making good progress on the electronica/metal and the book

by Kim Lajoie on January 11, 2015


So, I’ve got a bunch of projects going on. And lately I’ve been making good progress on two of them.

The electronic/metal project is going well. It’s looking like it’ll be an EP. I’ve worked out a workflow where I compose and record with Maschine and then edit and mix in Cubase (I started using the new version this year).

I’ve had a bit of a love/hate relationship with Maschine over the years. I love the hardware and its integration with the software. The more I use it, the more I appreciate how well designed it is. The sounds are superb – the included samples, the drum synths and the integration with Massive, Reaktor, etc. And it’s really fast and fun to start working on a track. But each time I start using it, I also get frustrated with a few things. Integration with a DAW feels clumsy because there are two timelines going on, or I have to insert MIDI clips in Cubase to trigger Maschine scenes. And while the pattern-based approach is great for starting a track, it’s got its limitations. It’s fiddly to do one-off edits or variations (such as drum fills or buildups). It’s even more fiddly to add or remove individual bars from sections. It really resists melodies that start before the beginning of a section. And while recording live performances works reasonably well, editing them is… Let’s just say you don’t want to do that.

So as much as I’d love to use Maschine for a lot of the vocal electronic/pop work I do with other artists, it just doesn’t fit with my workflow. I usually do preproduction with the artist/vocalist on a keyboard, acoustic piano or guitar and record a demo linearly to a click. Everything else gets built up around that demo recording. Sometimes that includes adding or removing bars or sections. But very often the very first recording for a project is the full length of the song. A lot of the composition is done before getting anywhere near a computer.

This time, however, I’m working differently. I’m doing the composition using Maschine as the instrument (in stand-alone mode, instead of a keyboard or guitar). I’m not working with a vocalist. I can choose to restrict my melodies to exclude anacruses. I’m also recording guitar into Maschine. No, I can’t edit my performances, but it’s my own project so I have the luxury of taking extra time to practice and record as many takes as I need. And I can come up with a well-developed instrumentation and a pretty good skeleton for the song structure. I then render the multitracks and bring them into Cubase, where I can do detailed edits and mix.

I’ve also entered the final stages of writing my book. It’s written primarily for artists – musicians who write, record and perform their own music. It’s about making art that matters and connecting with a supportive audience. The book itself is based on several years of talking to artists every day and helping them understand their place in society. The conversations I’ve been having with artists have steadily become more complex and nuanced, and so I started to realise that what I have to say requires a medium with more scope and consideration than a blog post, email or verbal conversation. The book will be fairly substantial – approximately 15,000 words. And I’m aiming to have it released at the start of February. It’s a pretty big undertaking and I’m looking forward to getting it out.


This year has led me to an interesting place

by Kim Lajoie on December 31, 2014

Well, it’s that time of year again.

Most of you reading this will probably have noticed that most of the blog posts this year have been excerpts from my guides (see the menu at top-left of this page). Truth is, I’ve been exceptionally busy with other music work and haven’t had the time or energy to write much new stuff. In no particular order:

  • My production and promotion work has been growing and maturing. I’ve had the good fortune to work with some really impressive artists on some exciting projects. Looking forward to seeing them released over the next few months.
  • I’ve been photographing and reviewing local gigs. This has been fun. Photography is so much like music it’s uncanny. There’s the gear lust, the different genres to specialise in, the importance of skill and technique, the combination of technical and emotional expertise.
  • I’ve even started up a magazine in the last couple of months. This has been a real trip. I’m so grateful for my amazing team. It’s been extremely stressful and equally rewarding. It’s also demanded a huge amount of my focus and energy for the last six months.
  • My studio’s now got two pianos. I’ve been buying more hardware for recording and mixing, and I’ve almost completely stopped using third-party plugins for mixing (using only Cubase’s built-in effects when needed).
  • My own music project Bare Toes Into Soil has been live on stage a bunch of times and we’ve released a mixtape of remixes and collaborations. I’m exceptionally pleased with the mixtape – sonically it’s probably the best work I’ve done.

It’s great to be able to look back and be proud of what I’ve put my effort into.

But right now, on New Year’s Eve, I feel that I’m in a strange place. I’ve made my whole life about music, and yet right now I feel the least excited about it that I’ve ever been. My other business projects (and there are more coming) have motivated and invigorated me, but I don’t feel a lot of energy for making my own music. Maybe it’s because I’ve been putting my creative focus into other avenues. Maybe I’ve been under too much stress to be as creative as I’d prefer. Maybe I feel some futility in putting so much effort into making music that few people will hear or care about. Maybe it’s all of the above.

I’m also feeling like I need a break from writing. Or maybe I need to take a break from structured writing. I’ve been working on a new book for artists that’ll be released at the end of January. I’ve spent years writing on this blog. I think I’ve probably said about as much as I care for now about composing, producing, recording, mixing, mastering, etc.

So, what next?

  • I’ve got a bunch of projects about to pop out of the pipeline. That’ll all become clearer over the next month. That’s pretty exciting, and definitely my best work so far.
  • I’m not sure what I’ll be doing on the music front. I’d like to continue Bare Toes Into Soil, but I don’t yet know what pace I’ll take it. I’ve also started a new music project – all distorted drum machines and angry guitars. After so many years of atmospheric electronica, I’m feeling that it might be the right time for me to return to some heavier music.
  • This blog will continue, but I think I’ll make it a bit more personal. It doesn’t have to be a technical resource. It doesn’t have to be robotically published every Monday on the dot. I’ll write a bit more about what’s going on in my world. It might be a bit more opinionated. It might veer a little from production talk. It might not be published on Mondays.

Let’s see what happens.




Amazement and anticipation

by Kim Lajoie on December 29, 2014

Amazement is a departure from the minor or dissonant tonality of aggression. Instead, high energy and instability are used to create a sense of surprise or wonder. This can be difficult to do well – the new material must still feel familiar to the listener. The best way to achieve this is to express ‘amazement’ later in the song, using significant musical material (such as melodies or sounds) that was presented earlier in the song.

Surprise is a more extreme expression of amazement. This is achieved by adding sudden changes to the unstable musical texture – the more sudden, the more surprising. The key is to combine the sudden changes with instability to create musical progressions and structural punctuation that has a high degree of unpredictability.

Anticipation is one of the most difficult emotions to express in music. It is a sense that something is coming – an expectation that something will happen, but an uncertainty of exactly what it will be. The difficulty is in finding the right balance. If the music is too predictable, the anticipation will turn to boredom. If the music is too unpredictable, the anticipation will turn to confusion.

Anticipation is often expressed as a low energy section of music directly following a previous high energy section. For extended periods of anticipation, a gradual increase in energy works well to guide the listener’s expectations. Unstable textures also work well. Instability creates a desire in the listener for the song to resolve to stability – especially after prior stability earlier in the song.


Disgust, fear and aggression

by Kim Lajoie on December 15, 2014

In this context, ‘disgust’ doesn’t mean disgusting music… It’s a certain mood evoked by minor tonality, and slightly more energy and less stability than sadness. For example, a lot of late-90s trip-hop falls into this category.

With more energy and less stability, disgust comes across as having more momentum and direction than sadness or despair. In this way, disgust is more ‘active’-sounding, even though it still has mostly low energy overall.

Fear builds on disgust by focussing more on dissonant tonality. The low energy remains, but the instability is ramped up and becomes a prominent feature of the musical texture. While a stable low energy texture can convey calmness and reflection, an unstable low energy texture conveys uncertainty and unease.

Fear can also be used to convey feelings of apprehension. Apprehension is particularly effective when it hints at a previous section of music that was particularly high-energy or even startling.

Aggression takes the minor (or dissonant) tonality and instability of fear, but adds high energy. The high energy works together with instability to add excitement and action to the negative tonality.

Aggression can also be used to convey feelings of anger or violence. It can also be expressed with more stability, but it is less effective that way. It might be appropriate, however, to use a more stable aggression in a song to contrast with less stable musical textures.


Expressing sadness and serenity in music

by Kim Lajoie on December 1, 2014

Serenity is similar to the ‘joy / love / hope’ group of emotions, except with less emphasis on the happy or uplifting components. Like joy, serenity is best expressed using stable musical material. Unlike joy, however, serenity also comes through best with a low energy level and a gradual rate of change.

The stability provides comfort and dependability, but the low energy level also adds a relaxed element. Rather than being important or demanding, a serene section of music needs to be unobtrusive. An emphasis on gradual change also works here – sudden change will be too jarring (although sometimes it makes sense to have a jarring transition out of a serene section!).

Sadness is similar to serenity in the stability and low energy, but moves towards a predominantly minor tonality. Consonance can also help, but will depend on the overall tonal language of the song. Sadness can also be used to convey a message or feeling of despair. Despair can work with more dissonant tonalities.

The minor tonality sets the general negative mood of the music. In this case, the stability and low energy set the kind of negative mood – rather than being exciting or jarring, there is a calmness and quiet – almost tranquility. Negative tranquility very easily triggers feelings of loss, aftermath or reflection on past misdeeds.


When (and how) to use a gate or expander

by Kim Lajoie on November 17, 2014

Gating and expansion work similarly to compression. While compressors automatically turn the volume down when the input audio rises above the threshold, gates and expanders automatically turn the volume down when the input audio falls below the threshold.

The simplest example of this is a basic noise gate – it mutes the audio when the instrument isn’t playing. This works when the threshold is set just a little higher than the background noise. When the instrument isn’t playing, the background noise is below the threshold so the gate ‘closes’ – it mutes the audio (turns it all the way down). When the instrument is playing, however, the audio level rises above the threshold and the gate ‘opens’ – letting the audio through.

Gates often have fewer controls than compressors. Some have more than others, but almost all have the following:

  • Threshold – This sets the level below which the audio is muted. When the input audio is quieter than this level, the sound will be muted. When the input audio is louder than this level, the sound will pass through.
  • Attack time – This sets the time for the gate to change from closed to open. Usually this should be as fast as possible, but sometimes this can result in a sharp click or unnatural sound when the gate opens. Increasing the attack time results in a softer, smoother sound.
  • Release time – This sets the time for the gate to change from open to closed. Setting this correctly is important for instruments that have a natural decay (such as acoustic guitars or drums). Often the decay can still be heard ‘under’ the background noise, and closing the gate too fast can unnaturally cut off the end of the instrument’s decay. In these cases, the background noise is preventing the threshold from being any lower. Increasing the release time will give the instrument’s decay more time to fully die out before the gate is closed.
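To make these three controls concrete, here’s a minimal noise gate sketched in Python. This isn’t the code from any particular plugin – the function name, the default settings and the one-pole attack/release smoothing are just my own illustration of the idea.

```python
import math

def noise_gate(samples, sample_rate, threshold_db=-40.0,
               attack_ms=1.0, release_ms=100.0):
    """Mute the audio whenever it falls below the threshold.

    A one-pole smoother implements the attack and release times,
    so the gain glides between open (1.0) and closed (0.0)
    instead of jumping instantly.
    """
    threshold = 10 ** (threshold_db / 20.0)          # dB -> linear
    attack_coef = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coef = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))

    gain = 0.0
    out = []
    for x in samples:
        target = 1.0 if abs(x) > threshold else 0.0  # open or closed?
        # Opening uses the (fast) attack time, closing the (slow) release
        coef = attack_coef if target > gain else release_coef
        gain = coef * gain + (1.0 - coef) * target
        out.append(x * gain)
    return out
```

Note how the attack and release times simply become smoothing coefficients: the shorter the time, the more quickly the gain snaps to its open or closed target, and a longer release gives a decaying instrument time to die away before the gate shuts.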

Expanders are gentler versions of gates. Instead of muting the audio, they simply reduce the volume. This often sounds more natural and gentle than a gate because the background noise doesn’t come in and out as dramatically. Expanders usually have an extra control that gates don’t – ratio. This sets the degree by which the volume is reduced when the input audio falls below the threshold. Expanders can be more useful for mixes that need to retain a natural ambience – especially acoustic and folk music.
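The expander’s ratio control can be written as a simple gain rule. Below is one common convention for a ‘downward’ expander, sketched in Python (the function name and the dB-based formulation are my own illustration, not a standard API):

```python
def expander_gain_db(level_db, threshold_db, ratio):
    """Downward expander: below the threshold, the output falls an
    extra (ratio - 1) dB for every 1 dB the input level falls."""
    if level_db >= threshold_db:
        return 0.0                      # above threshold: leave it alone
    return (level_db - threshold_db) * (ratio - 1.0)
```

With a ratio of 2:1 and a threshold of -40 dB, a signal at -50 dB is turned down a further 10 dB, rather than being muted outright as a gate would do.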

As with compression, don’t assume that just because you’ve got a gate or expander you must use it! Unlike compression, I’d recommend not even trying it for most tracks. Only try it if you have a track with noticeable background noise that is distracting in between the instrument playing. This can be more noticeable if the track is being compressed, because a compressor can often turn up the background noise in between the instrument playing. If possible, apply the gate before the compressor.

The necessity for gates and expanders is greatly reduced these days because most recording and mixing equipment produces very little background noise and it’s usually easy to record in a quiet enough location.

One exception to this is high-gain guitar amps. The high gain and distortion greatly increase the level of the background noise – sometimes this noise is almost as loud as the guitar itself. In these cases a gate or expander can be very useful for cleaning up the audio track.


Using compression and saturation to increase loudness

by Kim Lajoie on November 3, 2014

The second-most powerful sound-shaping tool (after EQ) available to mix engineers is compression. This is most commonly used to reduce the dynamic range of a sound. More extreme compression can be used to reduce the crest factor of a sound. Unlike EQ, excessive amounts of compression might not sound unpleasant. Here, it depends on the style of music. A lot of acoustic folk would sound silly with extreme compression. On the other hand, a lot of modern electronic dance music would sound silly without extreme compression. It’s no coincidence that the more important loudness is for a style of music, the more compression is tolerated or even expected.

When using compression to increase loudness, it’s often useful to start with extreme settings and then back off until it sounds natural or acceptable. Usually, this means starting with fast attack and release times, high ratio and low threshold. First increase the release time until distortion is low enough to be acceptable. Then reduce the ratio and/or raise the threshold if you want to retain some of the original dynamics of the sound. There are many different types of compressors, and you might find that even a modest collection provides a wide variety of sounds and colours. The differences between compressors are most apparent at the kinds of extreme settings described above. It’s worthwhile trying different compressors on particularly difficult or sensitive sounds such as kick drums.

While EQ and compression alone are sufficient for many styles of music, sometimes mix engineers need to go further. Saturation can be handy here. While there are many, many different kinds of saturation, they all have one purpose (when used deliberately) – to destroy crest factor. For sounds with a high crest factor, peaks are crushed and made noisier. For sounds with a low crest factor, more steady upper harmonics are generated, which increases the energy in the upper-mids. At extremes, saturation sounds like a kind of mild distortion. Broadly speaking, tape-style saturation is often softer and smoother than tube saturation, which itself is softer and smoother than native digital clipping. Which style of saturation you use on a sound will largely depend on the nature of the sound, the behaviour of the specific tool you’re using, and the creative direction of the mix. As with compressors, each saturation tool is different and it’s often worthwhile trying different tools on difficult or sensitive sounds.
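The ‘destroying crest factor’ idea is easy to demonstrate. Here’s a small Python sketch using a tanh curve, which is just one of many possible saturation shapes (the function names and the drive parameter are my own illustration):

```python
import math

def saturate(samples, drive=4.0):
    """Soft-clip with a tanh curve, normalised so a full-scale
    input still peaks at 1.0. Peaks get squashed hardest."""
    return [math.tanh(drive * x) / math.tanh(drive) for x in samples]

def crest_factor(samples):
    """Peak level divided by RMS level."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return peak / rms
```

Run a sine wave through it and the crest factor (peak divided by RMS) drops from about 1.41 towards 1, because the peaks are squashed harder than the body of the waveform – exactly the effect described above.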


Tonality in composition

by Kim Lajoie on October 20, 2014

Tonality refers to the harmonic language used in the music. This is about the way notes are chosen and how they’re combined. Tonality is a complex topic, but a good way to approach it is to look at two ways to express tonality – major/minor and consonant/dissonant.

(The following explanations are deliberately simplistic – intended only as a quick introduction, not a comprehensive discussion of music theory.)

Major tonality is most strongly expressed as the major-third interval from the tonic. For example, if your song is in the key of C, the major-third from C is the note E-natural (white note, with no sharps or flats). So, using a lot of E-natural notes will give your song a strong major feel. If your song is in a different key, the note relationships remain the same. So, if your song’s tonic is F#, the major-third will be the note A#. While the major third is the strongest way to express a major tonality, the major-sixth and major-seventh from the tonic also contribute to a major tonality.

Similarly, a minor tonality is most strongly expressed as the minor-third interval from the tonic. For example, if your song is in the key of G minor, the minor-third from G is B-flat. So, using a lot of B-flats in your song will give you a strong minor feel.
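Because these intervals are fixed numbers of semitones (four for a major third, three for a minor third), the note relationships can be worked out mechanically. Here’s a small Python sketch; note that it uses sharp spellings only, so B-flat shows up as its enharmonic equivalent A#:

```python
# Sharp spellings only, so B-flat appears as its enharmonic twin A#
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

MAJOR_THIRD = 4   # semitones above the tonic
MINOR_THIRD = 3

def interval_from_tonic(tonic, semitones):
    """Return the note the given number of semitones above the tonic."""
    return NOTE_NAMES[(NOTE_NAMES.index(tonic) + semitones) % 12]
```

So `interval_from_tonic('C', MAJOR_THIRD)` gives E, `interval_from_tonic('F#', MAJOR_THIRD)` gives A#, and `interval_from_tonic('G', MINOR_THIRD)` gives A# (i.e. B-flat), matching the examples above.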

Exclusive use of major or minor tonalities can create too stark an effect – like using too much of a single colour. Often, it makes sense to combine major and minor tonalities in varying degrees throughout a song. A more balanced sound can be achieved by using some major chords and some minor chords – even having some song sections predominantly major and other song sections predominantly minor.

Consonant tonality sounds like the harmonic and melodic content is clear and unambiguous. An extreme form of consonance is a musical part where all pitched instruments are playing the same note. Octaves, fifths, fourths and thirds are all quite consonant.

Unlike consonance, dissonant tonality usually sounds crowded and ambiguous. This is usually caused by harmonic combinations that are complex and even clashing. Minor seconds, tritones, and major sevenths can often be combined to create dissonant tonalities.

Like major and minor tonalities, exclusive use of consonance or dissonance can sound too stark. Having some sections that are more consonant and other sections more dissonant is a great way to give your song a subtle sense of ebb and flow.


A basic primer on compression

by Kim Lajoie on October 6, 2014

Compression is a very important tool to a mix engineer. Unlike volume and EQ, however, compression can sometimes be difficult to hear. Where EQ adjusts the tone of the sound, compression adjusts the dynamics.

The simplest way to understand compression is as a process that automatically turns the volume down when the input sound gets too loud (and then turns it back up when the input sound gets quieter again). Basically, compression makes loud sounds quieter.

Typically, compressors will have four main controls:

  • Threshold – This is the sound level which is considered ‘too loud’. When the input sound gets louder than this, it is turned down. When the input sound later drops below this level, it’s turned back up. The lower the threshold, the more compression will occur.
  • Ratio – This is the amount by which the sound is turned down. It’s usually expressed as a ratio (e.g. 2:1) but you don’t need to understand the maths in order to use this. Quite simply, lower ratios (such as 2:1) mean the volume isn’t turned down much and higher ratios (such as 20:1) mean the volume is turned down a lot.
  • Attack – This is the speed at which the volume is turned down. Normally this should be pretty fast (low numbers). If the attack is too fast, however, sometimes the sound can become too soft or even distorted. A slower attack can make the compression more gentle, but if the attack is too slow the compression will be ineffective.
  • Release – This is the speed at which the volume is turned back up when the input sound level drops back below the threshold. Lower values (fast release) will make the compression more audible. High values (slow release) will make the compression smoother. Very high values will make the compression almost inaudible.
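To see how the four controls interact, here’s a bare-bones compressor sketched in Python. This is only an illustration of the idea – the dB-domain envelope follower and the default settings are my own choices, not how any particular compressor works internally.

```python
import math

def compress(samples, sample_rate, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=50.0):
    """Turn the volume down when the input rises above the threshold."""
    attack_coef = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coef = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))

    env_db = -120.0                      # smoothed input level, in dB
    out = []
    for x in samples:
        level_db = 20.0 * math.log10(max(abs(x), 1e-6))
        # Attack while the level is rising, release while it's falling
        coef = attack_coef if level_db > env_db else release_coef
        env_db = coef * env_db + (1.0 - coef) * level_db
        # Above the threshold, the output only rises 1 dB for every
        # `ratio` dB of input
        over_db = max(env_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)
        out.append(x * 10 ** (gain_db / 20.0))
    return out
```

With these settings, a steady full-scale signal ends up turned down by 15 dB: it sits 20 dB over the -20 dB threshold, and the 4:1 ratio only lets a quarter of that overshoot through.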

Compressors can be very versatile tools, and some have a distinctive sound (behaviour) of their own. As a starting point, try these approaches:

  • First, not all sounds need compression. Try compression, but don’t be afraid to go without if it’s not actually improving the sound.
  • For smooth compression on melodic instruments (such as vocals or other acoustic instruments), start with a low ratio and a threshold set so that the compressor is active most of the time. Set the attack as fast as you can without distortion and set the release to a medium speed. To make the compression stronger and tighter, raise the ratio. To make the compression smoother and gentler, increase the release time.
  • For tight control, use a high ratio and a low threshold (similar to above – so that the compressor is active most of the time). Use the fastest attack and release times you can get away with (without getting distortion or other strange sounds).
  • For punchy drums, use a longer attack time and medium release time. Make sure the threshold is set high enough that the drum hits well above the threshold but quickly drops below it. Higher ratios produce more extreme effects. Longer attack times will add more of the initial ‘thwack’ (the transient). The release time will have to be tuned by ear until it works with the length of the drum decay.


Considerations when choosing sounds for loudness

by Kim Lajoie on September 22, 2014

At its simplest, composition is the process of choosing sounds and arranging them in time. This process might vary depending on what kind of music you’re making, what instruments you’re using, how many people are involved, etc… but the fundamentals of composition are the same for everyone.

When choosing sounds for loudness, you have to understand what kinds of sounds and instruments sound loud. When arranging sounds for loudness, you’ll have to understand how to combine sounds in ways that maximise the desired effect. As discussed earlier, there are two fundamental attributes of sound relevant to the way we perceive loudness – length and frequency.

For sounds of equal recorded volume level, longer sounds are generally perceived as louder than short sounds. The effect isn’t linear, however. It’s true for very short sounds (i.e. less than about 500ms). For sounds longer than about 500ms, however, additional length doesn’t sound louder. You know this yourself – if you have a snare drum and an organ in your song and they’re both hitting the same peak level on the meters, the organ will sound much louder than the snare drum. That’s because the snare drum is very short and the organ notes are much longer. The effect only works for short sounds though – an organ note that lasts four beats will sound just as loud as an organ note that lasts eight beats.

It’s a similar story for frequency. Again, you know this from experience. If you have an instrument where all notes hit the same levels on the meters (such as an organ or a synth with an open filter), you’ll know that notes in the mid to upper-mid range (e.g. around middle C and above) sound louder than notes in the bass (e.g. a couple of octaves below middle C).