Kim Lajoie's blog

And now, a full song

by Kim Lajoie on July 24, 2015

Well, how about that. It worked out pretty well, considering.

So, Melissa heard the instrumental jam from three weeks ago and offered to add some vocals. She brought some lyric ideas to the first session and we workshopped the song structure. She then spent a couple of weeks practicing and preparing for the recording. In the second session we rehearsed for a couple of hours, then started recording. What you’re watching is the fifth take.

We also tried a few ideas. The Kaoss pad was a heap of fun. I set it to a pretty basic delay program and Melissa – having never performed with one before – had a ball scrunching and warping her vocals! In future I’d love to use the KP in a more detailed way – particularly using it as a four-channel looper and perhaps incorporating some of the more oddball effects.

The vocoder toward the end was a first as well. I’m really keen to use the Ultranova’s vocoder for this music, but I think there’s a long way to go. Next time I think I’ll program a patch from scratch, rather than just pick a preset. Also, I’ll try to make the carrier notes actually match the chords, rather than just sit on the tonic. 😉

Confession time – this is actually the second song and second vocalist I’ve brought onto this project. The first one we recorded, but I haven’t published because I want to publish a short interview alongside it (which is scheduled for next week). That first one sounded great in the room, but on playback I realised that it’s much too dry! I think I’ve addressed that for this second song, with a bit more delay behind the vocals and synths. It’s fascinating how the sound in the room translates to sound on the recording. I actually thought I’d have more trouble with the vocal level, considering that in the room I was hearing the acoustic sound alongside the processed (and to-be-recorded) sound.

Well, this is why we iterate.


Well, I made another funk

by Kim Lajoie on July 14, 2015


Not really a full song yet, but this time around I put some more thought into some kind of definitive beginning and ending. Some more buildups and breakdowns would do well, but in the absence of an actual song, I feel that it’s more like a jam. Also, it’s pretty clear I need to start thinking about the mix. The mix you’re hearing is very rough – not much more than basic levels and EQ. And while it’s not obvious in the video, I need to start paying some attention to gain staging too. Maschine was outputting very hot (as is the default), which ended up clipping my external recorder the first few times around. I got it sorted this time, but adding in live vocals is going to make it much more important.
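Gain staging is really just decibel arithmetic. As a rough sketch of the idea (nothing Maschine-specific, and the 6 dB headroom target is my own illustrative choice), here's how you might work out how much to trim a hot output so it stops clipping the recorder:

```python
import math

def dbfs(sample_peak: float) -> float:
    """Convert a linear sample peak (1.0 = full scale) to dBFS."""
    return 20 * math.log10(sample_peak)

def trim_db(sample_peak: float, target_headroom_db: float = 6.0) -> float:
    """Gain change (in dB) that puts the peak target_headroom_db below full scale."""
    return -target_headroom_db - dbfs(sample_peak)

# A signal peaking at 0.95 of full scale sits only ~0.45 dB below clipping:
print(round(dbfs(0.95), 2))    # -0.45
# To leave 6 dB of headroom for the recorder, trim by about 5.55 dB:
print(round(trim_db(0.95), 2)) # -5.55
```

The point is that a meter reading near full scale leaves almost no margin for a louder take, which is exactly how a "very hot" default output ends up clipping downstream gear.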

So what went well? I’m quite happy with the sound palette that I’m exploring here. Native Instruments finally fixed the problem of Quantise affecting CC data (stupid stupid stupid!), so I think next time I won’t have to be manually quantising each part after I record it. I also remembered to send the click to my headphones so it wouldn’t be recorded. That’s nice. I’m also getting the hang of doing scene/pattern/pad changes on the fly.

So what’s next? I’ve got a couple of vocalists lined up, and a few more who are keen to get in on it. It’s a whole new level of complexity, but it’ll be fun.


Another video – inching toward a full song

by Kim Lajoie on June 28, 2015

No, it’s not a full song. But it’s getting closer.

The style is a lot closer to what I’m developing over my next creative period. Gone is the deep atmospheric lumber. That was fun while it lasted. But now I’m taking a turn toward a kind of electro/funk (and eventually pop). Upbeat pop tempo, heavy syncopation, adventurous harmony. Tight crisp drums, enormous synth bass, giddy keyboard chords. And horns (including cheesy brass hits). And maybe some kitschy hip hop samples too.

Anyway, this is also a compositional development on from the previous video. The hardest part was performing the first scene change around 0:48-1:01. In the space of four bars, I had to create a new scene, select the previous drum pattern for that scene and select the polysynth sound again. And then while the new scene was rolling (and I was playing four-note chords with my right hand) I had to create a new pattern, extend it out to four bars, and then arm for recording. With my left hand, too.

Beyond that, I didn’t think much about structure. Each scene loops about four or eight times before switching around. That’s the next challenge – creating a song structure for the performance. It’ll mean having about 4-5 different sections in the performance (intro, verses, choruses, bridge, etc). And obviously I’m limited to recording one part at a time (or am I?). So I’ll have to have a think about how to do it in a way that’s multilayered enough to be interesting, but not so much that it’s tedious to watch me add all the layers and lose momentum.

And wouldn’t it be cool to include some live vocals too?


I made a YouTube video of me playing a YouTube video. And sampling it.

by Kim Lajoie on June 14, 2015

So, first of all, this is not a compositional masterpiece. It’s four bars with some funky guitar hits and seventh chords. If you want to know more about Glasfrosch, I reviewed them here and you can read more about them here.

You might be mildly interested to have a peek into how I sample and sequence using Maschine. Yes, it’s an on-again-off-again relationship, but right now we’re doing pretty well. What you’re seeing is my ‘B-Studio’; it’s a semi-temporary writing setup in my main recording space. At the moment it’s pretty much just a MacBook Pro, Maschine, an Ultranova and a few other bits and pieces that drift in and out.

To me, however, this is primarily a proof-of-concept. Not of the music, but of the workflow and multi-cam video production. In this case, the music isn’t terribly exciting, but that’s the point – so I could focus on the other aspects.

I’ve been watching Fact Magazine’s excellent Against The Clock series, and I’ve been thinking about live performance on video. In particular, I’ve been thinking about what’s interesting and engaging to watch. Not all parts of the production process are equally engaging. You probably wouldn’t want to spend several minutes watching me scroll through kick drum samples. You probably wouldn’t want to watch me try out a bunch of chord progressions that don’t work. But it is more interesting to see me find and chop a sample. And play the drum and keyboard parts.

Yes, it’s a live performance. And yes, it’s prepared and rehearsed. But unlike more traditional forms of live performance, there isn’t a clear-cut answer to the question of what should be prepared and what should be performed. You could have watched me audition drum kits. Or search a bunch of YouTube videos to find one worth sampling. Or I could have pre-recorded the drum patterns and/or keyboard parts, and simply triggered or unmuted them on camera. This one was all in-the-box, but I could have included some outboard analogue stuff. Or plugins. Or acoustic instruments. It’s interesting to step outside the music production ‘zone’ and think about how it looks to others.

Eventually, I’d like to use the format to demonstrate how I work with artists. And that poses similar questions, but on a bigger scale. How much of the process is interesting to watch? What’s the right balance of showing creation vs performance? Do you want a straight-up live-in-the-studio performance of something we prepared earlier, or do you want to see the song grow through each stage of lyric writing, structure, chords, groove, instrumentation, recording, editing, mixing, etc?

I’m looking forward to doing some more experiments.


How to know if you’re any good yet

by Kim Lajoie on May 27, 2015

See, here’s an interesting observation. I’m obviously not using enough EQ.

There’s a stereotype in our industry of the delusional artist/producer – the person who thinks they’re much better (and deserving of success) than they really are.

Interestingly, I hardly ever meet these people. In fact, I often come across the opposite – people who think they’re much worse than they really are. They most often come to me asking for assistance with mixing. Which, on the face of it, is pretty understandable – they might have been mixing for a couple of years, and I’ve been doing it for a couple of decades. But the way they talk about their mixes (prior to my listening to them) makes me brace for something nigh on unlistenable. And I’m pleasantly surprised when I hear something that, instead, is nigh on releasable. It’s almost certainly better than some other commercial releases I’ve heard. One day I should present some examples of songs that are successful or popular despite having a terrible mix.

So I started to think about some of the ways that I know my mixes (or any other musical work, really) are good enough. They’re not that surprising, so I won’t bore you.

  1. Listen to your intuition. Trust yourself. Do you like your own work? Does it make you bop your head? Does it make you jump up and dance? I think this is absolutely necessary. You must like your own work. Of course, you can have a terrible mix that you love (happens to me too), but I don’t think you can have a great mix that you hate. You need to like your mix first. If you don’t, then that’s when you need to switch to your analytical mind – work out what’s working and what needs fixing. Sometimes this might mean stripping everything back and starting from scratch. For me, it’s usually the drums – if the drums aren’t happening, the rest of the mix just won’t come together. If I mute everything but the drums, they need to sound right before I add anything else. In my early years I used to joke that ‘If the mix sucks, the drums aren’t big enough’. Obviously now I take a more nuanced approach, but the sentiment is the same. I need to like the drums before I can like the whole mix.
  2. Listen to similar music. These days I have a habit of listening to commercial reference tracks while I’m setting up a mix session. Right at the start of the session, I’m usually importing audio, renaming tracks, organising track folders and groups, trimming audio files, etc. Hardly any of these tasks actually need me to hear anything. So for those ten-fifteen minutes or so, I’ll have some reference tracks playing – at the same volume through the same monitoring environment as I’ll be mixing. It’s a great way to calibrate my ears, and I find that when the mix is about 90% finished and I have a listen to my references again, I’m much closer than I expected to be. A similar approach should also work if your focus is composition, sound design, recording, etc.

I didn’t think you’d be surprised.

The more interesting question is this: Why do you think your mixes aren’t good enough?

Maybe it’s because you listened to your references and they all have a certain je ne sais quoi that you can’t quite identify or pin down. And maybe they do. Maybe your hearing isn’t refined enough to accurately analyse and identify everything that’s happening in that mix. And that’s fine. You just need to spend a few more decades mixing and listening critically.

But it might not be that. Consider that you are hearing the end result of someone else’s work. And often, you are hearing only the end result. By contrast, you have heard your own work at every step of the way – from the raw recordings or presets or naked oscillators. You’ve heard every experiment and explored every cul de sac. And you’re hearing all this when you listen to the final mix, whether you like it or not.

It’s not a fair comparison, and the unfairness has nothing to do with the listening experience of someone uninvolved in the production process (i.e. your audience). No-one hears your own work like you do. And the sooner you accept your bias, the sooner you can work at counterbalancing it with more focussed objective listening. And less reliance on your intuition. Make no mistake – your intuition is very good for determining whether your work stands on its own. But it’s terrible for determining how it stands in comparison to others.


Just quickly record some vocals?

by Kim Lajoie on April 24, 2015

So, I was recently asked about recording some vocals for a song. It was an opportunity to describe the process of finishing a song and the various factors that determine how long it takes.

1. Record vocals. – This might mean just running through the whole song a few times, or it might mean doing each section individually. It could be just a few takes, or it could be 20+ (usually if it’s taking more than about twenty, I tell the vocalist to go home and come back next week).

2. Edit the vocals. – This means choosing the best sections of each take. Again, this could be simply confirming that the last full-length take was the best, or it could mean going through the song word-by-word and auditioning every take to determine the best one. – Sometimes this also means adjusting the pitch (and occasionally timing) of the vocals. It could mean applying some gentle automatic correction across the whole song, or manually correcting a few words here and there, or forensically adjusting every single syllable in the whole song.

3. Mix the song. – This means adjusting the balance between all the instruments in the song. If your backing track is simply a stereo mixdown, then this stage will be very quick – just controlling the tone and dynamics of the vocals to blend with the rest of the instruments. However, if the backing track isn’t mixed well, there won’t be much I can do to blend the vocals in – it’ll sound like the vocals are separate to the rest of the track. If you have all the instruments as multitracks (one audio file per instrument), then I can make sure the whole balance of the song sounds great, but obviously we’re talking about a full mix, which will take a bit longer.

4. Master the song – This means making sure the mixdown (which sounds great in the studio) will sound great everywhere else. That means adjusting the overall level, dynamics and tonal balance so that it is comparable to other commercially released songs.
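Two of the numbers a mastering engineer keeps an eye on are the peak level (how close the signal gets to clipping) and a loudness-ish average such as RMS. As a rough sketch (real loudness metering uses weighted standards like LUFS, which this deliberately ignores), here's the basic arithmetic:

```python
import math

def peak_db(samples):
    """Peak level in dBFS of a block of linear samples."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """RMS level in dBFS -- a crude stand-in for perceived loudness."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine wave peaks at 0 dBFS, but its RMS sits about 3 dB lower:
sine = [math.sin(2 * math.pi * i / 100) for i in range(100)]
print(round(peak_db(sine), 1))  # 0.0
print(round(rms_db(sine), 1))   # -3.0
```

The gap between the two readings is one way of quantifying dynamics: heavy limiting narrows it, which is why comparing your master's numbers against a commercial release tells you something useful.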



Enter the iPad

by Kim Lajoie on April 10, 2015

So, this is interesting.

In my quest to simplify my computer setup I’ve been reducing my plugins and other software to bare minimum. Nowadays I’m almost running pure Cubase 8 (fortunately it comes with some great stuff built in). And I’ve been adding more hardware – mainly EQ and compression for some different flavours and to save time by getting the sounds about 80% right upon recording.

As an aside, it’s pretty funny to hear about ‘mix as you go’ being some kind of new technique brought about by electronic musicians who compose/record and mix iteratively, rather than in separate steps. If you’re recording live instruments and you have any choice at all about the room, instrument position, mic choice and mic position (let alone outboard processing on the way in), then you’re already shaping the sound with an ear for the mix. These choices affect the tone of the sound just as an electronic musician might apply EQ or reverb to sounds as they build up the layers of their song.

Anyway, so I’ve been moving more and more of my mix processing outside the box. But occasionally I’ve felt the need for something a bit different, a bit off-the-wall. But it doesn’t seem to make sense to install a new plugin for the sake of a single project (or even a single song). I’m thinking about the long-term health of my computer here. I used to do this and ended up with dozens of plugins I’d hardly used (and, in truth, many were easily enough replaced by stock Cubase processors).

So, enter the iPad.

Is this the loosely-coupled multi-purpose processor with quasi-disposable software modules that I’ve been dreaming of? Maybe. So I got myself a cheap 2/2 line audio interface, dug up an old USB-MIDI interface (luckily class-compliant), patched them behind my rack, and now my iPad fits in just like any other outboard gear. And I’ve been experimenting a bit. This setup seems ideal for stereo effects processors (such as Flux and Amplitude), as well as monotimbral synths (such as Launchkey and Thor). I’m sure there are plenty of other interesting apps waiting to be tested. The iPad doesn’t quite seem ready to be a multitimbral sound module (and there’s certainly nothing as sophisticated as HALion Sonic 2, which is my default sound source for most things). But surely that’ll change soon. Maybe something like GarageBand or Beatmaker can already operate as multitimbral modules? And after well over a decade of VSTi, do we really need to return to the days of having to manage MIDI timing slew for high-polyphony external modules? And is this any different to having a MacBook as a separate sound module?

So, I’ve got some interesting exploring ahead. :-)


The importance of physical proximity

by Kim Lajoie on March 11, 2015


Let’s talk about being close with your artist. Like, really close. Like in the same room together.

I recently had a couple of interesting experiences.

One of my previous artists approached me to produce her next release. We’d worked together before, and it was one of the best working relationships I’d had with an artist. The songs were great, she was clear in her creative direction and was exceptionally pleased with my work (which included arranging and performing all the non-vocal parts). With these new songs, she wanted to try giving me her demo and reference tracks and letting me develop the songs without her attendance. While I normally don’t do that kind of thing, we agreed to do it. If producing remotely was going to work with any artist, it was going to work with her.

Well, it didn’t take long to get bogged down. There are some kinds of conversations that are very easy to have in person but almost impossible to have in writing. Discussions about creative direction are almost always like this. It’s not something that can be communicated in a one-directional way. We have to request clarification. We have to test each other’s understanding. We have to play audio examples (and sometimes sing or play along). We have to try out different ideas and then talk about them.

The second interesting experience was an unrelated discussion I had with a friend of mine who is also a producer and mix engineer. He mentioned that he doesn’t allow his client to attend postproduction – including vocal comping, mixing and mastering. Everyone’s got their own preferences, but it caused me to reflect on my own approach. I wouldn’t dream of comping a vocal or mixing an artist’s song without including him/her in the process. Every singer I work with has opinions about which parts of each take they want to use. Every artist I work with has opinions about the mix balance. Having them there as I work ensures that they can voice their opinions (and we can discuss them if necessary) as I’m working. It means we can get it right the first time (I almost never get revision requests).

Doing that work on my own seems like a really good way to waste everyone’s time going back and forth with revisions. Or a really good way to leave the artist unsatisfied with a product they’d be happier with if they’d been part of the process.

Producing and engineering isn’t a dark art. It’s not magic. It’s having the right tools and expertise.

The more involved the artist is, the better result they’ll get.


P.S. If you disagree with that last statement, you’re grossly underestimating your artist’s ability to learn about and appreciate the production process. Of course, not everyone’s an expert. And I’ve had my fair share of dumb requests from artists who didn’t know better. But part of my job is to educate and inform artists to help them make better creative decisions.

How not to be a producer

by Kim Lajoie on February 14, 2015

So, I came across this gem last night. And isn’t it just amazing. This is an excellent example of how not to be a producer.

The producer and the singer are meant to be collaborating on writing a new song to demonstrate Ableton Push. They hadn’t met each other prior to the session, and they hadn’t prepared anything beforehand. So, they’re both being put on the spot, and we get to watch the creative process.

So far so interesting.

Except this guy is meant to be a producer. He’s introduced as a ‘professional producer’. Not ‘some guy who makes beats’. He’s a producer. And yet:

  • He doesn’t discuss the creative direction of the track with the singer – even basics like tempo, vibe, instrumentation, etc. He just goes ahead and builds a track that he likes.
  • He starts the session with twenty minutes of no musical communication or collaboration. The singer uses that time to start writing some lyrics, but she’s essentially on her own for those twenty minutes. During this time he gets stuck into subtle adjustments (such as parallel compression, groove nudging, effect automation).
  • Even when vocal recording begins, there’s still almost no collaboration – He doesn’t provide feedback on her lyrics, melody or vocal performance. Nor does he invite feedback from her about the instrumental part. They don’t contribute to each other’s creative work.

If he were ‘just’ a musician, it’d still be pretty disappointing. Can you imagine writing a song with a guitarist, and he spends the first twenty minutes fiddling with his pickup/amp settings? And then he says he’s come up with three chords and asks you what ideas you came up with? Sounds like amateur hour.

But he’s more than a musician. He’s a producer. The title ‘producer’ has many meanings, but ultimately it’s someone who has much more responsibility than a musician in making a recording happen. The producer is running the show. And in this video, he certainly is running the show. He’s just doing a pretty poor job of it.

If you’re a producer, your priority should be enhancing the creative output of your artists and musicians. Find out what creative direction they have in mind – really try to understand their taste and style, work with them. Capture the lightning – try to work as fast as they do, minimise the time they spend waiting around. Raise them to new heights – use your skills and experience to improve their songwriting and performance (while also being appropriately sensitive).


Always remember the emotional connection

by Kim Lajoie on February 9, 2015

Why do you make music?

It’s probably not the fame and fortune (well, at least not the fortune). It’s not the stable income or cosy retirement. I hope it’s not because your parents told you to do it.

I’ll bet you’re making music because you love it. You love music, and creating your own music is a logical extension of expressing that love.

But what does that mean – to ‘love’ music? In this context, what is love? It’s not the love you have for another human being – a partner, a parent, a sibling, a child.

Music is a different kind of love. It resonates with us. When we hear music we love, we feel something amazing. It’s not easy to describe, and not everyone feels it. But chances are if you’re reading this, you know what I’m talking about.

Of course, music isn’t all feelings. We connect with music on an intellectual level too. As creators ourselves, we are constantly dissecting and analysing the music we listen to. We are trying to understand how someone else made their music so great so we can figure out how to replicate it. Or we’re trying to understand how someone else made their music so terrible so we can figure out how to avoid it. As creators ourselves, we probably connect with music more intellectually than most people. So much so that it’s sometimes easy to lose perspective and get lost in the mechanics of making music.

It’s important to always remember the emotional connection. Remember how good it feels to connect with music! It’s glorious – don’t ever lose that feeling!

And don’t let your audience lose that feeling either.