‘Bouncing’ to audio is the process of rendering realtime generated audio to audio files. Typically, ‘realtime generated audio’ means the output of software synthesisers, samplers, hardware sound generators, or even audio files being processed by plugins or hardware effects processors. After bouncing, these audio sources become audio files on your hard drive. The audio files are a snapshot of how those sources sound – the same way a tape recording is a snapshot of a performance.
There are a number of different terms for this. Often you’ll see it referred to as ‘rendering’ or ‘exporting’, or even ‘loopback recording’. The term ‘bouncing’ harks back to multitrack tape recording systems, when the process involved re-recording audio from some tape tracks onto one or more other tracks. The audio was ‘bounced’ from track to track on a tape system.
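In code terms, bouncing boils down to: take a generated signal, run it through its processing chain, and write the result to a file. Here's a minimal sketch of that idea in Python, using only the standard library. The source, the effect and the file name are all hypothetical stand-ins – a real DAW's render engine is far more involved – but the shape of the process is the same.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def synth_note(freq, seconds):
    """Stand-in for a 'realtime generated' source: a plain sine tone."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def gain_effect(samples, amount):
    """Stand-in for an effects plugin: a simple gain change."""
    return [s * amount for s in samples]

def bounce(samples, path):
    """'Bounce' the processed signal to a 16-bit mono WAV on disk."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)            # 16-bit samples
        wf.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wf.writeframes(frames)

# Source -> effect chain -> audio file: the snapshot is now fixed on disk.
track = gain_effect(synth_note(440.0, 1.0), 0.5)
bounce(track, "bounced_track.wav")
```

Once the file is written, the synth and the effect are no longer needed to play the track back – which is exactly the trade-off the rest of this article is about.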
Doing this can be a good idea for a number of reasons.
- It can help conserve resources. In a DAW environment, it can allow you to conserve CPU (by rendering a track that uses CPU-hungry plugins, then deactivating those plugins). In a hardware environment, it can allow you to use a specific piece of equipment – either an instrument or an effects processor – on many tracks at once.
- It can make a project more portable. By rendering tracks, you can bring the project files to another studio – even if that studio doesn’t have the same plugins or hardware that you do. It can even allow a project to be shared between different DAW platforms, or between studios built around hardware, software, or a mixture of both.
- It can help you make decisions. Rendering tracks locks you into a particular sound and performance. While realtime generated audio allows you to continually adjust the track (and, for MIDI, the performance), rendering those tracks to audio files creates a snapshot that cannot easily be changed. This can be made part of a project workflow to mark the end of one stage and the beginning of the next.
Obviously, there are a couple of downsides. One is track space. In a DAW environment, rendered audio files take up additional hard drive space. This is usually not an issue, because hard drives are cheap and high-capacity. It’s more of an issue with hardware recording systems, because some have strict limits on how many tracks are available at once.
The other downside is that it prevents further editing of the track – both the effects processing settings and (for MIDI) the performance. This is usually mitigated by keeping a deactivated copy of the original realtime generated track.
Personally, I use track rendering at two points in my workflow:
- When the artist brings their demo to my studio. My artists work on a variety of platforms, so I ask them to render each track before bringing the project into my studio for further work.
- When using hardware instruments, hardware effects processors, or CPU-heavy plugins. This allows these tools to be used many times in a project. It also allows projects to be recalled at later sessions (I use some hardware devices that are very complex and have no presets). I also use a CPU-heavy amp simulator, which I routinely render to audio as it’s being recorded – because I prefer not to have restrictions on how many guitar parts I use (and it’s no different to recording an audio file of a physical amp).
The decision of if and when you render tracks to audio depends on your project workflow, your studio resources and your preferred style of working. There are no generic rules – just what works for you.