Continuing our discussion from the previous article, here are some further thoughts that may be of use, humbly submitted for your viewing (and listening) pleasure! There are a number of additional topics that play important roles in creating a successful mix. To make the best sense of these concepts, I'm going to reference specific timings within a recent project, which can be found on SoundCloud here:
This article will take you through the mix process for this tune. We know, from Part 1, how to get audio into Auria, so from here on we will concentrate on getting the mix set up so it's easy to work with.
Additionally, I will talk about the concept of spatial and temporal placement within the mix and how to achieve depth and movement. We'll also look at the specific processing of some of the tracks, especially as it relates to these concepts.
Auria is not just a world-class DAW, it is also a great teaching tool. For a small investment (well, well worth it in my opinion), you have a really well-equipped recording and mixing solution. Not to mention how much damn fun it is! I mean, just see what I have under my arm for this project:
This project was originally started in Propellerhead’s Reason 6.5 on my desktop Mac, but when I started to use Auria, I ported the tracks to the iPad to work on them in Auria.
As we saw in my Photoshopped picture above, the project consists of twenty-three tracks. We’ll have a look at them now at a readable size in these screenshots:
I use separate tracks for every instrument, so I can properly balance and adjust each as required. Notice too that I assign every track to a subgroup channel.
Subgroups are eight separate submix busses to which you can assign any track. This allows similar tracks to be summed together for handling by a single subgroup channel strip. Using subgroup faders to do macro-scale level changes is a common and productive mixing methodology. (This is an especially useful functionality to have when doing live mixing where quick access to the whole mix can be placed in a small area.)
In this diagram, from the Auria User Guide, we see that the Subgroups provide a location between the Channel Strips and the Master Strip for control and Insert access.
As may be seen in the above screenshots, tracks 1 through 8 are all drums and percussion. Because the kick and snare are so important, they share a submix group (there are no other drums), and the remaining cymbals and percussion share another subgroup. The bass, because of its importance, has its own subgroup.
The remainder of the tracks split into subgroups for: Piano, Pads, Leads, Vox and Instruments.
By having all the tracks appear as part of a subgroup, I can keep the mixer scrolled to the right and manage all project levels in one place in the GUI.
Don’t forget to go into the Settings Page and set the Tempo to match any outside clock sources. This will make sure that Auria’s Edit Page grid lines up with your sequencer’s source file, making editing much easier.
If you need something smaller than ‘1/16 Beat’, use the ‘None’ selection and then zoom way in to get the best resolution and you can place the audio to yield the effect you’re after. In critical, freehand editing like this, the Undo button is your best friend.
You can duplicate a track (Menu > Add Track; copy the original track’s audio regions to the new track), and then use the move function to offset the duplicate slightly to widen it out for a doubling effect.
This ‘None’ selection is also great for nudging a drum hit slightly behind or ahead of a beat for a laid back or tenser effect.
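To put those nudges in perspective, here's a minimal sketch (plain Python, with hypothetical values; Auria's Edit Page works in beats and bars rather than raw samples) of how a millisecond offset translates into samples:

```python
def ms_to_samples(ms, sample_rate=44100):
    """Convert a timing nudge in milliseconds to a sample offset.
    Positive values push the hit behind the beat (laid back),
    negative values pull it ahead (tenser feel)."""
    return round(ms / 1000.0 * sample_rate)

# A 10 ms "laid back" nudge at 44.1 kHz:
print(ms_to_samples(10))    # 441 samples behind the beat
# A subtle 3 ms push ahead of the beat:
print(ms_to_samples(-3))    # -132 samples (ahead)
```

Even a handful of milliseconds is audible as feel, which is why zooming way in with snap set to 'None' matters.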
As mentioned in Part 1, upon commencing a project, I go to the Master Channel and set up the FX Send Inserts. This is where I set up a compressor to act on the overall mix. The BussPressor works well for this, but on this (and most projects now), I use the FabFilter Pro-C compressor. For now, I’m finding the ‘Gentle Wide Mastering’ preset works well.
From there, the signal goes through the PSP MicroWarmer.
This combination of devices provides a gentle compression while enhancing (warming) the output by adding analog-sounding tape saturation (harmonic distortion). Once this effects chain is in place, then I’ll start to construct the mix.
A PSP Echo, set for some synced repeats that can be targeted by specific instruments, gives a nice, very usable effect.
Building the Foundation
Drums and bass, the cornerstones of most tunes, are what we start listening to, both individually and together, to establish a good mix between them. It is this base upon which we add layers to decorate and then polish the song.
In the instance of this song (ported from Reason), I decided to replace both the existing kick and snare drums.
Seen above, the ChannelStrip itself is not used, but the first insert device is the Drumagog 5 plug-in. This plug-in allows easy and effective replacement of existing audio drum hits with samples from within the plug-in.
Drumagog 5 lets you adjust the triggering, offers dynamic tracking, and can also blend the original and triggered samples.
The Pro-Q equalizer following the Drumagog 5 boosts the kick at 50Hz for effect.
For the snare, we have a similar processing setup, as I was again not happy with the original snare sound. So again, we use Drumagog 5 to replace the snare hits.
Here’s the Drumagog 5 interface and you can see the samples (middle-top), the triggering pane (lower-left), and the controls for blending etc. (bottom-centre-right).
As we did with the kick drum, we use a Pro-Q equalizer after the Drumagog 5 to give the snare a distinctive tone.
The equalizations you see on the kick and snare were chosen by using the Pro-Q’s Analyzer function, set to Pre-EQ. This shows you, via a grey background trace, the spectrum of what you are listening to.
This analyzer function is invaluable for “seeing” issues, both positive and negative. Resonances are easily spotted, and even elusive ones can be found by sweeping a narrow-Q filter with a high boost (or cut, whichever is more appropriate) back and forth. Once identified, they’re easily dealt with.
By using shelving and moveable, broader cut EQ points at both the low and high end, you can easily isolate any area of the spectrum for usage.
For the bass, first a HP Filter roll-off below 30Hz, then some Pro-Q equalization.
What you see here (and hear in the tune) was arrived at by first just getting a rough level mix between the three instruments, then listening to them against each other (in different combos), as well as solo.
What I listen for is a balance that works to provide the whole bottom end that drives the song. The EQ points that I choose help to emphasize a tonal range that is as unique as possible to make sure each instrument is an individual character.
(As you move further into a mix, this process of individualizing instruments becomes more important, and in some cases more difficult. Don’t forget about harmonic mixing, where frequency1 ± frequency2 = frequency3. Sometimes you will want to mix tones to produce harmonic effects, but it also means that atonal clashes can occur.)
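As a quick illustration of that arithmetic (the tone values here are hypothetical, not taken from this mix):

```python
def intermod_products(f1, f2):
    """Simple sum and difference frequencies produced when two tones interact."""
    return f1 + f2, abs(f1 - f2)

# A 110 Hz bass note against a 220 Hz piano fundamental (an octave apart):
total, diff = intermod_products(110, 220)
print(total, diff)  # 330 and 110 -- both harmonically related, so they blend

# Two unrelated tones produce products that land nowhere useful:
total, diff = intermod_products(110, 163)
print(total, diff)  # 273 and 53 -- neither relates to either source tone
```

When the products fall on harmonics of the sources, the combination reinforces; when they don't, you get the atonal muddiness mentioned above.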
NOTE: I might normally have individual compressors on the kick, snare and bass, but in this case, I had already compressed those tracks in Reason, so there was no need to do so in Auria.
Speaking of compression, one of the things that I’ve learned, especially thanks to effects plug-ins, with their buy-once-use-many generosity*, is that with access to lots of compressors, you can split the processing to great advantage. For my kick/snare/bass tracks, for example, I typically have a compressor/limiter on the channel strip, then an additional compressor on the subgroup, and then the final compressor on the mains. This means that I can distribute the compression so that no one compressor has to work too hard and possibly distort.
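The arithmetic behind this is simple: gain reductions applied in series add in dB, so three gentle stages achieve the same total as one compressor working hard. A quick sketch (the dB amounts are hypothetical, not measured from this project):

```python
def total_gain_reduction_db(stages_db):
    """Compressors in series: their gain reductions (in dB) simply sum."""
    return sum(stages_db)

# 3 dB on the channel strip, 2 dB on the subgroup, 1 dB on the mains...
distributed = total_gain_reduction_db([3, 2, 1])
print(distributed)  # 6 dB total -- the same as one compressor doing all 6 dB,
                    # but no single stage is pushed hard enough to distort
```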
*I remember when equipping a demo studio back in the 70’s that we could afford only a Tascam 12 x 4 x 2 console, Tascam 1/2” 8-track tape deck, and a single outboard rack-mount parametric EQ, a stereo spring reverb (both with blue faces, but I can’t remember the brand), a lone Urei 1176LN limiter and an Eventide flanger. Now, I have a ton of virtual devices and I can use them endlessly (CPU willing). I LOVE digital!
Sonic Placement and Movement
When you set up to do a 2.1 mix (2 stereo channels, 1 subwoofer), you should sit comfortably before your monitors, or with good headphones, and imagine a soundstage before you. Pretend you’re seated at the prime mixing location of a live gig, you see the stage with the players on it and they range across the stage left-right. Where they are in that left-to-right spatial plane is determined by the position of the Pan control on the channel strip.
Generally speaking, and in the case of this mix, I set all three main instruments: kick, snare and bass, to dead centre on the pan pots. The reasoning is that the combined instrument spectrums share a nice, wide space to act as the bed of the song.
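Under the hood, a pan pot maps one knob position to a pair of left/right gains. I don't know Auria's exact pan law, but a common equal-power law looks like this sketch:

```python
import math

def equal_power_pan(pan):
    """pan ranges from -1.0 (hard left) to +1.0 (hard right).
    Returns (left_gain, right_gain) under an equal-power pan law."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

# Dead centre: both channels sit about 3 dB down (gain ~0.707),
# so the perceived loudness stays constant as you sweep the pot.
left, right = equal_power_pan(0.0)
print(round(left, 3), round(right, 3))  # 0.707 0.707
```

This is why centring the kick, snare, and bass keeps their combined energy stable and anchored between the speakers.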
3D spatial placement is also made possible by delay (and, of course, reverb, a more sophisticated form of delay). We speak of delay and especially reverb in terms of being “wet” or “dry”: “dry” meaning a signal with no reverb, and “wet” one that is all reverb return. By varying the dry/wet balance, we can introduce varying amounts of reverb (or delay).
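That dry/wet balance is just a linear crossfade between the two signals; as a minimal sketch (operating on single hypothetical sample values):

```python
def dry_wet_mix(dry, wet, mix):
    """Blend a dry and a wet signal value: mix=0.0 is fully dry, 1.0 fully wet."""
    return (1.0 - mix) * dry + mix * wet

print(dry_wet_mix(0.8, 0.2, 0.0))  # 0.8 -- all dry, no reverb heard
print(dry_wet_mix(0.8, 0.2, 1.0))  # 0.2 -- all wet, reverb return only
print(dry_wet_mix(0.8, 0.2, 0.5))  # 0.5 -- an even blend of the two
```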
Reverb and delay allow us to introduce a sense of forward/backward to a signal, as well as implying the size of the space you are emulating. With a short, slap-back echo, one may envision the instrument quite close in a small, bright room; with the Convolution reverb, one can be in a huge cathedral or cavern.
Using combinations of delay and reverb can produce effects such as flanging and chorusing. Chorusing is a lovely effect for adding movement to a track. Setting a slow-sweep stereo chorus provides great stereo depth, and by adjusting the Spread widely you get an auto-panning effect that I just love for crash/splash/china cymbals as the cymbal hit pans across the stage. (Have a listen at 1:28 in the audio track.)
Reverb can be very tricky to use effectively. The most common mistake is using too much. Beware: a composition paddling desperately along in a sea of reverb or echo is generally named Titanic.
Think first about what kind of space you require for a specific instrument, and then use the best plug-in for it (in this case the Convolution Reverb that is standard with Auria). Find a preset that is appropriate, then do yourself a favour: turn the Mix knob all the way to the left (no reverb effect), and only then increase it, very slowly, until you reach the level you think is OK. Now listen for a while, then ease the Mix knob back a bit more and try that as you start to introduce other instruments (and perhaps their reverb signals).
NOTE: With the reverb on the Aux bus, you’ll want to have the Mix level set to 50% or higher to keep the gain stages reasonable. Gain staging refers to keeping the gain through stages/devices (those that affect gain or have a gain control) fairly equal, so that no stage is starved at the input or has an output too close to clipping.
I typically set up a Convolution reverb on Aux 1 to be used as a general overall reverb. Instruments that just need small amounts to “seat” them into the mix, or occasionally to give them a quick boost for effect, use this reverb. Remember that this is for easy, general usage. Don’t be tempted to run more than a small number of tracks through it, as you’ll start to get a mash of reverberations that muddle rather than seat.
Specific instruments, like leads and/or strong rhythm elements may need to have different, more tailored reverb added to them and this is where we want to use a reverb on the channel strip insert point. This way, we can pick a reverb or delay that is targeted for that specific instrument. Reverb on a lead instrument serves to make the instrument sound louder by making it fuller in both a temporal and tonal sense as the reflection waves coincide and reinforce.
Space in the Mix
Continuing this theme, let’s have a look at some of the other tracks that have specific processing:
The ‘Solo’ instrument can be heard in the SoundCloud file starting at 1:09, and uses a FabFilter Saturn plug-in to add a gentle, warm distortion to widen it out a bit.
Next, we send it into a ClassicVerb Pro to help spread it more and sit it back in the mid distance.
NOTE: The original sound I used was processed with a moving pan chorus in Reason and came into Auria this way.
Next, I needed to tonally ‘place’ the Layer Piano and the Strat instruments (both playing the same note sequence) so that they would complement each other yet stand distinct…Pro-Q!
A carefully planned and executed mix is a dance of tones and timbres, beats and textures, all over a duration, and all within a space that you define and control. Different participants of the dance enter and exit the soundstage, and may move across it as well. They may also be placed or moved front/back through the soundstage. The ability to choreograph this dance is what makes a mixing engineer great and elevates a tune.
The best dances may have an airy feel to them, even when anchored over a solid base. The lead instruments should glide in and out of the soundstage, supporting instruments cocooning them within a surrounding soundstage presence. The time that the players spend on the soundstage should not create a busy feel, but rather a shifting collage of elements that weave around and amongst each other.
Listen to your best reference tunes with respect to this dance and see if my meaning doesn’t become clearer.
One of the things about software DAWs that makes our lives much, much easier is the degree to which almost anything can be recorded… and then played back. Auria records automation from a touch. Once the ‘W’ key arms the write capability, Auria awaits a touch on any control, which it duly records. Now disarm the ‘W’ key, and as long as the ‘R’ key is pressed and lit, the automation will be read and played back.
Below you see the Master track with a volume automation. The song starts off strong, then drops back, then begins to build, up again, then down, and finally a strong climb to a crescendo at the finale. This is just a simple example. I also use automation on individual tracks to mute parts, direct panning, adjust levels, and on and on.
The automation functionality gives us the equivalent of many arms, each with many hands, turning this, adjusting that… we can build up the mix part by part, performing bold or intricate adjustments, and have the ability to re-edit it at any time. Astonishing!
One of the most important things I can tell you is don’t abuse your ears. When I was touring with April Wine in the 70’s, mixing front-of-house sound, I mixed loud (as they wanted, the music demanded and shit, it felt good then), but I now have to put up with reduced upper end sensitivity due to tinnitus. If you intend to make a career in sound/music then use reasonable listening levels and if that’s not always possible, then use reasonable ear protection.
‘Listening’ for mixing means learning to listen at different levels, but predominantly low to middle levels. This will allow you to listen without getting ear fatigue. It also allows you to gauge the difference in low end between that low-to-middle “reference” level and higher levels (where, per the Fletcher-Munson equal-loudness contours, bass becomes relatively more prominent).
Listen for only so long before you give yourself a five to ten minute break (away from the DAW). It is really important to have these frequent breaks as it will help to highlight issues much more readily than if you plow relentlessly on.
Listen on as many output devices as you can: monitors, headphones, car radio, iPod earbuds (Egads! More like earblugeons), an iPod speaker dock… you get the idea. Listen in mono and see if the mix still works. The more devices you can get it to sound good on, the better the mix will be.
For those of us who don’t have the luxury of great monitors, try for the best headphones you can afford. Personally, I recently bought Sony MDR-1R over-the-ear phones. I auditioned about 15 different pro-level headphones before deciding on these because of the enhanced bass performance. The additional bass response makes it much easier to get a proper low-to-high frequency balance that I can relate to my speakers.
BTW, even if I could afford good monitors, my wife would never allow me to use them. She’s got a thing where she can’t read/do anything while there’s music playing. And shouldn’t we be honest and ask: why the hell would anyone want to have to listen to the same bloody piece of music (!?!) over and over and over again!!! OK, this is where we all take a moment and go and give our long-suffering Significant Others a kiss and a hug and thank them sweetly for their immense tolerance.
For those of us without good reference monitors, don’t despair… use what you have and use reference tracks that you know well and are germane to what you’re mixing. If you can recreate the sound that you hear and know from the reference track (as you hear it on your speakers/phones), then your mix should be close to what you expect to hear. This will be especially true if you’ve done your due diligence and listened on as many different speakers as possible.
I also suggest that you do your critical listening with your eyes closed, or at least not looking at the Mixer or the Edit screens. It’s too easy for your attention to be captured or diverted visually, and you’ll start obsessing about this or that rather than listening critically.
Finally, have others listen to your mixes too. And not just musicians, ask a variety of people and see what reactions you get. Remember that a good mix needs a good audience.
I hope these thoughts will spark some creativity for you. Enjoy!