
When is it time to stop mixing and start mastering? (Part 2)

This is the second part of a three-part article. View Part 1 here. View Part 3 here.

Can you hear the vocals properly?

Or, in fact, whatever the focal point of your track is. As I write this I'm listening to a duo of harp and Spanish guitar, and the huge sound of the harp swamps the guitar a little at times, which is a problem when the guitar takes the melody. Lead vocals are a particular problem, because the more you get into a mix, the more you lose perspective. Perhaps it's your own vocal, or you've simply got to know it very well from working on it, but once you know the words and the melody intimately, you will hear them more easily than a brand-new set of ears would.

Try this experiment: turn the TV down to a level where you're really struggling to hear the dialogue clearly. Now wind it back a minute and switch the subtitles on. Obviously you know what they're saying now because it's written on the screen, but notice that you can also hear them better! That's not about levels, that's about perception, and it applies to music too. The more familiar you are with a vocal part, the better you will hear it, even if it's too quiet.

If you conclude the vocals are getting lost, then the question becomes: do they just need turning up, or are other elements getting in the way? A crash cymbal, perhaps (bloody drummers), or one of those huge wall-of-sound guitar parts that eats up every frequency going (bloody guitarists)? If so, you'll need a more nuanced approach, probably involving EQ and compression.

Of course, once you've tried something, you're back to judging whether you've fixed it or not. If you think you have, and time allows, my advice is to go to the pub and leave it until tomorrow, as what you now need most of all is a fresh pair of ears. There are other approaches to take, but I'll save them for a dedicated article.

Even though you're the mix engineer, don't be afraid to get a second opinion on things like this. It doesn't have to be another engineer, or even a musician: just someone whose opinion you trust and who's into their music. Their brand-new perspective can be invaluable when you've been stuck in a small room with the track for two days. If your lead vocal is too quiet, or if in fixing it you've gone too far the other way (not uncommon), they will be able to tell you. Don't rule it out!

The last thing I will say about vocal level is that if you're struggling to find exactly where to place it, that's partly because it's really bloody difficult; it's something I've always found incredibly tricky myself. Just persevere and you'll get there.

Do the instruments have clarity and sit in their own space?

One of my favourite things to do, particularly with a track I know intimately, is to focus on one instrument and listen to its journey through the piece of music. I have been known to keep replaying a track and focusing on a different instrument each time. It's fantastic training for your ears and gives you more understanding of how great tracks fit together.

Actually, as good an example as any has just popped up on the radio as I'm writing, so that will do: Alanis Morissette. No, not that song… the one about irony, whose lyrics contain not one correctly ironic example, which paradoxically makes the song itself ironic and correctly titled… no, I'm talking about You Oughta Know. It's a fantastic mix. It's a pop tune, so the vocal is upfront and the main focus, while the other elements all lock together yet each have their own space. To be fair, I think the drums are being kept deliberately small so that the guitar and bass can shine, but what's wrong with that?

I'm not a particular fan of the song, but I could listen to that Flea bassline all day, and I have no problem picking it out over the other instruments whenever I want to. Could I do that with the instruments on your track?

Does the track feel “glued together”?

Following on from that last point, but slightly contrary to it: as well as the instruments having separation and clarity, do they feel united and locked together?

Master bus compression is an excellent tool for achieving this, as are carefully chosen reverbs. There is often reverb on a mix that you don't realise is there, but were it taken away, instruments that were recorded at different times in different spaces would suddenly sound like it. A shared small-room or chamber reverb, with a carefully chosen pre-delay, can work wonders at gluing things together.

Can the track still “breathe”?

This is more abstract, but "breathe" seems as good a term as any. It's partly about not cramming in too many different elements, but I mean it more in the sense of: can the track push and pull as it needs to? Compression is for taming dynamics, not obliterating them. If the instruments get louder in the chorus, it's important to make sure they still have room to do so.

I'm a big fan of using quite a few different compressors, each doing only a tiny amount. Like layers of paint, I think you get a better result with multiple thin coats than with one thick one, but it does mean you have to be super careful not to go over the top, especially if you're also adding mix bus compression. You won't leave any room for the mastering engineer, and besides, if the musicians got louder, then so should the track. Compression is for tightening things up, not flattening everything like a pancake.

Does it transfer between different speakers and headphones?

Of course, the mix will sound different on each, but does it still work? Does it translate? Bass is usually the trickiest element. It's all very well having some thumping bottom end that sounds fat (phat?) on your big monitors, but what happens when you switch to a small Bluetooth speaker? Does it just disappear? There will be the odd dance track where that's somewhat unavoidable, but what if you're mixing a bluegrass band with an upright bass? There's a ton of upper-mid content that gives those bass notes their tone and definition, and it should be apparent whatever you're listening on.

Is the audio technically sound?

Are there any clipping or phase issues? If your track is clipping in places (going over 0 dBFS), then you need to dial it back a bit. I know there is a technical retort along the lines of "it's OK for a certain number of samples", and certain DAWs don't actually clip until +3, but my response would be: why are you going anywhere near 0 dBFS anyway? Chasing zero is a game for the mastering engineer. If the RMS of your track is generally hovering around -20 dBFS with peaks hitting -5 dBFS, that's plenty loud enough.
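To make those numbers a little more concrete, here's a rough sketch of how peak and RMS levels in dBFS can be calculated from a block of float samples (where full scale is ±1.0). The function names are my own for illustration; a real meter works on short windows and your DAW already does this for you.

```python
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Peak level in dBFS: 20*log10 of the largest absolute sample."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak) if peak > 0 else -np.inf

def rms_dbfs(samples: np.ndarray) -> float:
    """RMS (average) level in dBFS: 20*log10 of the root-mean-square."""
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(rms) if rms > 0 else -np.inf

# A full-scale sine wave peaks at roughly 0 dBFS,
# while its RMS sits about 3 dB lower.
t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
print(peak_dbfs(sine))  # close to 0 dBFS
print(rms_dbfs(sine))   # close to -3 dBFS
```

The ~17 dB gap between -20 RMS and -5 peaks mentioned above is your crest factor: the dynamic headroom the mastering engineer will thank you for leaving in.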

I can't go into too much detail about phase here, but take a look at a correlation meter (range -1 to +1) and check that you're at the very least staying on the plus side of zero.
