When mixing my recordings, I can't get the vocals loud enough without killing every other instrument

#1
Okay, so I record songs for fun, and with everything that has more than one guitar track (+ bass + drums), I get the problem that the vocals are always too quiet. Like, I-can't-hear-a-thing quiet. When I turn them up until I can hear them through all the other instruments, they clip heavily, and so does the entire mix, and if I turn down all the other instruments until I can hear the vocals without clipping, the guitars are too quiet to really hear.

Right now, I've got this problem with a mix that has rhythm guitar and two complementing lead guitars (panned 60% left/right). The rhythm guitar is turned down a lot, it's only a noise in the background, and if I lower the volume of the two leads enough that the vocals come through without fucking up the mix, I can't hear the lead guitars in a satisfying way any more.

Anybody got a tip on how to fix this for me? I would hate to resort to a limiter to fix this, it's such a half-assed solution.

Greetings, Hashtag
Last edited by HashtagMC at Mar 5, 2017,
#2
I assume you are already using appropriate EQ/compression for the vocals, so without seeing the session, one other thing I would suggest is automation. It isn't unheard of to pull individual syllables or words up and down depending on relative levels.
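The automation idea can be sketched in code. This is a minimal illustration in plain Python, not any DAW's API; the function names and breakpoint values are made up for the example:

```python
# Sketch of vocal level automation: a gain envelope (in dB) defined at
# breakpoints, linearly interpolated per sample and applied to the track.
# All names and values here are illustrative, not from any real DAW.

def db_to_linear(db):
    return 10 ** (db / 20.0)

def automate(samples, breakpoints):
    """breakpoints: list of (sample_index, gain_db), sorted by index."""
    out = []
    for i, s in enumerate(samples):
        # find the breakpoints surrounding this sample and interpolate
        prev = breakpoints[0]
        nxt = breakpoints[-1]
        for bp in breakpoints:
            if bp[0] <= i:
                prev = bp
            if bp[0] >= i:
                nxt = bp
                break
        if nxt[0] == prev[0]:
            gain_db = prev[1]
        else:
            t = (i - prev[0]) / (nxt[0] - prev[0])
            gain_db = prev[1] + t * (nxt[1] - prev[1])
        out.append(s * db_to_linear(gain_db))
    return out

# Ride a quiet syllable up by 3 dB between samples 2 and 5:
track = [0.5] * 8
riding = automate(track, [(0, 0.0), (2, 3.0), (5, 3.0), (7, 0.0)])
```

In a real session you'd draw this envelope on the vocal track rather than compute it by hand, but the principle is the same: per-word gain changes instead of one static fader position.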

I wouldn't describe limiting as a "half-assed solution". If it sounds good, do it. I don't know if it will fix your exact problem but try it, because if it works then who cares how you achieved it.

I think providing a short audio clip might help because the problem might not even be the vocals, it might be masking from the other tracks or something.
#4
Both good suggestions so far. There are likely a few things at play.

One: masking = when two things of similar frequency compete for the ears' attention. Like, if you have two people talking in a room, neither of them is very easy to hear. However, if you have a voice in a room and a truck rumbling by the window outside, it's not as hard to hear the speaker. Guitars and vocals compete for the same space.
Two: dynamics - compression and limiting are useful tools.
Three: gain staging. The idea that turning things up in a channel until they clip and distort is obviously bad. Gain staging is (in part) learning how to work with those levels.
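The gain-staging point can be made concrete with a quick level check. A hedged sketch assuming float samples where full scale is ±1.0; the -6 dBFS ceiling is just an illustrative target, not a rule:

```python
import math

# Quick gain-staging check: peak level in dBFS for a float track where
# full scale is +/-1.0. Anything at or above 0 dBFS will clip on export.
# The -6 dBFS target below is an arbitrary example ceiling.

def peak_dbfs(samples):
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def headroom_db(samples, target_dbfs=-6.0):
    """How many dB of boost remain before hitting the target ceiling."""
    return target_dbfs - peak_dbfs(samples)

quiet_vocal = [0.1, -0.25, 0.2]
room = headroom_db(quiet_vocal)  # dB you can still add safely
```

The point of staging is to know this number before reaching for the fader, instead of turning things up until the channel distorts.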

CT
Could I get some more talent in the monitors, please?

I know it sounds crazy, but try to learn to inhale your voice. www.thebelcantotechnique.com

Chris is the king of relating music things to other objects in real life.
#5
Like Chris mentioned, your instruments are fighting for the same frequency space as your vocals. The technique of "notching" may be helpful. Start by playing back just the vocal track and watch a spectrum analyzer to see where the vocal shows most of its energy (let's just say, as an example, it's around 1500-2000 Hz). Once you know where the vocal's primary frequencies are, look at the other instruments one at a time the same way. You should be able to see which instruments are also fighting for space in the same frequency range. Try reducing the frequencies in that range on the individual instrument tracks that are hogging them, which will give the vocals their own space. It takes some time and tweaking, but it usually works well and allows the vocals to open up in the mix.

Getting vocals to sit in the mix is not always easy to do. I went to a day long recording seminar a few years ago and at least 1/3 of the day was spent talking about this subject.
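For anyone curious what such a cut actually does under the hood, here's a minimal peaking-EQ biquad in plain Python using the standard RBJ Audio EQ Cookbook formulas. The 1800 Hz center, Q, and cut amount are illustrative, not a recipe:

```python
import math

# A minimal peaking-EQ biquad (RBJ Audio EQ Cookbook formulas) to
# illustrate "notching" a guitar out of the vocal's range. A negative
# gain_db cuts; a positive one boosts. All numbers are illustrative.

def peaking_coeffs(fs, f0, gain_db, q=1.0):
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def biquad(samples, coeffs):
    """Direct-form-I biquad filter over a list of float samples."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

# Cut 6 dB around 1800 Hz on the guitar track (48 kHz sample rate):
cut = peaking_coeffs(fs=48000, f0=1800, gain_db=-6.0)
```

Your DAW's EQ is doing some variant of exactly this; the parametric band's frequency, gain, and Q map onto `f0`, `gain_db`, and `q` here.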
Yes I am guitarded also, nice to meet you.
Last edited by Rickholly74 at Mar 6, 2017,
#7
There are just too many things at play to cover them all in one post:
- How you prep your other tracks to leave space for the vocals
- The vocal treatment itself
- Reverbs/delays/other time effects
- Automation
- 2 Bus effects (Comp/limiter/eq)
- Mastering

Hence the book recommendation: this guy goes through every step of a mix, which is what you need to go over... or other books, if other people can recommend something.
#8
I'm currently enjoying one by Mike Senior (Sound on Sound) - 'Mixing Secrets for the Small Studio'.
"If you want beef, then bring the ruckus." - Marilyn Monroe
Last edited by USCENDONE BENE at Mar 9, 2017,
#9
It may be that the guitar playing itself is inappropriate for the verses. Many experienced professional musicians adjust their playing automatically in live performances - many amateurs do not. The arrangement of the music is a real key to getting the vocals to come out properly.
You can use volume automation and tone controls to reduce the volume and change the EQ of the guitars during the vocals to simulate this, but that won't quite match the clear spaces that something like palm muting provides.
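The "make space while the singer sings" idea can be simulated crudely with a sidechain-style duck. A rough sketch assuming float sample lists; the block size, threshold, and 4 dB duck amount are arbitrary illustration values:

```python
# Crude sidechain duck: drop the guitar bus by a fixed amount whenever
# the vocal's short-term RMS level crosses a threshold. Block size,
# threshold, and the 4 dB duck are illustrative, not recommendations.

def duck(guitar, vocal, block=64, threshold=0.05, duck_db=-4.0):
    duck_gain = 10 ** (duck_db / 20.0)
    out = []
    for start in range(0, len(guitar), block):
        g_block = guitar[start:start + block]
        v_block = vocal[start:start + block]
        # short-term RMS of the vocal decides whether to duck
        level = (sum(s * s for s in v_block) / max(len(v_block), 1)) ** 0.5
        gain = duck_gain if level > threshold else 1.0
        out.extend(s * gain for s in g_block)
    return out
```

A real sidechain compressor smooths the gain change with attack/release instead of switching per block, but the effect is the same: the guitars step back only while the vocal is present.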
#10
Quote by Rickholly74
Like Chris mentioned your instruments are fighting for the same frequency space as your vocals. The task of "notching" may be helpful. [...]

Quote by axemanchris
One: masking = when two things of similar frequency compete for the ears' attention. [...] Guitars and vocals compete for the same space.
↑ The thing about instruments occupying the same frequencies helped me big time. I recorded something today, and the vocals didn't come through at all. So I opened the EQs of both the vocals and the guitar, and with just a simple EQ I got them to clash far less. Two small cuts on the guitar, one in the lower frequencies and one in the higher, and the slightest boost on the vocals at roughly the same spots, and at the same volume as before, the vocals can be heard just fine - without limiting or anything else (compressor of course).

#11
When you say 'with the same volume as before', did you compensate for volume changes post-EQ outside of ReaEQ? Because I'd suggest that the changes you've made (all cuts on the guitars and almost all boosts on the vocals) would make the vocal come through more from the volume difference alone, without even factoring in frequency masking.
#13
That will vary depending on the dynamic range of the audio you're feeding it, though, which will be different for guitar (especially distorted guitar) and vocals. Vocals in particular are naturally very dynamic in volume. Also, you'll probably have the compressor set up differently for each track, so it may not accurately compensate for the volume changes made in the EQ.

Horrible explanation. 

I'll make it simpler (and I apologise if you already know this):

EQ is basically adjusting the volume of select frequencies in a track (rather than the whole track). You've cut by 4 and 6 dB on the guitar in the low and high mid frequencies and then boosted those areas by 3 dB on the vocal, which will contribute to making the overall vocal track louder and the guitar track quieter. On the right-hand side of ReaEQ is a handy gain knob. Use this to compensate for the gain changes you've created with your EQ moves. If you aren't sure how much to adjust by, use your meter (or a volume meter plugin) to see how the volume changes when you bypass and engage the EQ plugin.
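The compensation step boils down to measuring the level before and after processing and applying the difference as makeup gain. A minimal sketch in plain Python; the 1.5x scaling stands in for whatever your EQ did to the track:

```python
import math

# Gain compensation after EQ: match the processed track's RMS to the
# dry track's RMS, so A/B comparisons happen at equal loudness. The
# 1.5x "EQ boost" below is a stand-in for a real plugin's effect.

def rms_db(samples):
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return 20 * math.log10(rms)

def match_loudness(dry, wet):
    """Scale `wet` so its RMS matches `dry`'s."""
    diff_db = rms_db(dry) - rms_db(wet)
    g = 10 ** (diff_db / 20.0)
    return [s * g for s in wet]

dry = [0.4, -0.4, 0.4, -0.4]
wet = [s * 1.5 for s in dry]          # pretend the EQ boosted the track
compensated = match_loudness(dry, wet)
```

This is exactly what you do by hand with the ReaEQ gain knob and a meter: any loudness difference that survives after matching is the EQ's tonal change, not just "louder sounds better".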

This is why plugins like SlickEQ are handy for having automatic gain adjustment - if you boost a frequency range significantly, it will automatically bring the overall volume back down so you aren't tricking yourself into thinking the boosted EQ sounds better just because it's louder.
#14
To be honest, I don't know shit about mixing other than what I've taught myself by trial and error, so I set my compressor up pretty much the same for every track and only vary the amount of compression (GComp doesn't have the usual threshold and whatnot knobs) and the attack.



I usually set compression to 50% for everything, 5-15% for the master track, and 25-40% for acoustic guitars. Attack varies between 5 ms (master) and 1-2 ms for vocals with many p/s/t sounds. The gate is always around -48 dB, because that's the level of the noise my mixer produces.

Lo and hi cuts I do with the EQ, and I almost never use make-up gain, because I do that with the volume fader. Probably all wrong, but it works.
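For reference, here is roughly what the threshold/ratio/attack controls that GComp hides behind its single percentage are doing, as a toy peak compressor in plain Python. All numbers are illustrative, and real compressors add release, knee, and makeup stages on top of this:

```python
import math

# Toy compressor: a smoothed level detector drives gain reduction above
# a threshold, with `ratio` controlling how hard the level is squashed
# and `attack_coeff` controlling how fast the detector reacts.
# Threshold, ratio, and attack values below are illustrative.

def compress(samples, threshold_db=-12.0, ratio=4.0, attack_coeff=0.2):
    threshold = 10 ** (threshold_db / 20.0)
    env = 0.0                      # smoothed level detector
    out = []
    for s in samples:
        # one-pole attack smoothing of the detector
        env += attack_coeff * (abs(s) - env)
        if env > threshold:
            # map level `over_db` above threshold down to over_db/ratio
            over_db = 20 * math.log10(env / threshold)
            gain = 10 ** ((over_db / ratio - over_db) / 20.0)
        else:
            gain = 1.0
        out.append(s * gain)
    return out
```

A lower `attack_coeff` is a slower attack (transients sneak through); a higher ratio squashes harder. A gate works the same way in reverse: below its threshold, the gain goes toward zero instead of staying at one.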
#16
I now do this masking thing on everything I record, and it works like a charm. I usually boost the guitar around 300 Hz and cut it at 2500 Hz, and boost/cut the vocals accordingly, and the vocals come through perfectly while set around 4 dB lower in volume than before, with no clipping at all.