This is the third in a 3-part series sharing the five things I really wish I'd known about audio recording when I was a newbie. The first one talked about stereo sound. Last time (in part 2) I told you what EQ means. This time I have 2 more tips that involve EQ. It turns out that EQ is a pretty handy thing to know about. So here are numbers 4 and 5 of the 5 things I wish I'd known.
We humans can only hear sounds in the range of 20 Hz to 20 kHz (kHz = "kilohertz"). For example, a baby's cry lands predictably in the range we are most sensitive to, around 3 kHz.
This is likely a survival thing for us. It's pretty important to be able to respond to the cries of our young. How is this relevant to modern audio recording? Well, it tells us that certain sounds (bass guitar, acoustic guitar, vocals, hi-hats, etc.) will pretty much always occupy the same frequency areas.
Also, problems such as p-pops, saliva clicks, and sibilance (for vocals), as well as "muddiness" and bass problems (kick drum fighting the bass, etc.) all happen in the same frequency ranges.
A bass guitar will be down around the low frequencies of 80-100 Hz. So will the kick drum. Knowing that helps you separate them: boost one and cut the other at nearby frequencies so they can both be heard.
The electric guitar will usually be in the mid-range between 500 Hz and 1 kHz. So will keyboards, violas, acoustic guitars, and voices. So you can use certain EQ adjustments to separate those.
Clarinets, violins, and harmonicas tend to generate energy in the upper mid-range, around 2-5 kHz. And stuff like cymbals and tambourines will be in the "highs," up around the 6 kHz area.
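Just for fun, the ballpark ranges above can be jotted down as a little lookup table. This is a hypothetical sketch in Python; the numbers are the rough ones from this article, not exact science, and real instruments spill well outside their "neighborhood."

```python
# Rough frequency "neighborhoods" from this article (ballpark figures only).
ROUGH_RANGES_HZ = {
    "kick drum / bass guitar": (80, 100),
    "electric guitar / keys / voices": (500, 1000),
    "clarinet / violin / harmonica": (2000, 5000),
    "cymbals / tambourine": (6000, 20000),  # "the highs," 6 kHz and up
}

def who_lives_near(freq_hz):
    """Return the instruments whose rough range contains freq_hz."""
    return [name for name, (lo, hi) in ROUGH_RANGES_HZ.items()
            if lo <= freq_hz <= hi]

print(who_lives_near(800))   # the mid-range crowd
print(who_lives_near(90))    # the low-end crowd
```

Of course, no instrument really stops at a tidy boundary; this is just a memory aid for where to reach first on the EQ.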
Now that we know where specific things live in the range of hearing, we can adjust volumes at JUST those frequencies without affecting the rest of the audio.
Knowledge of this "range of human hearing," and of how to use (or NOT use) an EQ, will come in handy more often than almost anything else you learn. Once we know where to find a sound on the EQ spectrum, we can surgically enhance, remove, or otherwise shape sounds at JUST their own frequencies, without affecting other sounds at other frequencies.
But how in the world can we adjust volume in just one narrow frequency range, say around 100 Hz, without also changing the volume at all the other frequencies? Hmm, wasn't there some discussion about a thing called an "EQ" that had a whole bunch of sliders on it? Could it be that those sliders were located at specific frequencies, and could turn the volume up or down just at those frequencies without affecting the rest of the sound? Why, yes. It could be. Now you know.
Once you know where the frequencies of certain instruments are likely to live, you can use an EQ to prevent these sounds from stepping all over each other in a mix and sounding like a jumbled mess, with bass guitar covering the sound of a kick drum, or the keyboard drowning out the guitar.
Since every different sound has its own volume control, it seems obvious what to do if something is too loud or quiet, right? "With multitrack recording software, can't you just turn the 'too loud' track down, and vice versa? I mean, isn't that what 'mixing' means?"
That's what I used to think too. The answer is… "only sometimes." For example, even after spending hours mixing a song one day, I simply could NOT hear the harmonies over the other instruments unless I turned them up so loud that they sounded way out of balance with the lead vocal. It was like a bad arcade game. There was simply no volume I could find for the harmonies that was "right." They were either lost in the crowd of other sounds, or too loud in the mix.
Then I learned about the best use of EQ, which is to "shape" different sounds so that they don't live in the same over-crowded small car. Let's say you have one really, really fat guy and one skinny guy trying to fit into the back seat of a Volkswagen Bug. There is only enough room for 2 average-sized people, and the fat guy takes up the space of both of those average people already. Somebody is going to be sitting on TOP of someone else! If the fat guy is sitting on the skinny guy, Jack Spratt disappears almost completely. If Jack sits on top of Fat Albert, he gets shoved into the ceiling and has no way to put a seat belt on. It's just all kinds of ugly no matter which way you shove 'em in.
But if I had a "People Equalizer" (PE?), I could use it to "shape" Albert's girth, scooping away fat until he fit nicely into one side of the seat, making plenty of room for Jack. Then if I wanted to, I could shape Jack a bit in the other direction, maybe adding some padding to his bony arse so he could sit more comfortably in his seat. Jack just played the role of the "harmonies" from my earlier mixing disaster. Albert was the acoustic guitar. Just trying to "mix" the track volumes in my song was like moving Jack and Albert around in the back seat.
There was no right answer. But knowing that skinny guys who sing harmony usually take up space primarily between 500 and 3,000 Hz, while fat guitar players can take up a huge space between 100 and 5,000 Hz, I could afford to slim the guitar down by scooping some of it out between, say, 1 and 2 kHz, and then push the harmonies through that hole I just made by boosting their EQ in the same spot (1-2 kHz).
Nobody would be able to tell that there was any piece of the guitar sound missing, because there was so much of it left over that it could still be heard just fine. But now, so can the harmonies… because we gave them their own space! And we did all this without even touching the volume controls on the mixer. So it turns out the EQ does have its uses!
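If you're curious what a single EQ band is actually doing under the hood when you "scoop," here's a rough sketch in Python of one "peaking" EQ band, using the well-known biquad formulas from Robert Bristow-Johnson's Audio EQ Cookbook. The sample rate, center frequency, and amount of cut are just illustrative numbers, not a recipe; the point is that a scoop at 1.5 kHz leaves 100 Hz essentially untouched.

```python
# Sketch of one "peaking" EQ band (RBJ Audio EQ Cookbook biquad formulas).
# Boost if gain_db > 0, cut (a "scoop") if gain_db < 0.
import math, cmath

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for a peaking EQ centered at f0 Hz."""
    amp = 10 ** (gain_db / 40)              # amplitude factor
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return b, a

def gain_at(fs, f, b, a):
    """Magnitude response of the filter in dB at frequency f."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)   # z**-1 at this frequency
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

fs = 44100
b, a = peaking_eq_coeffs(fs, f0=1500, gain_db=-4)  # scoop 4 dB out at 1.5 kHz

at_scoop = gain_at(fs, 1500, b, a)  # the full -4 dB cut, right at the scoop
at_bass = gain_at(fs, 100, b, a)    # essentially 0 dB: the lows are untouched
print(round(at_scoop, 2), round(at_bass, 2))
```

This is exactly the "making a hole for the harmonies" trick: the cut is deep at its own center frequency and fades to nothing as you move away from it, so the rest of the guitar sound passes through unchanged.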
So those are the "big five" as I see it (#1 started in the first post in this series). If I had read an article like this when I first started down the path of audio recording, my learning curve could probably have been shortened by a decade or so! I hope some young would-be recording engineers out there can benefit from this article the way I could not.