Radio Ready Voice Narration Sound: How Do You Get It?

Radio ready voice recordings are absolutely possible to create at home. That’s the reason for this article. Someone wanted to know how I did it – which was, of course, at home.

I recently received the following comment from someone who had just completed The Newbies Guide To Audio Recording Awesomeness 1 video tutorial course on home recording:

First I want to congratulate you on an excellent job. The home brew audio [Newbies Guide To Audio Recording Awesomeness (ed.)] is the best home recording tutorial I’ve invested in. I am simply amazed at what you were able to do on a home computer without using expensive recording equipment. I’m particularly interested in knowing more about the narration. Did you use special compression, EQ or effects on your voice during or post recording to get it to sound so rich and radio-ready? I sure would like to duplicate that sound on my project.

Thanks

My Answer

Wow!  That was quite a nice testimonial for the course, for which I am VERY grateful.  I thought his question – about the process I go through to create what he calls “the rich and radio-ready” (don’t you just love the alliteration?) voice quality – deserved a thorough answer.  There are likely others who have the same question. So here is what I wrote back:

Thanks so much Larry!  I actually do use a pretty simple set of EQ, compression and noise reduction treatments on everything I do.  Here’s how it goes.

The Recording Process

I should start by saying that using decent recording gear does factor in here quite a lot. At the very least, I recommend a large diaphragm USB mic, such as the Samson C01U. It’s only about $80 and gives you a lot of bang for the buck. You just plug it into a USB port and go.

The next level up – and the way most pros do it – is a combination of a standard large diaphragm condenser mic and a USB interface. That would be something like a Focusrite Scarlett mic and interface bundle.

I record my voice in Reaper with no compression at all, though you could just as easily use Audacity, Adobe Audition, or ANY other recording software.

I ALWAYS use a pop screen/filter to reduce p-pops/plosives and I get my mouth about 3-5 inches from the mic, which helps with the “deep/low” energy on the voice. 

Also, I try to address the mic slightly off-axis, just about 15-30 degrees, which also helps with the p-pops.  I do NOT run my narration voice through any filters or effects when recording.  I just make sure the recorded level is high enough that the loudest bits are just below peaking/clipping. You want the audio waveforms to look “large and chunky” (my non-technical terms :-P).

Audio level too low
Audio level just right (fat chunky blobs)
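
If you like to double-check your levels outside the recording software, here’s a rough Python sketch of the same idea. It’s just an illustration – the file name is made up, it assumes a mono take, and it uses the numpy and soundfile packages – not part of my actual workflow.

```python
import numpy as np
import soundfile as sf

# Hypothetical file name; assumes a mono narration take
audio, sr = sf.read("narration_dry.wav")

peak = np.max(np.abs(audio))                  # float samples run from -1.0 to 1.0
peak_dbfs = 20 * np.log10(max(peak, 1e-9))    # 0 dBFS = full scale (the clipping point)

if peak >= 1.0:
    print("Clipping! Back off the input gain and record that part again.")
elif peak_dbfs < -12:
    print(f"Peaks at {peak_dbfs:.1f} dBFS -- too quiet, raise the input gain.")
else:
    print(f"Peaks at {peak_dbfs:.1f} dBFS -- nice and chunky, just below clipping.")
```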

The Editing Process

Noise Reduction

Once I have the dry vocal recorded, I double-click on the audio item to open the audio in my editing program.  I use Adobe Audition, but the effects I use can be found in most any decent editing program, including the free Audacity. 

Run the noise reduction effect. Find a section of audio with NO VOICE IN IT. You want ONLY noise. You need to tell the editor what noise sounds like so it can remove it. If you happen to highlight a section that DOES have a bit of breath or voice in it, the result will NOT sound good.

See my post about using noise reduction in Audacity here: How To Get Rid Of Background Noise In Audacity.
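
For the curious, the same “teach it what the noise sounds like” idea can be sketched in Python with the third-party noisereduce package. This is only an illustration of the concept – the file name and the half-second noise region are assumptions for the example, and the API shown is noisereduce’s as of version 2.x.

```python
import soundfile as sf
import noisereduce as nr   # third-party: pip install noisereduce (API assumed from v2.x)

# Hypothetical file name; assumes a mono narration take
audio, sr = sf.read("narration_dry.wav")

# The noise "profile": a stretch with ONLY room noise -- no voice, no breaths.
# Here I assume the first half second of the take is silent; adjust for your own file.
noise_only = audio[: int(0.5 * sr)]

# Subtract that noise profile from the whole recording
cleaned = nr.reduce_noise(y=audio, sr=sr, y_noise=noise_only)

sf.write("narration_denoised.wav", cleaned, sr)
```

Whatever tool you use, the principle is the same: the cleaner the noise-only selection, the cleaner the result.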

Eliminate p-pops. 

Listen to the audio from the start and note EVERY time you hear a p-pop.  Zoom in on just the part of the voice saying the “P” (or “B” or whatever the offending consonant is) and highlight it.

Open the Graphic Equalizer tool in Audition.  View the “10 Bands” screen.  Starting with the 250 Hz slider, I progressively reduce the volume of each band moving to the left, until the slider for the left-most band at <31 Hz is all the way at the bottom, which is -24 dB.

For me, that works out to -5 dB at 250 Hz, -11 dB at 125 Hz, and -17 dB at 63 Hz.  You’ll have to experiment with your own settings for your own voice.

Go through the entire file and run this EQ setting for each instance of a plosive.  This is made MUCH faster if you create (save as) a preset in the Graphic EQ tool and then make it a “Favorite” so it always shows up in the “Favorites” palette, which I always have open.  That way I can just highlight the plosive and click my “Plosives” favorite, and it runs the preset.

And the above process is made MUCH faster if you are willing to invest $59 in a plugin from Accusonus called the Plosive Remover (part of their ERA Bundle of vocal plugins). You can get rid of all p-pops in a recording with a single edit. See my review of the ERA Bundle here: Review Of Accusonus ERA Bundle (Amazing Vocal Plugins). There is a video and audio samples in that review.
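
If it helps to see the “cut the lows on just the highlighted bit” move in code, here’s a tiny Python sketch. It uses a plain high-pass filter as a rough stand-in for the graphic-EQ preset above – the file name, the time range and the 120 Hz corner frequency are all made-up example values.

```python
import soundfile as sf
from scipy.signal import butter, filtfilt

# Hypothetical file name; assumes a mono narration take
audio, sr = sf.read("narration_denoised.wav")

# The highlighted plosive -- example start/end times found by ear and eye in the editor
start, end = int(3.20 * sr), int(3.35 * sr)

# Roll off the low end of JUST that slice. A 2nd-order high-pass around 120 Hz is a
# rough stand-in for the stepped low-band cuts described above, not the exact preset.
b, a = butter(2, 120, btype="highpass", fs=sr)
audio[start:end] = filtfilt(b, a, audio[start:end])

sf.write("narration_deplosived.wav", audio, sr)
```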

Compress

Once all the plosives are out, I run a subtle compression treatment.  This evens out the average volume of the recording and gives it a bit more punch and “presence.”

You can use a compressor plugin, as I describe below. Or you can do this manually by just reducing the level of the loudest peaks in your recording with the Amplify tool.
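
The manual version really is just “select the loud bit, turn it down.” In rough Python terms (the file name and times are made up for illustration):

```python
import soundfile as sf

# Hypothetical file name; assumes a mono narration take
audio, sr = sf.read("narration_denoised.wav")

# A word that jumps out as too loud -- example times only
start, end = int(12.4 * sr), int(12.9 * sr)

# Pull just that selection down by 4 dB, like running the Amplify tool on a highlighted region
audio[start:end] *= 10 ** (-4 / 20)

sf.write("narration_evened.wav", audio, sr)
```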

Compressor effect settings

I created a preset in Adobe Audition’s compressor tool, which they call “Dynamics Processing.”  This preset has the following settings: in the Gain Processor, a ratio of 2:1, a threshold of -12 dB, an output gain of 0 dB, an Attack Time of 24 ms, and a Release Time of 100 ms; in the Level Detector, an Input Gain of 0 dB, an Attack Time of .5 ms, a Release Time of 300 ms, and RMS (as opposed to Peak) selected.

Under General Settings I use a look-ahead time of 3 ms.  Before I run the compression though, I highlight the entire file and (if needed) raise the level until the loudest bits are higher than the -3 dB line.  If one or two especially loud bits would clip before the rest of the file gets that high, I lower those first, which lets me push several peaks beyond the -3 dB line.  I do this to ensure that enough of the audio is above the compressor’s threshold of -12 dB.  Then I run my preset compression settings as described above.
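
If you want to peek under the hood, here’s a bare-bones Python sketch of what a 2:1 ratio with a -12 dB threshold and 24 ms / 100 ms attack and release is actually doing. It’s a simple peak-envelope compressor for illustration only – NOT a clone of Audition’s Dynamics Processing – and the file name is made up.

```python
import numpy as np
import soundfile as sf

# Hypothetical file name; assumes a mono narration take
audio, sr = sf.read("narration_denoised.wav")

ratio, threshold_db = 2.0, -12.0
attack_s, release_s = 0.024, 0.100
att = np.exp(-1.0 / (attack_s * sr))    # per-sample envelope smoothing coefficients
rel = np.exp(-1.0 / (release_s * sr))

env = 0.0
out = np.zeros_like(audio)
for i, x in enumerate(audio):
    level = abs(x)
    coeff = att if level > env else rel           # attack when louder, release when quieter
    env = coeff * env + (1.0 - coeff) * level     # smoothed level tracker
    env_db = 20 * np.log10(max(env, 1e-9))
    over = env_db - threshold_db
    gain_db = 0.0 if over <= 0 else -over * (1.0 - 1.0 / ratio)   # 2:1 above the threshold
    out[i] = x * 10 ** (gain_db / 20)

sf.write("narration_compressed.wav", out, sr)
```

The takeaway is simply that anything rising above -12 dB gets pulled back to half the overshoot (in dB terms), which is the “evening out” described above.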

To find out more about what compression is, see my posts What Does Compression Mean In Audio Recording? and Vocal Compression Using Reaper’s ReaComp Effect Plugin.

Normalize 

After this light compression scheme that evens out the audio loudness, I want to make sure that the very loudest part of the compressed audio is RIGHT below or at 0 dB (without clipping), to optimize for loudness. 

That’s exactly what the tool called Normalize is for.  See my post Audio Normalization: What Is It And Should I Care? I simply select the entire file and click on the Normalize tool in Audition.  I always choose the “Normalize to 100%” setting, which puts the very loudest peak right at 0 dB.
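
In sketch form, peak normalization is about the simplest operation there is (again, the file names are hypothetical):

```python
import numpy as np
import soundfile as sf

# Hypothetical file name for this sketch
audio, sr = sf.read("narration_compressed.wav")

peak = np.max(np.abs(audio))
if peak > 0:
    audio = audio / peak   # "Normalize to 100%": the loudest peak lands exactly at 0 dBFS

sf.write("narration_normalized.wav", audio, sr)
```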

That’s it!  Less expensive editors will have the same tools as Adobe Audition.  You can do everything above with Audacity, which is free. I only use Audition because I have been using a version of that program since 1996 or so when it was called Cool Edit Pro. 

I used to do EVERYTHING with it, including recording and mixing.  But as with many things, when a program designed to specialize in one thing tries to offer “the entire enchilada,” it won’t be as good at those ancillary functions.  This has been true for Adobe Audition’s multitrack and mixing functions.

They are very good now, and you COULD do everything in Audition.  But that makes for a bloated program (which now accounts for its price tag), even though it is still an excellent editor.

That’s why I use Reaper for my recording, multitrack, MIDI and mixing needs. 

For a more affordable editing program that isn’t bloated with tacked-on extras (or a bloated price!), you can still use Audacity. Though if you can afford it, I think the ease and work-flow are better with a program like Adobe Audition.

I hope that helps!

Free videos from “The Newbies Guide to Audio Recording Awesomeness”

These video tutorials show you step-by-step, in plain language how to record multi-track audio in Audacity AND Reaper software.

Comments on “Radio Ready Voice Narration Sound: How Do You Get It?”

  1. Thanks for the awesome walk through! Works beautifully, though I find that with a good pop filter and the right breath control, there’s no need to go back and hit the plosives. Sibilants sometimes can be a bit of an issue though. What range do you usually start cleaning those up at?

    And why not use REAPER for the whole process? It has all the tools you mention and is right up there with Pro Tools IME. Is this just because of what you’re comfortable and knowledgeable with using, or is there something about REAPER that you’ve found lacking? I’m asking because I’m kind of new to it, but I’ve worked with Cubase, Logic and Pro Tools in the past and am finding REAPER to be absolutely jaw dropping so far. And not just because of the price. I think it’s easily worth what Logic is, but it’s only $60! I’m very interested in why you still use Audition when you’ve got REAPER, which is now starting to replace Pro Tools et al in more and more professional environments.

    1. You’re welcome:). Sibilance usually resides between 4 kHz and 8 kHz. So I typically sweep in that area to find where the worst esses are. I like to use the “pain method”:). I put on headphones, open an EQ plugin and create a very narrow bandwidth, and then turn it up by like 10-20 dB and start slowly sweeping. The places where the noise actually hurts my ears – THAT is where I reduce.

      About using a separate editor – yes, part of it is what I feel comfortable with. But there are other factors. Reaper actually doesn’t have all the tools that Adobe Audition or even Audacity does – not from a work-flow point of view. The main one is the ability to highlight a small section of audio and quickly turn up or turn down JUST that section. In Reaper, you have to slice that section of audio into a separate item before you can address its volume separate from the rest. It’s doable, but much harder. And in the end it leaves you with a track that’s all sliced up. Sure, you can “Glue” the items back together after, but that is just a lot of mess and time for something that an editor can do quickly. Also – yes, you can use envelopes in Reaper to fine-tune like this. But the thing I don’t like about that is having to give up screen real estate for the envelope lane. It’s all a matter of preference, and I do use Reaper for both things. It all depends on the situation. Audition also allows you to do things at sample level to the audio that Reaper cannot do. And as I recently wrote in an article for Disc Makers (https://blog.discmakers.com/2015/09/home-recording-tip-save-samples-to-speed-up-noise-reduction/) another thing Audition can do that Reaper can’t is save noise profiles for filtering out common – but very specific – background noises in your recording space(s).

      I’m with you though about Reaper’s awesomeness! I don’t see a need for any other DAW.

      Cheers!

      Ken

  2. Thanks, Ken! I don’t mind using envelopes. In fact, I prefer them. There are definitely some automation features lacking in REAPER. I really appreciate your insight, since, like I said, I’m brand new to REAPER. So far I’ve found that most things I thought it couldn’t do were actually there, just done with a different approach. I think what’s got me so enamored with it is the complete control over customizing everything within the app, and its most valuable feature is the support community at Cockos. And most tasks can be automated through the Actions list, with lots of scripts available for free, but it could really benefit from a script recording feature. For all I know, one is hidden there somewhere, but that is a common complaint. I’m looking forward to reading your article.

    Thanks again!

    1. You’re welcome! You’re absolutely right about the support at the forum. You should ask them about the script recording functionality.

      Cheers!

      Ken

  3. Hi – this is great info – thanks! Had a question on the compressor settings.
    This preset has the settings: ratio of 2:1, threshold of 12dB, output gain of 0dB, Attack Time of 24 ms, Release Time of 100 ms in the Gain Processor, Input Gain of 0dB, Attack Time of .5 ms and Release Time of 300 ms, and RMS (as opposed to Peak) selected in the Level Detector.

    It seems you listed two sets of attack and release times (24 ms and .5 ms; 100 ms and 300 ms, respectively). I assume that Audition has two sets of fields? I’m using Sonitus, which only has one, for narration. Any suggestions which I should use? Thanks so much!

    1. Hi Ryan. You’re welcome! Glad it helps. You can safely ignore the 2nd set of attack and release times (the .5/300 ms ones). You are correct that AA provides 2 different fields for attack/release times. The one that is common to most compressors is the Gain Processor settings. The standard “place to start” for vocals is typically a 3:1 ratio with a threshold at about -20 dB (the above post should have said -12 dB, not 12 dB. thanks for finding the typo;-)). Attack and release times advised are fast attacks (I’ve seen anywhere from .1 ms all the way up to 30ms) and slow releases (usually between 100 and 300 ms).

      I hope that helps!

      Ken

  4. So with this,

    You do not have a compressor unit; you just use a preamp, record the voice over into a program like Reaper or Adobe Audition, and then after the recording you use the effects options in Adobe Audition for the compression etc.? I am getting a preamp but was wondering if I need to get a compressor and skip going through all of this in Adobe Audition. I do know lots of clients sometimes don’t want a lot of editing or whatnot, but a dry file for auditions for voice over work, and then sometimes they will take care of post for you. Not all the time, but I have found out some do. Can you get away with just using a preamp and doing some post work if needed in Adobe Audition, limiting yourself to the compressor, without it sounding heavily processed?

    1. Hi Sean,

      Correct. I do not process vocals while recording. The reason is simple. If you “print” an effect, compression, EQ, etc., while tracking/recording, it can’t be undone. There is no “undo” for that. It’s part of the recording. But if you record pure un-effected vocals, then you can process them however you like, or not at all. And if something doesn’t sound good, you undo and try again. The only time I would compress a vocal while recording would be if the vocalist had just way too much dynamic range and their loud parts would clip on the way in. Of course there are no hard-and-fast rules in recording:). Everything depends on the situation and your preference. If it’s important to move extremely fast to get something to a client, then it might be OK to compress on the way in. It can work. Then you might not need to do much processing (or any) after the fact. Things also depend on how the audio will be used. If I’m doing a voice-over audition, I just assume the client will be listening in headphones, and so will hear any and every little defect. So I carefully listen for little breaths and clicks and other extraneous noises, and edit those out. But if the vocal is going to be mixed in with a bunch of instruments and other vocals, those things may not matter. Make sense?

      Ken
