I have finally finished a song that I've been working on for the past 2 weeks, tweaking, mixing, etc., and when I finally mixed it down to audio in Cool Edit Pro, it came out a little on the soft side, peaking around -15 to -12 dB. I went ahead and normalized it a little and it still sounded okay, but I wanted to ask some of you experts out there whether or not it is good to do this. Will it degrade the audio in any way?
Generally speaking, normalizing degrades the audio quality. But you probably need to know a bit more about that...
Changing the gain of a sound signal involves high-precision arithmetic, usually 32-bit float processing. 16-bit and 24-bit integer audio files lack this kind of precision. After the 32-bit processing, the result has to be rounded off in order to fit the 16/24-bit final format. So, in theory, every change you make to the volume of the audio results in some precision being lost.
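To make the rounding step concrete, here is a small sketch (the sample values are made up for illustration) showing what happens when a gain change computed in high-precision float is stored back into 16-bit integers:

```python
import numpy as np

# Hypothetical 16-bit samples, as in a CD-quality file.
samples = np.array([1000, -2345, 30001, 7], dtype=np.int16)

# Apply a -3 dB gain in high-precision float, as an editor does internally.
gain = 10 ** (-3 / 20)
boosted = samples.astype(np.float64) * gain

# The results are no longer whole numbers; to store them back as 16-bit
# integers they must be rounded, and that rounding is the precision loss.
rounded = np.round(boosted).astype(np.int16)

print(boosted)   # fractional values
print(rounded)   # nearest integers; the fractions are gone for good
```

The lost fractions are tiny (at most half a 16-bit step per sample), which is why the loss is normally inaudible.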
Generally, you needn't worry about this unless you are paranoid about your audio. The loss in precision is not really audible even after many gain changes. This is even more true if your audio files are 24-bit.
Using Cool Edit, you really don't need to normalize, since you have a 'master volume' control: if your mix comes out at -6 dB, you can raise the master 6 dB to compensate. I assume the program then raises all volumes during the mixing process, not afterwards, so it should be a little better than normalizing. Personally, I normalize everything anyway, and I can't say that I hear any difference in audio quality. I use Wavelab for finalizing and normalizing.
Hmmm. I always thought that normalization did not change audio quality. From what I've read and experienced, normalization simply applies the same manipulation (amplification or attenuation) to each sample without changing their relationship. The result is only either louder or softer. According to the help file in Sound Forge:
To normalize a file means to raise its volume so that the highest level sample in the file reaches a user defined level. Use this function to make sure you are fully utilizing the dynamic range available to you. Sound Forge also allows normalization to RMS power. This means that a scan will be done on the sound file and it will be raised in level so its RMS power will be equal to the normalization level. This is helpful for making multiple files perceptually as loud as each other.
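The two modes the Sound Forge help describes (peak vs. RMS normalization) can be sketched like this; the function names and target levels here are hypothetical, in linear full-scale units:

```python
import numpy as np

def peak_normalize(x, target=1.0):
    """Scale so the loudest sample lands at `target` (linear full scale)."""
    return x * (target / np.max(np.abs(x)))

def rms_normalize(x, target_rms=0.1):
    """Scale so the overall RMS power equals `target_rms` instead."""
    rms = np.sqrt(np.mean(x ** 2))
    return x * (target_rms / rms)

# A quiet sine wave peaking around 0.25 full scale.
t = np.linspace(0, 1, 44100, endpoint=False)
quiet = 0.25 * np.sin(2 * np.pi * 440 * t)

loud_peak = peak_normalize(quiet)   # peak now sits at full scale
loud_rms = rms_normalize(quiet)     # RMS power now sits at the target
```

Peak normalization guarantees no clipping; RMS normalization matches perceived loudness across files but may need a limiter afterwards, since the peaks can end up above full scale.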
Well, when I record audio tracks, I keep their levels as high as possible, somewhere around -3 dB to 0 dB, just below distortion, because the tracks pick up noise that is hard to avoid from the PC: the PCI soundcard, CPU, memory, cables, power supplies, etc.
Why? When the recording level is too low, like -12 dB or less, then no matter which amplifier program I use, boosting it afterwards adds more noise, because of the signal-to-noise ratio: increasing the audio signal level also increases the noise level. Likewise, attenuating the signal lowers the noise along with it.
You can easily see this in a waveform editor, and you can easily hear the noise too. Of course, if the recorded signal is clean enough (no noise, or very little), then you don't have to worry about it.
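The point about signal-to-noise ratio can be checked numerically. This sketch (made-up signal and noise levels) shows that boosting a quiet recording raises the noise floor by exactly the same amount, so the ratio never improves:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 44100, endpoint=False)
signal = 0.02 * np.sin(2 * np.pi * 440 * t)      # a quiet take
noise = 0.002 * rng.standard_normal(44100)       # hypothetical PC noise floor

def snr_db(sig, noi):
    """Signal-to-noise ratio in dB."""
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(noi ** 2))

# Boosting by +20 dB raises signal AND noise equally,
# so the signal-to-noise ratio stays exactly the same.
print(snr_db(signal, noise))                  # ratio before the boost...
print(snr_db(signal * 10.0, noise * 10.0))    # ...identical after it
```

Which is exactly why it pays to record hot in the first place rather than fix the level later.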
Hope this helps
[This message has been edited by LHong (edited 11-04-2000).]
Read through any serious digital audio paper and you will see that normalizing does incur some precision penalty in the audio data. It is, however, negligible; but since you want the facts, I give them to you.
There are, however, cases when it doesn't. For example, if the audio is in a 32-bit file format. In your sequencer, when running your tracks plus effects, the internal audio resolution is usually 32-bit, so it is good practice to make sure the digital mixdown is as close to 0 dB as possible. You can always add a compressor/limiter as a master effect in order to create a 'louder' mix. Alternatively, convert your mixdown file to 32-bit (if your sequencer doesn't allow direct 32-bit mixdowns) before mastering.
When you normalize audio, your program (Cool Edit in this case) goes through and finds the highest peak in your song. Then it boosts that level up to 0 dB, or whatever you set as the ceiling. All the other audio is boosted by the same amount. This can be bad if the noise floor was already noisy, which in your case it probably was not, since you did it all in the digital domain. Hope this helps....
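The find-peak-then-boost procedure described above can be sketched in a few lines; the function name and the -0.3 dBFS ceiling here are hypothetical choices, not Cool Edit's actual code:

```python
import numpy as np

def normalize_to_ceiling(x, ceiling_db=-0.3):
    """Boost the whole file so its loudest peak lands at `ceiling_db` dBFS.

    `x` is float audio in [-1, 1]; the ceiling leaves a little headroom.
    """
    peak = np.max(np.abs(x))
    peak_db = 20 * np.log10(peak)
    gain_db = ceiling_db - peak_db          # one gain value...
    return x * 10 ** (gain_db / 20)         # ...applied to every sample

# A noise burst peaking somewhere around -12 dBFS, like the mix in question.
mix = 0.06 * np.random.default_rng(0).standard_normal(44100)
normalized = normalize_to_ceiling(mix)
print(20 * np.log10(np.max(np.abs(normalized))))   # lands at -0.3
```

Because the same gain hits every sample, anything below the music, including the noise floor, comes up by the same number of dB.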
food is completely correct. Basically, what is going on is what goes on in any digital manipulation of audio. Mathematically speaking, every time you manipulate a file in the digital realm, a few numbers get "rounded off", in turn adding or subtracting some artifacts from the original audio signal. This is not something most people hear as an audible change, especially in the newer audio editors that do processing in 32-bit, and when files are kept at 24-bit. Even at 16-bit most people won't hear a difference, and you need to be an audiophile/geek to really say that you hear anything... and even then it's negligible whether it matters to the listener.
What you do have to watch out for is multiple processing. The more you digitally manipulate a file, the more numbers get rounded off. Soon things like loss of stereo spread, or a "harshness" in the audio, start to become audible.
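The cumulative effect of repeated processing can be simulated directly. This sketch (made-up material, exaggerated to 20 edit cycles) cuts and restores 1 dB over and over, rounding back to 16-bit each time the way repeated edits in a 16-bit editor would:

```python
import numpy as np

rng = np.random.default_rng(1)
original = (rng.uniform(-0.5, 0.5, 44100) * 32767).astype(np.int16)

g = 10 ** (-1 / 20)    # -1 dB; dividing by g restores it exactly, in theory
audio = original.copy()
for _ in range(20):
    audio = np.round(audio.astype(np.float64) * g).astype(np.int16)  # cut 1 dB
    audio = np.round(audio.astype(np.float64) / g).astype(np.int16)  # restore it

# The gains cancel mathematically, so any difference left is pure
# accumulated rounding error.
error = audio.astype(np.int32) - original.astype(np.int32)
print(np.max(np.abs(error)))    # the drift, in 16-bit steps
```

Each individual cycle loses at most a fraction of a step, but the errors random-walk apart instead of cancelling, which is the mechanism behind the gradual harshness described above.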
Again, this doesn't mean that you can't do some processing to bring these things back or hide other artifacts... blah blah blah... god, long stupid post, and all I'm trying to say is: YES, technically it degrades audio; NO, it's not something you should worry about unless you have a lot of different apps (or outboard gear) to choose from to do the actual processing.
With all due respect, please explain this to me. During the normalization process, an EXACT binary value is mathematically added to the binary value of each and every one of the 44100 double-byte words that occur each second for each channel, therefore each word is changed an equal amount, an EXACTLY equal amount. This in turn doesn't change the amplitude relationship of any of these 44100 double-byte words to each other. If no change occurs in this amplitude relationship, and only the bit-level value has changed - still 16 bits per sample, 44100 of them per second, with absolutely no analog interstep - how exactly is the audio quality changed except for amplitude?
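For what it's worth, the "adding an exact value" premise can be checked directly. A gain change has to be a multiplication, not an addition (adding a constant just shifts the waveform by a DC offset), and multiplying by a non-integer ratio gives fractional results that must be rounded, which does nudge the sample-to-sample relationships. A sketch with made-up values:

```python
import numpy as np

wave = np.array([100, -250, 3001], dtype=np.int16)

# Adding a constant shifts the waveform (a DC offset); it does not
# change loudness, so normalization cannot work by addition.
shifted = wave + 50

# A real gain change multiplies every sample by one ratio. Unless that
# ratio gives whole numbers, the result must be rounded back to 16 bits.
gain = 1.3
exact = wave.astype(np.float64) * gain       # fractional values
stored = np.round(exact).astype(np.int16)

# The amplitude relationship DOES shift slightly after rounding:
print(wave[2] / wave[0])       # ratio before
print(stored[2] / stored[0])   # ratio after - close, but not identical
```

So the "exactly equal change" holds only for the rare gains that map every 16-bit value to another whole number; for everything else, rounding is unavoidable.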
I honestly can't tell if it is degrading. It still sounds fine to me through my Mackie HR824s. I was just wondering if for some reason it might make the sound thinner, but GigaStudio's 'capture to wave' is so clean once I put it in CEPro that there really is hardly any noise floor at all, unless I crank it up real loud and add a bunch of high end. That's another thing that bugs me: when doing your final mixdown, how much highs or lows should you add, considering everyone's got different stereo/car systems? What if their treble is cranked all the way up? That would make my recording sound terribly thin, even though I mixed it on my Mackie HR824s and it sounded fine.
I'll take the CD and put it in my sister's cheap compact stereo system and it sounds too muffled, but it sounds great on my Mackies. Ohh, the joys and insanity of mixing!