
Topic: Muddiness vs. Clarity

  1. #1

    Muddiness vs. Clarity

    I notice that many of the scores I listen to here have great clarity in the rendition. That is, in orchestra and concert band scores, for instance, parts are spread across the digital stage and there is great clarity--very little muddiness. I listen to some of my scores, mainly concert band scores, and although I pan parts across the gamut and tone down the reverb, there's still a muddiness to the rendition. I know this problem can best be overcome in a DAW, and even though I do own Sonar X2 Essentials, I'm not there yet. In addition to toning down reverb and panning parts, here are some other things I've been doing:

    • Use solo or "plr" saxophone parts, not groups.
    • Pump up the highs and turn down the lows in flute, oboe, and clarinet staffs.
    • Pump up the lows and turn down the highs in low brass and low woodwind staffs.
    • Use stereo staging.

    Are there other settings in Aria that I can tune to achieve better clarity in my concert band score playback? I work in Finale, mainly with GPO4 and COMB2 in the Aria player, without a keyboard. Or have I gone as far as I can without using a DAW? Thanks.

    Art
    Arthur J. Michaels
    https://www.facebook.com/composerarthurjmichaels

    Finale 2000 through Finale 25.4 (currently using Finale 25.4)
    Garritan COMB2, GPO4, GPO5, Audacity 2.1.3
    Core i7 860 @ 2.80 GHz, 8.0 GB RAM, Windows 10 Home Premium x64
    Dell 2408 WFP, 1920x1200
    M-Audio Delta Audiophile 2496
    M-Audio AV-40 monitors

  2. #2

    Re: Muddiness vs. Clarity

    Quote Originally Posted by gogreen1:
    I notice that many of the scores I listen to here have great clarity in the rendition... although I pan parts across the gamut and tone down the reverb, there's still a muddiness to the rendition... Are there other settings in Aria that I can tune to achieve better clarity in my concert band score playback? Or have I gone as far as I can without using a DAW?
    Ah yes, the eternal quest for clearer mixes. It goes without saying that it's an endless subject, and a lot of different ideas can come up in discussing it. But I'm glad you brought it up, Art - I'll post a few thoughts, and maybe they'll be helpful - I hope they are, because you've certainly helped me tremendously on my own quest for getting up to speed on scoring for Concert Band!

    You're already doing a lot, probably more than a lot of notation users do. Panning instruments to discrete stage locations is important, and you're doing that. Going easy with the reverb is also a good plan--More on that:

    --Have you tried the Convolution reverb in ARIA? Because it's an impulse-response engine, the results are instantly more natural than with the Ambience plugin. BUT there's very little control over it. Ambience has the advantage of many controls, including important ones like being able to EQ the reverb. Still, Convolution sounds more natural - do try it if you haven't.
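    (For anyone curious what an impulse-response engine is actually doing under the hood, the whole effect boils down to one convolution of the dry signal with a recording of a real hall's response. Here's a minimal sketch in Python - the file names and the "send" amount are only placeholders, not anything taken from ARIA:)

    ```python
    # Minimal sketch of what an impulse-response ("convolution") reverb does.
    # File names are placeholders; assumes both files are mono at the same rate.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, sr = sf.read("dry_flute.wav")           # dry instrument recording (mono)
    ir, sr_ir = sf.read("concert_hall_ir.wav")   # impulse response of a real hall
    assert sr == sr_ir, "resample the IR to match the dry file first"

    wet = fftconvolve(dry, ir)                   # the whole trick: one convolution
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak                         # keep the wet signal out of the red

    send = 0.15                                  # a "9 o'clock" send: mostly dry
    dry_padded = np.pad(dry, (0, len(wet) - len(dry)))
    sf.write("flute_with_hall.wav", (1 - send) * dry_padded + send * wet, sr)
    ```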

    Using solo instruments instead of groups from COMB is probably a good idea, since those soloists have clearer detail in their samples. I use the section/group COMB patches in Sonar only as extra layers, with the soloists still taking the lead.

    BUT if at all possible, it's good to avoid the Player versions of instruments. Depending on how many instruments you need in a piece, it may be impossible to avoid them entirely - and I'm sure you know the manual's advice not to use a Player that's derived from the same soloist, because that brings in phasing.

    The problem with the Players in spotlighted positions in a mix is that their sound is much duller than the soloists'. They are simpler, stripped-down samples, not very natural sounding on their own, and they do have a muted, dull sound.

    You're using ARIA's handy EQ - that's good. It's better to subtract than to add - so in the case of the woodwinds, try rolling off the bass (turning it down) rather than boosting the highs. You can also experiment with the middle frequency and try a slight boost at a spot that emphasizes an essential signature sound of an instrument. A perfect example of doing that is with the GPO oboes - they're really not as piercing and nasal as real oboes, so I've been doing fairly severe EQ on them to emphasize their nasality, as well as reducing their bass content. The main point is not to neglect some experimenting with that mid-range EQ knob to see if you can't help differentiate between the instruments.
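    (Just to illustrate the subtractive idea outside of ARIA, here's roughly what a single EQ band does - a standard peaking-filter recipe, the Bristow-Johnson cookbook formulas, applied to a mono track. The frequencies and gain amounts below are made-up examples, not settings from ARIA or GPO:)

    ```python
    # Rough sketch of "subtract rather than add" with one peaking-EQ band
    # (RBJ cookbook biquad) applied to a mono numpy signal.
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, sr, freq_hz, gain_db, q=1.0):
        """Boost (positive gain_db) or cut (negative) a band centred on freq_hz."""
        a = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * freq_hz / sr
        alpha = np.sin(w0) / (2 * q)
        num = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
        den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
        return lfilter(num / den[0], den / den[0], x)

    # Example: on an oboe track, cut some low end and nudge the "nasal" mids up.
    # (oboe = a mono numpy array at sample rate sr; values are illustrative only)
    # oboe = peaking_eq(oboe, sr, freq_hz=150, gain_db=-4)    # gentle bass cut
    # oboe = peaking_eq(oboe, sr, freq_hz=1800, gain_db=+2)   # slight mid emphasis
    ```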

    I think one of the causes for the muddiness you're hearing is that you're boosting the low frequencies of the low instruments. In fact, I'm positive that's causing you problems. Once you have a group of low instruments in a piece, and we always have quite a few in band music - the lows pile up on their own, and start causing problems even before you touch the EQ. Single low instruments can sound great, but in combination, that flabby rumbling starts creeping in. SO - I suggest you do exactly the opposite of what you're doing - You need to at least try doing slight cuts in the lows on those instruments.

    I've erred on the side of having mixes be too thin sounding, and of course I try to avoid that, but a bit tinny is better than too muddy - at least you can hear the individual instruments better.

    Stereo Staging - My jury is still out on that control in ARIA. Sometimes I'll leave it on, but I often just turn it off, because I think it muddies a mix more than it clarifies it. Run a test - play an instrument with and without SS on. With it off, your panning is clear; you can hear exactly where the instrument's supposed to be. Turn SS on and you can't locate the sound as well - move the pan control and the results are vague and unspecific. That's because SS is providing the early-reflection reverberation, which comes at us from all directions. It does what it's supposed to, but when you get a whole group of instruments with SS turned on, it can get messy.
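    (A quick way to convince yourself of that outside of ARIA: pan a dry click hard to one side, then add a quieter, slightly delayed copy arriving from the other side, the way an early reflection would. This toy sketch is not ARIA's algorithm - the delay and level are arbitrary - but the blurred stereo image it produces is the same kind of thing SS adds to every instrument:)

    ```python
    # Toy illustration of why added early reflections can blur a pan position.
    # Values are arbitrary; this is not ARIA's actual Stereo Staging algorithm.
    import numpy as np
    import soundfile as sf

    sr = 44100
    click = np.zeros(sr)
    click[::sr // 4] = 1.0                              # a dry mono click track

    def pan(mono, position):
        """Constant-power pan: position -1 (hard left) .. +1 (hard right)."""
        angle = (position + 1) * np.pi / 4
        return np.column_stack([mono * np.cos(angle), mono * np.sin(angle)])

    direct = pan(click, 0.7)                            # clearly on the right

    # Fake "early reflection": a quieter copy ~20 ms later, panned the other way.
    delay = int(0.020 * sr)
    reflection = pan(np.concatenate([np.zeros(delay), click]), -0.5)[:len(click)] * 0.5

    sf.write("panned_dry.wav", direct, sr)              # easy to localize
    sf.write("panned_with_reflection.wav", direct + reflection, sr)  # vaguer image
    ```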

    A greater degree of velocity variation and CC1/CC11 control is also something you're hearing in DAW productions, compared with many notation renderings. I would suggest using as many dynamic markings as possible, hiding all but the essential ones for your printed scores. And if you can, edit the velocities further with the simple idea in mind that low velocities make a note start more slowly, while high ones give it a faster attack envelope.
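    (If you ever want to exaggerate that velocity contrast after the fact, the same idea can be applied to an exported MIDI file in a few lines. This is only a sketch using the mido library, with a placeholder file name and an arbitrary expansion amount:)

    ```python
    # Hedged sketch: widen the velocity range of an exported MIDI file so quiet
    # notes get softer attacks and loud notes get harder ones.
    import mido

    mid = mido.MidiFile("band_piece.mid")            # placeholder file name

    def expand(velocity, center=64, amount=1.4):
        """Push velocities away from the center, clamped to MIDI's 1..127 range."""
        v = center + (velocity - center) * amount
        return max(1, min(127, int(round(v))))

    for track in mid.tracks:
        for msg in track:
            if msg.type == "note_on" and msg.velocity > 0:
                msg.velocity = expand(msg.velocity)

    mid.save("band_piece_expanded.mid")
    ```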

    If you do start using Sonar, there are more audio production things you can do - using an Exciter is a big one. I think you're hearing that fairly often on the clearer mixes you like. It brightens up the sound of a mix nicely if used in small amounts. A subtle bit of compression helps also.
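    (For what it's worth, the basic trick an Exciter performs can be sketched in a few lines: isolate the top end, saturate it slightly to generate new harmonics, and blend a little of that back in. This isn't any particular plugin's algorithm, and the crossover point and mix amount below are guesses:)

    ```python
    # Rough sketch of the "exciter" idea: add a touch of saturated high end.
    # Crossover frequency, drive, and mix amount are arbitrary examples.
    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    mix, sr = sf.read("band_mix.wav")                 # placeholder master file

    sos = butter(4, 3000, btype="highpass", fs=sr, output="sos")
    highs = sosfilt(sos, mix, axis=0)                 # isolate the top end
    harmonics = np.tanh(highs * 4.0)                  # soft saturation adds overtones

    excited = mix + 0.08 * harmonics                  # blend in a small amount
    excited /= max(1.0, np.max(np.abs(excited)))      # keep out of the red
    sf.write("band_mix_excited.wav", excited, sr)
    ```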

    But you're not getting into Sonar just now - so here's one easier thing I HIGHLY recommend for you. Get Audacity, the free audio editor, if you don't already have it or some other audio editor (they all do the same basic job). You don't need to figure out the whole program - simply use it to open your Finale-exported master recording and take a look.

    Here's a screenshot I did some time ago to show a Forum member a reason his Finale mix was sounding vague and unimpressive. The top part shows his recording as it looked when I opened it in my sound editor, Sound Forge. The lower image is after I simply Normalized the file so it was using much more of the potential volume range.



    Low volume contributes to the muddiness problem for several reasons. I just want to encourage you to get Audacity, look at your files, and use some simple tools like Normalize to get the volume up to what it should be. You may find yourself trying other tools available, like using a volume envelope to make dynamic changes even more dramatic. EQ is easy to work with also, if you find that the sum total of the mix still sounds like it needs help.
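    (Normalize itself is nothing mysterious - it measures the loudest peak in the file and applies one fixed gain so that peak lands just under full scale. A sketch of the whole operation; the file names and the -1 dB target are just examples:)

    ```python
    # What "Normalize" does, in a few lines: measure the peak, then apply one
    # fixed gain so that peak lands just under full scale.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("finale_export.wav")      # placeholder file name

    peak = np.max(np.abs(audio))
    target = 10 ** (-1.0 / 20)                    # -1 dBFS, a typical normalize target
    if peak > 0:
        audio = audio * (target / peak)           # same relative levels, just louder

    sf.write("finale_export_normalized.wav", audio, sr)
    ```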

    SO - there are some ideas. For me, getting as clear a mix as possible is always a challenge. It's not as if I always lick the problem as much as I'd like - Every mix is different with its own set of unique problems - So I sympathize with your desire to work towards cleaner mixes!

    Randy

  3. #3

    Re: Muddiness vs. Clarity

    Randy: Thanks so much for your response. Lots of great ideas here to try!

    I use Convolution reverb in the Aria player--not Ambience. I like it better than Ambience. I've also tried Gverb and a few others in Audacity, but I didn't care for them. I most often use the "modern symphonic concert hall" Convolution reverb selection for concert band pieces because it sounds realistic. I never place the send dial settings in Aria higher than the 9 o'clock position. That seems to provide just a touch of reverb. Do you apply reverb in Sonar and not Aria?

    I will try using more soloist instruments and fewer plr ones, as you suggest. And I do try to avoid pairing soloist 1 instruments with plr 1 instruments (and 2 with 2, 3 with 3, etc.).

    I'll also try your suggestions about rolling off the bass for woodwinds without boosting the highs. I've been boosting the highs especially with flutes because they always seem to sound so muffled and soft. Bumping up the volume overall helps a little, but if you have any suggestions on how to get the flutes to sound like they don't have a blanket over them, I'm all ears.

    And I should have figured that boosting the lows in low instruments would increase the boominess and mud! I'll try slight cuts in the lows. I think I've been turning that dial way too much to the right (increasing the lows) for low instruments!

    I had a hunch that stereo staging was causing more muddiness, and what you said made me kick myself for not following that hunch. I will now, for sure! After I post this note, I'll check out some of my scores and turn stereo staging off and see what they sound like. I'm pretty happy with how I've panned my concert band scores lately--across the range and nicely balanced. But I think SS just defeats that panning and bunches everything more in the center. That equals mud.

    Maybe I'll also try being very selective about which instruments get SS and which (a majority) don't. I'll experiment.

    I use Audacity. And, by the way, the newest version (2.0.4) was released just a day or so ago. In Finale, I notice a difference between renditions that are saved in Finale as audio files and those I play back and record in Audacity (record as "what U hear"). Do you have the same problem in Sibelius? Anyway, after I record a piece in Audacity, I usually amplify the whole thing to -1. I will try the Normalize tool, though, as you mentioned, and see if there's a difference. Funny--I've always wondered if I'm recording pieces at the "correct" volume. How would I determine that?

    I'll try the other tools in Audacity, too. Volume envelope? What is that, and is it in Audacity?


    Thanks again, Randy, for your very helpful post! I certainly have a lot of new ideas and a lot to try!

    Art

  4. #4

    Re: Muddiness vs. Clarity

    Hello, Arthur - Hmm, from your replies, I probably haven't given you all that much new info to try. Here, I'll insert some replies:

    Quote Originally Posted by gogreen1:
    ...I use Convolution reverb in the Aria player--not Ambience...I never place the send dial settings in Aria higher than the 9 o' clock position. That seems to provide just a touch of reverb. Do you apply reverb in Sonar and not Aria?
    Ah, OK. Impulse Responses, just by their very nature of being the actual sound of real spaces, are almost always better than the artificial digital simulations. The only problem with the one in ARIA is you have no control over the EQ of the reverb, and that's a big minus.

    Yes, I use reverb in Sonar, not ARIA. But, in case you didn't know, when I mix a project, I work totally with audio tracks. I bounce all my MIDI tracks to audio, and then dig into mixing. I simply can't get the sound I want straight from MIDI.

    Quote Originally Posted by gogreen1:
    ...I'll also try your suggestions about rolling off the bass for woodwinds without boosting the highs...if you have any suggestions on how to get the flutes to sound like they don't have a blanket over them, I'm all ears.
    Well, EQ is relative. Subtracting bass and not touching the highs is the same as boosting the highs. To get a more pronounced difference between highs and lows, you can be even more severe with your bass cut, and some high boosting is OK - you just don't want to do it a lot, because the results can be unattractive.

    Be sure to try playing with the EQ Mid control - you'll find that it has the most effect on a sound, because you have control over the frequency being affected. Just start twirling the dials and listening to the results. You'll find a sweet spot that will get your flutes brighter, the way you want.

    Quote Originally Posted by gogreen1:
    ...I'll try slight cuts in the lows. I think I've been turning that dial way too much to the right (increasing the lows) for low instruments!
    Good - I think you'll find that's your main problem. Those low instruments already have plenty of low end in the samples; boosting doesn't do anything but add boominess, which equals muddiness when several instruments are playing at once. This is a Biggie note.

    Quote Originally Posted by gogreen1:
    ...I use Audacity. And, by the way, the newest version (2.0.4) was released just a day or so ago. In Finale, I notice a difference between renditions that are saved in Finale as audio files and those I play back and record in Audacity (record as "what U hear"). Do you have the same problem in Sibelius? Anyway, after I record a piece in Audacity, I usually amplify the whole thing to -1. I will try the Normalize tool, though, as you mentioned, and see if there's a difference. Funny--I've always wondered if I'm recording pieces at the "correct" volume. How would I determine that?
    OOoh - you're already hip to Audacity. OK - and you're already bringing the signal up. Well then, Normalize isn't going to do any particular magic; I just wanted to make sure you were getting your recordings up to broadcast level. One thing about Normalize: as long as you don't already have in-the-red peaks in your file to begin with, it will only go up to whatever level you set and can't go over 0 dB. If you just boost the signal with a gain control or something, you could peak out. It looks like you have the basic volume issue under control.

    I'm concerned that you mentioned you see the "what U hear" message - I think that only shows up when you're using a Sound Blaster card from Creative. Is that what you have? The subject is long and deep, but the bottom line is that home recording enthusiasts long ago pronounced those cards The Worst. No intention of offending, but they really aren't meant for sound production and aren't capable of giving very good results. They're built for gamers, and all gamers need is loud playback - they're not doing anything as demanding as trying to actually produce and record music. You must get a home-studio-style audio interface with ASIO drivers. Better sound, and much, much less headache to work with.

    Maybe I ranted for nothing there - but I am suspecting you're using something from Creative, and if so, my strong recommendation is to replace it ASAP.

    Back to your question - You don't really need to worry about recording at a theoretically "correct" volume - As long as you use a sound editor, and you do, you can do the mastering work which brings the master recording up to what it should be. Mixes should be somewhat low - It's only in the mastering that it's brought up to broadcast level.

    You asked if I have problems with Sibelius - I don't ever record from Sibelius. In fact, I can barely tolerate the playback I hear as I work there - it's sufficient, I suppose, but I would Never let anyone hear what it sounds like. I'm in Sib only to make my score look right, and don't care about working on the playback. That's why I tremendously admire anyone who can squeeze something that sounds like music out of any notation program.

    Quote Originally Posted by gogreen1:
    ...Volume envelope? What is that, and is it in Audacity?
    Yes, it's in Audacity - that's why I mentioned it. DAW software like Sonar of course thrives on envelopes - we automate many things, including the volume; it's really the only way to get detailed mixes. But in an audio editor, there are similar volume envelopes that can be used on a master file. The one in Audacity takes some getting used to - I'm still clumsy with it, being much more accustomed to the easier one in Sound Forge - but basically you could take, for instance, a passage that's meant to be a crescendo and make the volume change even more dramatic than you were able to squeeze out of Finale. You can start lower and end higher by using the volume envelope to re-shape the .wav file. Used with discretion, that can really help punch up a mix.
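    (In code terms, a volume envelope is just a gain curve multiplied against a stretch of the file. A rough sketch of exaggerating a crescendo that way - the start/end times and gain values are invented for the example:)

    ```python
    # Sketch of a volume envelope on a mastered file to exaggerate a crescendo,
    # roughly what the Audacity Envelope Tool does by hand.  Values are invented.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("piece_master.wav")           # placeholder file name

    start, end = int(12.0 * sr), int(20.0 * sr)       # the crescendo passage
    ramp = np.linspace(0.6, 1.4, end - start)         # start quieter, end louder
    if audio.ndim == 2:
        ramp = ramp[:, None]                          # broadcast over stereo channels
    audio[start:end] = audio[start:end] * ramp

    audio /= max(1.0, np.max(np.abs(audio)))          # make sure nothing clips
    sf.write("piece_master_reshaped.wav", audio, sr)
    ```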

    I'll wrap this reply up with a general observation which isn't all that flattering to Finale and other notation programs. I've heard some excellent recordings from Finale users who are adept at using Human Playback - But I still hear mostly mediocre recordings from Finale users, and I'm not sure why. I don't know if they're not using Human Playback, or are just using it clumsily.

    But to my ears, the hallmark of the typical Finale rendering is that it's too tame. I chalk that up to major issues like how people regularly use only about six dynamic levels for volume control - pp through ff - compared to the 127 levels of volume available in MIDI. Power users like Forum member David "Et Lux" Sosnowski always say that the secret of their power is in inserting thousands of commands that they make invisible in the score. Incredible - I can't imagine.

    I suggest, Art, that you start wading into your Sonar program. It could very well be that next level of sound production you're looking for. I remember when I first started using Cakewalk/Sonar - it all looked like an alien landscape to me. I thought the Piano Roll View must be something dumbed down for people who couldn't read music. Well, I got into it, and there came the day it Clicked for me - this was letting me work with music in its purest form, sound, as compared to working within what I think are the severe limitations of notation alone, which I consider basically a by-product of the music. For me now, seeing all my instruments' notes interacting in that multi-layered PRV is the very heart of what I'm working on. I can See the sonic relationships between not just the notes, but their lengths, their velocity values, and how all the MIDI controllers I'm using are working on them.

    And so on - there's probably more you can do with Finale, but I know from experience that there's Definitely a tremendous amount more you can do with your music in a program like Sonar.

    Randy

  5. #5

    Re: Muddiness vs. Clarity

    I agree with all Randy has said, with a few exceptions (well, just one).

    But first:
    I think one of the causes for the muddiness you're hearing is that you're boosting the low frequencies of the low instruments... You need to at least try doing slight cuts in the lows on those instruments.


    The rule I learned in the sound engineering courses I've taken is that "less is more." That is to say, cut frequencies first and listen to the result - you will be surprised at what happens, even in the bass range. Cut the low frequencies differently for different bass instruments: the tuba and the trombone, for instance, don't necessarily play the same ranges in the bass clef. So cut the trombone's lowest range; that will leave room for the tuba to cut through the mix.

    As for my first comment about one exception to what Randy said: I have done mixes using only Finale, and I try NOT to use normalize as much as possible. I know the reasoning behind it, but I'd rather set the mix level right in Finale before recording it to a .wav file. That does mean there will be hidden dynamics ranging from pppp to ffff, with plenty of dynamic changes in between. I also edit my dynamic levels in Finale's Expression Tool editor.

    I was never a fan of normalizing, and I do remember someone (can't remember who now) once told me "Never, never normalize." I do remember it was a respected musician friend, but I don't remember the details of the conversation.

    Anyway, that's my two cents, for whatever it may be worth to you.
    [Music is the Rhythm, Harmony and Breath of Life]
    "Music is music, and a note's a note" - Louis 'Satchmo' Armstrong

    Rich

  6. #6

    Re: Muddiness vs. Clarity

    I 2nd the thoughts on never using normalise. I never touch it.

    On the EQ, I don't boost anything, I only trim.

    I apply high-pass filters to everything, but at different points depending on the low range of the instrument. Contrabassoon is the lowest instrument, so that gets cut basically at the inaudible level of 20 Hz and below. Others creep up: tuba at 25 Hz, trombones at 40 Hz or thereabouts. Low percussion needs room around there as well. Violins you can generally cut around the 100 Hz range (but always check that you are not affecting the sound quality).
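    (As a sketch of that per-instrument approach, here is one way the low cuts could be done offline on exported stems. The cutoff values are the ones mentioned above; the filter order and the file names are just one possible setup, not a prescription:)

    ```python
    # Sketch of per-instrument high-pass filtering on exported stems.
    # Cutoffs follow the values quoted above; everything else is illustrative.
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    HIGHPASS_HZ = {
        "contrabassoon": 20,
        "tuba": 25,
        "trombones": 40,
        "violins": 100,
    }

    def highpass(audio, sr, cutoff_hz, order=4):
        sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
        return sosfilt(sos, audio, axis=0)

    for name, cutoff in HIGHPASS_HZ.items():
        audio, sr = sf.read(f"{name}.wav")            # placeholder stem file names
        sf.write(f"{name}_hp.wav", highpass(audio, sr, cutoff), sr)
    ```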

    I do the same with the highs, cutting out some of the unwanted hiss and noise that might creep up into that range. It depends on the samples used, and I would recommend a visual EQ that displays the frequencies affected.

    Something like this: (this is not my EQ line)
    Website: www.grahamplowman.com | Twitter: @GPComposer

  7. #7

    Re: Muddiness vs. Clarity

    Randy: Thanks again for these great ideas! Boy, do I have a lot of "homework"!

    Well, EQ is relative. Subtracting bass and not touching the highs, is the same as boosting the highs. To get a more pronounced difference between highs and lows, you can be even more severe with your bass cut, and some high boosting is OK, you just don't want to do it a lot, because the results can be unattractive.
    Thanks for this explanation! When you say, "you don't want to do it a lot," do you mean do it in selected passages where the flutes are prominent, for example - not on the entire staff? I don't think that can be done in Aria.

    Be sure to try playing with the EQ Mid control - You'll find that it has the most effect on a sound, because you have control over the frequency being effected. Just start twirling the dials and listening to results. You'll find a sweet spot that will get your flutes more bright the way you want.


    OK--the mid EQ control. I'll try that!

    Good- I think you'll find that's your main problem. Those low instruments already have plenty of low end in the samples, boosting doesn't do anything but add boominess which equals muddiness when several instruments are playing at once. This is a Biggie note.
    I hear you on the "biggie" note.

    I'm concerned that you mentioned you see the "what U hear" message - I think that only shows up when you're using a Sound Blaster card from Creative. Is that what you have?... You must get a home-studio-style audio interface with ASIO drivers.
    My soundcard is an M-Audio Delta Audiophile 2496. There might be other ways than "what U hear" of capturing the sound in Audacity, but it seems to work well for me. I'll take another look at Audacity and see if I missed something.

    Yes, it's in Audacity - that's why I mentioned it... Basically you could take a passage that's meant to be a crescendo and make the volume change even more dramatic than you were able to squeeze out of Finale... Used with discretion, that can really help punch up a mix.
    I'll check this out. Thanks!

    I'll wrap this reply up with a general observation which isn't all that flattering to Finale and other notation programs... But to my ears, the hallmark of the typical Finale rendering is that it's too tame... Power users like Forum member David "Et Lux" Sosnowski always say that the secret of their power is in inserting thousands of commands that they make invisible in the score.
    I use a few "tricks" in Finale to expand the dynamic range. First, I copy several of the dynamics and then alter their velocity. In Finale, dynamics are 13 velocity steps apart. For instance, mf is a velocity of 75 (on a 0-127 scale), and f is 88. When I copy the mf dynamic, I rename it and assign it a higher velocity. I usually have to do this with the GPO4 clashed cymbals sound. I usually have to tone the suspended cymbal sound way down, so I do the opposite with that. In this way, as long as I keep track of which dynamics have which sound level, I can use the "correct" dynamics in the score, but they'll play at the levels I really want.

    Second, I usually end up inserting a lot of hidden expressions for all sorts of things--dynamics, tempo markings, etc., to get a piece to sound right.
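    (To make the bookkeeping of that first trick concrete, the scheme amounts to a small lookup table: the stock markings step by 13, and each renamed copy points at whatever velocity you actually want. Only mf = 75, f = 88, and the 13-step spacing come from the description above; the other stock values shown simply follow that pattern and may differ in a given copy of Finale, and the cymbal numbers are invented examples:)

    ```python
    # Sketch of the "copied dynamics" bookkeeping as a lookup table.
    # Only mf=75, f=88, and the 13-step spacing come from the post;
    # other values follow that pattern and the cymbal entries are invented.
    STOCK_VELOCITY = {"pp": 36, "p": 49, "mp": 62, "mf": 75, "f": 88, "ff": 101}

    CUSTOM_VELOCITY = {
        "f (crash cymbals)": 110,   # renamed copy of f, pushed up for the cymbals
        "f (susp. cymbal)": 60,     # renamed copy of f, toned way down
    }

    def velocity_for(marking: str) -> int:
        return CUSTOM_VELOCITY.get(marking, STOCK_VELOCITY.get(marking, 64))

    print(velocity_for("f"))                   # 88  - the normal marking
    print(velocity_for("f (crash cymbals)"))   # 110 - what it actually plays at
    ```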

    I suggest, Art, that you start wading into your Sonar program. It could very well be that next level of sound production you're looking for... There's probably more you can do with Finale, but I know from experience that there's Definitely a tremendous amount more you can do with your music in a program like Sonar.
    I hear you on this. Is there a particular tutorial that would be good--a "from the beginning" one?

    Randy, thanks so much again. This is all VERY helpful--lots and lots to digest! I've copied your two messages to a Word file and placed the document in my Finale Help Files folder for future reference!

    Art



  8. #8

    Re: Muddiness vs. Clarity

    My two cents (because it tends to be the problem that I see with my own charts) is to check your orchestrations first. While all ideas above can be helpful in dealing with the peculiarities of sample libraries and cleaning up a mix, it's of limited value if the parts aren't voiced effectively. Because the only instrument I played was the piano, I tend to put too much harmony in the high to mid bass range. A triad in this range on the piano or harpsichord is a lot lighter and more transparent than three trombones playing the same chord, whatever the volume level. Because I am using a string bass, rather than an electric guitar, in my score, I am reluctant to let it play the bass line alone in the heavier numbers. I double it (at the unison or octave) with a bass wind instrument (or the keyboard, if it is used in that number). But then I have to be a lot more careful about what instruments I put above it and how they are voiced.

    I did get some good advice early on, from Vince Corozine's book "Arranging Music for the Real World" and then from our subsequent e-mail correspondence, about using too much harmony in a piece. Until then, I had always assumed that the more I could harmonize, the better. His idea is that if one section (brass, for example) is fully harmonized (i.e., every instrument playing a different note), then the other sections should be doing something completely different (such as the strings playing the melody in unison and the woodwinds doing fills, countermelodies, and/or ornamentation).

    Whenever I am "stuck" for how to fix something that sounds either too muddy, or simply too bland, I go back to two sources: RK's "rules" and Vince's advice. Either one or the other usually clarifies things for me and gives me an idea as to how to begin to straighten things out. In a muddy mix, this usually means less harmony in a section or in the bass range (and occasionally moving it somewhere else).

    For those who are interested, there are several articles about arranging on Vince's site at www.vincecorozine.com.

    I found his article on "Getting Out of the Gray" (titled "Arranging Tips" on his web site) particularly helpful.

    Allegro Data Solutions

  9. #9

    Re: Muddiness vs. Clarity

    That's actually really good advice, ejr. A common mistake is to have chord triads voiced several times over within one instrument family, or too heavily across the orchestra.

    Another problem is voicing the triad in too low a register. Anything below C2 is too low, and depending on the instruments, anything below F2 or thereabouts can be too low as well.
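    (If it helps to think of it in MIDI terms - assuming the common convention where middle C, C4, is note 60, so C2 = 36 and F2 = 41 - the check is simply whether any harmony tone above the bass sits below that floor. A small sketch with a made-up chord:)

    ```python
    # Quick check for the "triad voiced too low" problem.  Assumes C4 = MIDI 60,
    # so C2 = 36 and F2 = 41; the chord data is made up for the example.
    C2, F2 = 36, 41

    def flag_low_voicing(chord_midi_notes, floor=C2):
        """Return chord tones (other than the bass note) that sit below `floor`."""
        bass = min(chord_midi_notes)
        return [n for n in chord_midi_notes if n != bass and n < floor]

    chord = [29, 33, 36]            # F1, A1, C2 - a close triad at the bottom of the bass clef
    print(flag_low_voicing(chord))  # [33] -> that A1 is probably muddying things
    ```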
    Website: www.grahamplowman.com | Twitter: @GPComposer

  10. #10

    Re: Muddiness vs. Clarity

    Excellent, Rich - and Graham! (you posted after I started writing this post) - Thanks for adding your expertise to the discussion Art started.

    Rich, your example of making room for both the bass trombone and tuba is a perfect one. It's like in pop music, where they're always deciding which should have the bassier sound, the kick drum or the bass. During the period when the bass guitar was given the low-end lead, that distinctive "click" kick-drum sound was developed by thinning out much of the drum's bass content.

    There's a program we discussed here a long time ago - "Har Bal" (an unfortunate name - it always makes me think of "hair ball," as in, well, you know - has to do with cats) - which refers to "Harmonic Balance." It's a very sophisticated spectrum analyzer that helps you get your mix closer to the sonic profile of professionally produced music. Using most any spectrum analyzer can be helpful for seeing the frequencies that are popping out and perhaps skewing the sound of a mix.

    As for Normalization - and Graham has weighed in on that this morning also - it's a perfectly safe tool to use when you have a mix that isn't filling up the full audio range. Fear and loathing of it is almost in the "urban legend" category of the music world. It doesn't change the sound or quality of a recording in the least, and it often seems to be confused with compression, which Does change the dynamic relationship between loud and quiet passages.

    Normalization maintains the exact same relative levels and simply brings the whole track up to either 0 dB or (recommended) a fraction under that, if you set the control that way. Better to bring up the volume than be stuck with an otherwise good mix that's wimpy in volume. My advice is to always, always, always use Normalization if your 2-track master needs it.

    Being able to get levels perfect straight from Finale's mixer, without going in the red, is great. I just wouldn't spend too much time struggling to get that level since a slightly low mix is so easily fixed up in an audio editor. And in DAW software, the aim is to Not have your mix at full volume. You need head room for mastering, at the end of which the overall volume is brought up to the max. If you're working with a pro mastering engineer, he's going to insist that your master leaves him plenty of dBs to play with. When we use a program like Audacity at the end of our production chain, we're trying to be our own mastering engineers, and to have some head room can really come in handy.

    Quick example of powering up a mixed-down master: if we have several dB of headroom in the track, we have the chance to make our ff passages even more dramatic, because with a volume envelope we can produce an even bigger crescendo than we were able to get with MIDI and mixing. Then, when we've tweaked volumes throughout the track, run a DC Offset filter, and perhaps worked more with EQ (especially if a spectrum analyzer has revealed chances for improvement), we want to bring the master up to a bit under 0 dB. That last step of increasing the volume could be done with a Gain control (simply called "Volume" in Sound Forge), but it would probably be a process of trial and error, trying different levels until you hit on the right one without going into the red - when the exact same volume correction, with no fear of clipping, could have been done in a second with a good Normalization plugin.
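    (The arithmetic behind that is simple enough to write out: measure the peak in dBFS, and the "one second" normalization step is just applying the single gain that moves that peak to the target. The file name and the -0.5 dB target below are arbitrary:)

    ```python
    # The headroom/normalization arithmetic: peak level in dBFS, then one gain
    # that moves that peak to the target.  File name and target are examples.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("mixdown.wav")

    peak = np.max(np.abs(audio))
    peak_dbfs = 20 * np.log10(peak)          # e.g. -6.2 dBFS: about 6 dB of headroom
    target_dbfs = -0.5                       # "a bit under 0 dB"

    gain_db = target_dbfs - peak_dbfs        # exactly the gain a Normalize applies
    audio = audio * 10 ** (gain_db / 20)

    print(f"peak was {peak_dbfs:.1f} dBFS, applied {gain_db:+.1f} dB of gain")
    sf.write("mixdown_mastered.wav", audio, sr)
    ```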

    The pros don't use Normalization and look down on it, but that shouldn't give us the impression that there's something inherently wrong with it. PhotoShop pros sneer at the simple "brightness/contrast" filter, because they've mastered how to use the more complex "Levels" and "Curves" tools available in PhotoShop. But not everyone has an interest in becoming a PhotoShop pro, so the "b/c" filter is quick and efficient for the average user to grab and spiff up their snapshots.

    Same with sound - There's a limit to how far most composers want to go towards becoming "pro engineers" - Like Art, they just want to make decent demos of their work. A simple process like Normalization, which does Nothing to change the sonic quality of a recording, is the perfect kind of tool for such people--and that's Us!

    Randy
