
Topic: non-realtime sound synthesis

  1. #1

    non-realtime sound synthesis

    Does anyone know of any programs that let you synthesize high quality wave files to be used in a sampled instrument? Most work in synthesis seems to be geared towards realtime use (understandably); however, if one could generate wave files, a realtime gig could then be made from them. I have in mind something along the lines of physical synthesis, which I know can be quite processor intensive for a detailed simulation.


  2. #2

    Re: non-realtime sound synthesis

    Well, if you want to go really hard-core and you know C programming, take a look at Csound. It was built around exactly this kind of non-realtime work: you describe instruments and a score, and it renders the result to a sound file at whatever pace your machine manages.
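
    For a flavour of what the non-realtime approach involves: here's a rough sketch in plain C (my own toy, not Csound) of a Karplus-Strong plucked string rendered straight to a 16-bit mono wave file. There is no realtime deadline anywhere -- the program just runs the string model and writes the result.

    #include <stdio.h>
    #include <stdlib.h>

    #define SR 44100  /* sample rate */

    /* write little-endian fields for the WAV header */
    static void le32(FILE *f, unsigned v) { fputc(v, f); fputc(v >> 8, f); fputc(v >> 16, f); fputc(v >> 24, f); }
    static void le16(FILE *f, unsigned v) { fputc(v, f); fputc(v >> 8, f); }

    int main(void)
    {
        double freq = 220.0, secs = 2.0;
        int n = (int)(SR * secs);
        int period = (int)(SR / freq);            /* delay-line length sets the pitch */
        float *delay = malloc(period * sizeof *delay);
        short *out = malloc(n * sizeof *out);
        int i;

        /* "pluck": fill the delay line with noise */
        for (i = 0; i < period; i++)
            delay[i] = (float)rand() / RAND_MAX * 2.0f - 1.0f;

        /* run the string: average neighbouring samples with slight damping */
        for (i = 0; i < n; i++) {
            float s = delay[i % period];
            delay[i % period] = 0.996f * 0.5f * (s + delay[(i + 1) % period]);
            out[i] = (short)(s * 32000.0f);
        }

        /* minimal canonical WAV header (sample data assumes a little-endian machine) */
        FILE *f = fopen("pluck.wav", "wb");
        fwrite("RIFF", 1, 4, f); le32(f, 36 + n * 2);
        fwrite("WAVEfmt ", 1, 8, f); le32(f, 16);
        le16(f, 1); le16(f, 1);                   /* PCM, mono */
        le32(f, SR); le32(f, SR * 2);             /* sample rate, byte rate */
        le16(f, 2); le16(f, 16);                  /* block align, bits per sample */
        fwrite("data", 1, 4, f); le32(f, n * 2);
        fwrite(out, 2, n, f);
        fclose(f);
        free(delay); free(out);
        return 0;
    }

    The point being: once you drop the realtime constraint, the model can be as expensive as you like and the wave file comes out the same.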

  3. #3

    Re: non-realtime sound synthesis

    Apparently Reaktor is the poodle's plums for this kind of thing.

  4. #4

    Re: non-realtime sound synthesis

    >I have in mind something along the lines of physical synthesis, which I know can be quite processor intensive for a detailed simulation.<


    I don't exactly understand the part about wave files, but if you are just looking for physical-modelling (PM) synthesis, take a look at the Korg OASYS card. Brilliant PM bass/guitar, nice flutes, etc. As it is hardware based, it is much easier on the processor than a software solution.

    kind regards



  5. #5

    Re: non-realtime sound synthesis

    \"Virtual Waves\" by Synoptic sounds like exactly what you\'re looking for. I haven\'t used it myself but the write ups on it suggest it is capable of very complex sythesis in a wide variety of styles and can work non-real time to fully utilize the power of your computer. Check it out at www.synoptic.net

  6. #6

    Re: non-realtime sound synthesis

    Here's a suggestion for a potential extension of Gigasampler's (Gigastudio's) functionality which differs a bit from what Duncan asked for. Most (if not all) of the quality problems and voice-number limitations in Giga stem from the fact that it's a realtime software sampler. You can tell from hearing pops etc. when operating near the voice limit. (This is not surprising when you consider the amount of data that needs to be streamed from the disk(s) in realtime.)
    On the other hand, the final goal of most users is to record high quality arrangements into a wav file, and this is really not a realtime task.
    So why not give Giga a MIDI queue and (OPTIONALLY) let it generate the final recording at its own pace, offline?
    This would be like applying complex effect plugins offline in Wavelab.
    I see no reason why a limitation to 96 or 160 voices would be necessary if this were done.
    Plus you could use an (almost) unlimited number of samples in parallel, even with small RAM.
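
    In rough C-like terms the whole control flow would be trivial (my own toy sketch below -- the note_on/note_off/render names are stand-ins, nothing to do with Giga's actual internals): walk the queued MIDI events in time order and render the audio between them at whatever pace the disks allow.

    #include <stdio.h>

    #define SR 44100

    struct midi_event { double time; int note; int on; };

    /* hypothetical engine hooks -- placeholders for the sampler itself */
    static void note_on(int note)  { printf("note on  %d\n", note); }
    static void note_off(int note) { printf("note off %d\n", note); }
    static void render(long frames) { printf("render %ld frames\n", frames); /* stream, mix, write wav */ }

    int main(void)
    {
        /* the queued performance: a C note then an E note, one second each */
        struct midi_event q[] = { {0.0, 60, 1}, {1.0, 60, 0}, {1.0, 64, 1}, {2.0, 64, 0} };
        int nq = sizeof q / sizeof q[0];
        double now = 0.0;
        int i;

        for (i = 0; i < nq; i++) {
            render((long)((q[i].time - now) * SR)); /* audio up to this event, offline */
            now = q[i].time;
            if (q[i].on) note_on(q[i].note); else note_off(q[i].note);
        }
        return 0;
    }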

    By the way, I realize that this is completely off-topic. Sorry about that!


  7. #7

    Re: non-realtime sound synthesis

    Thanks for the suggestions. I tried out Synoptic's Virtual Waves demo, but after scratching the surface it did not seem to cut it for constructing natural-sounding instruments. I haven't tried the Tassman soft synth, but it looks promising (although I hear users complaining they have not had much luck setting up natural sounds, such as bowed strings). I may buy Sonar to get it (I'm currently using Cakewalk 9). I do program, however, so perhaps Csound is the way to go, although I can imagine it would eat up a huge amount of time.

    Ultimately I could see physical sound synthesis replacing sampling when software is smart enough and computers are fast enough. Gigasampler, with its disk streaming and sample dimensions, really weakens the need for this kind of synthesis, however, by making sample memory not such a big issue.

    I like the flexibility, however, of being able to design a sound along with its response to expression, then "bake" it as a gig for fast performance. Thus one could use the fanciest algorithms when designing the sound.

    Some effects like legato, portamento, cross resonance and string damping would be trickier to bake down to samples, although not impossible.

    Joe's offline gig render suggestion should get added to the wishlist thread (if it's not on there already).

    There are lots of similarities between the world of music sequencing and the world of computer animation (in which I work). We use hardware to help display scenes as we construct them, but render them in batch mode using fancier methods (that can take hours per frame). If a scene gets too complex, animators will render it in separate layers and composite. This is similar to mixing down tracks on a sequence with high polyphony.

    We use physical force models to get natural motion in some cases, but these can be slow, and for many things keyframing is easier. I suppose keyframing is a bit like entering notes into a sequencer by clicking. We also use realtime performance-capture devices, which is like capturing a MIDI recording of a performance. Music work, however, tends to be much more realtime than visual work.

    We break things into three processes: modeling, animation and rendering. In music, playing a chord constructs a model (a set of voices), playing a sequence of chords defines the "animation" of the voices, and the sound created by the voices is the "rendering". Thus these three processes are simultaneous in music. (Oh well, back to work.)


  8. #8

    Re: non-realtime sound synthesis

    Hi Duncan,

    I've been thinking about the similarities recently as well!

    I'm currently working with particle systems. For those who don't know, these consist of thousands of points which the computer throws out into 3D space with certain velocity and weight calculations. In order to control these thousands of points it's important to write mathematical formulas: it would be silly to go in and move every one of them one by one, so it is better to give the computer an algorithm by which to calculate the placement of each point automatically. In its simplest form this algorithm might be `position = sin(time)`.
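
    A minimal sketch of what I mean, in C (a toy of my own, not from any particular package) -- one formula places every particle each frame, instead of anyone placing them by hand:

    #include <math.h>
    #include <stdio.h>

    #define N 1000  /* number of particles */

    int main(void)
    {
        float y[N];
        float t = 0.0f, dt = 1.0f / 30.0f;      /* 30 frames per second */
        int frame, i;

        for (frame = 0; frame < 90; frame++) {  /* three seconds of animation */
            for (i = 0; i < N; i++)
                y[i] = sinf(t + i * 0.01f);     /* position = sin(time), phase-offset per particle */
            t += dt;
        }
        printf("last particle ended at y = %f\n", y[N - 1]);
        return 0;
    }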

    Believe it or not, there is a marked similarity to music here. Imagine, instead of thousands of points, hundreds of notes or waveforms being thrown out. Each has its own velocity, pitch, etc. Of course, one way in which we control this excess of notes is by playing them ourselves and imbuing them with our performance. But the computer can change them as well, through LFOs. When we set a pitch LFO to a sine wave we are essentially saying `pitch = sin(time)`. Notice the similarity?

    So expressions (mathematical formulas) do for particles what LFOs do for MIDI data.

    Yet there is not a single audio package that I know of for which I can write expressions to control MIDI data! Both Duncan and I can begin to imagine the kind of crazy sonic possibilities which would be achievable if there were just a simple expressions editor in GigaStudio...

    filter = sin(time*pitch) + midiCC_18;
    release = (attack*sampleStart) / modWheel;

    if(cutoff <= 0.5) resonance = abs(cutoff - 1);
    else resonance = cutoff * midiCC_20;

    Such a simple concept, but boy, it'd blow the whole thing wide open. After expressions, an LFO with a plain sine just looks Neanderthal.
    Perhaps I'll enter it on the wish list.
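
    For the curious, here is how little an expression evaluator amounts to, written out in C (my own toy -- the controller names are the stand-ins from my examples above, not any real GigaStudio API):

    #include <math.h>
    #include <stdio.h>

    /* controller values as a host might hand them over, normalized 0..1 */
    struct controls {
        float time, pitch, attack, sampleStart, cutoff;
        float modWheel, midiCC_18, midiCC_20;
    };

    /* filter = sin(time*pitch) + midiCC_18 */
    static float eval_filter(const struct controls *c)
    {
        return sinf(c->time * c->pitch) + c->midiCC_18;
    }

    /* release = (attack*sampleStart) / modWheel  (guarded against a zeroed wheel) */
    static float eval_release(const struct controls *c)
    {
        return c->modWheel > 0.0f ? (c->attack * c->sampleStart) / c->modWheel : 0.0f;
    }

    /* if(cutoff <= 0.5) resonance = abs(cutoff - 1); else resonance = cutoff * midiCC_20 */
    static float eval_resonance(const struct controls *c)
    {
        if (c->cutoff <= 0.5f)
            return fabsf(c->cutoff - 1.0f);
        return c->cutoff * c->midiCC_20;
    }

    int main(void)
    {
        struct controls c = { .time = 0.5f, .pitch = 0.7f, .attack = 0.1f,
                              .sampleStart = 0.3f, .cutoff = 0.4f,
                              .modWheel = 0.5f, .midiCC_18 = 0.2f, .midiCC_20 = 0.9f };
        printf("filter %f, release %f, resonance %f\n",
               eval_filter(&c), eval_release(&c), eval_resonance(&c));
        return 0;
    }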

  9. #9

    Re: non-realtime sound synthesis

    You should be able to do a bit along these lines by using CAL in Cakewalk to program your expressions.

    Or, MIDI Yoke allows you to program your own filters onto MIDI data, and there is nothing stopping you from putting any expression in there.
