"Malcolm McLean" <re*******@btin ternet.comwrote in message
news:e8******** *************** *******@bt.com. ..
"kid joe" <sp******@spamt rap.invalidwrot e in message
>I've got interested in learning some basic sound programming bits in
C... mainly I want to know how to go about accessing the sound devices -
reading from them mainly - in windows and linux... I'd kind of like to be
able to do it without a whole bunch of extra garbage added in there - by
this I mean that I know in windows there are a million sound programming
packages that make the whole process "easier" - there are also a few in
linux but I think the raw stuff I'm interested in understanding is a bit
more simple in linux b/c of the way devices work in it.
So if anyone can point me at a place to start - maybe some really raw
source code for linux and windows - I would really appreciate it.
> It is rather more involved than you think.
> The problem is that audio devices need to be fed a continuous stream of
> raw bits, whilst generally you want the processor to spend most of its
> time dealing with the rest of the program, like moving space invaders
> about the screen.
yes, and sadly, getting this right without blocking the app (or causing
annoying auditory artifacts) is a little harder than it may seem (at least
for single-threaded apps).
> So unless you want to do difficult multi-tasking programming at the
> device level, you need a certain layer of abstraction. The question then
> becomes "which one?". For space invaders you can probably get away with
> an interface that says "play sound". It puts a bleep or an explosion into
> the audio queue, returns control to you almost immediately, and a
> millisecond or so later you'll hear the sound on the speakers.
> For a more advanced use of audio, this isn't sufficient. You'll want to
> be able to cancel jobs, to submit long sequences instead of tiny clips,
> to change the volume, to stream sound in from a backing store, maybe to
> synthesise samples on the fly.
> So it becomes difficult to know what level of abstraction to use. Too
> low and you're doing messy parallel programming, too high and you're
> calling MIDI instruments and the like when you just want to say "play
> this".
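a minimal sketch of the fire-and-forget "play sound" style of interface, for
illustration (all names here are made up, not from any particular library):
the call just enqueues the clip and returns immediately, while a mixer
thread or device callback drains the queue in the background.

```c
#include <stddef.h>

#define MAX_QUEUED 64

typedef struct {
    const short *samples;  /* 16-bit PCM clip, already in RAM */
    size_t       length;   /* number of samples */
    size_t       pos;      /* mixer's read position */
} queued_sound;

static queued_sound queue[MAX_QUEUED];
static int queue_count = 0;

/* returns 0 on success, -1 if the queue is full;
   control returns at once, audio plays asynchronously */
int play_sound(const short *samples, size_t length)
{
    if (queue_count >= MAX_QUEUED)
        return -1;
    queue[queue_count].samples = samples;
    queue[queue_count].length  = length;
    queue[queue_count].pos     = 0;
    queue_count++;
    return 0;
}
```

(a real version would need locking or a lock-free ring buffer, since the
mixer drains the queue from another thread or interrupt context.)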
a generally workable approach I found was to implement a mixer, which
created temporary "mix streams". these streams basically just provided a
means for the mixer to demand a certain number of samples. the streams
themselves carried various info (current origin, spatial velocity, ...)
allowing for effects like doppler shifting (as well as just the "things are
quieter when far away" effect).
these were typically structs making use of callbacks.
playing a sound typically involved creating a stream with the right
properties (handled automatically by various "play a sound" functions), with
the stream typically destroying itself when done.
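roughly, such a mix-stream struct could look like this (the names are mine,
just to sketch the idea): the mixer pulls samples via a callback, and a
stream signals it is finished by returning fewer samples than asked for, at
which point the mixer destroys it.

```c
#include <stdlib.h>
#include <string.h>

typedef struct mix_stream mix_stream;
struct mix_stream {
    /* fill 'out' with up to 'want' mono samples; return how many
       were produced (a short count means the stream is finished) */
    size_t (*get_samples)(mix_stream *s, float *out, size_t want);
    void   (*destroy)(mix_stream *s); /* called by the mixer when done */
    void   *userdata;                 /* stream-specific state */
    float   origin[3];                /* position, for spatial effects */
    float   velocity[3];              /* for doppler */
};

/* example stream type: plays a clip held in RAM, then self-destructs */
typedef struct { const float *clip; size_t len, pos; } clip_state;

static size_t clip_get(mix_stream *s, float *out, size_t want)
{
    clip_state *cs = s->userdata;
    size_t n = cs->len - cs->pos;
    if (n > want) n = want;
    memcpy(out, cs->clip + cs->pos, n * sizeof *out);
    cs->pos += n;
    return n;
}

static void clip_destroy(mix_stream *s)
{
    free(s->userdata);
    free(s);
}

/* a "play a sound" convenience: build the stream for the caller
   (normally it would be registered with the mixer here as well) */
mix_stream *play_clip(const float *clip, size_t len)
{
    clip_state *cs = malloc(sizeof *cs);
    mix_stream *s  = malloc(sizeof *s);
    cs->clip = clip; cs->len = len; cs->pos = 0;
    memset(s, 0, sizeof *s);
    s->get_samples = clip_get;
    s->destroy     = clip_destroy;
    s->userdata    = cs;
    return s;
}
```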
the interface also worked fairly well for playing audio from videos, and
from songs in the form of mp3s (typically, sound effects are just buffered
into ram, but songs are better streamed since they can take a decent-sized
chunk of memory to store).
anything else can be played as well, so long as the right callbacks are
provided.
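a streamed source fits the same pull-callback shape: instead of holding the
whole song in RAM, the callback reads the next chunk from disk. shown here
with raw PCM floats in a FILE* for simplicity (a real mp3 source would run
the chunk through a decoder at this point; the names are hypothetical).

```c
#include <stdio.h>
#include <stddef.h>

typedef struct { FILE *fp; } file_state;

/* fill 'out' with up to 'want' samples read from the file;
   a short read means end of song, so the mixer can then destroy
   the stream and close the file */
static size_t file_get(void *userdata, float *out, size_t want)
{
    file_state *fs = userdata;
    return fread(out, sizeof *out, want, fs->fp);
}
```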
note: callbacks may be passed a chunk of "user data" (as well as a stream
id), which is another useful trick here; it is put in the struct when
creating the stream. this is usually a pointer holding whatever the
stream-specific functions consider important (GTK does similar...).
also note:
as an interesting effect of having doppler shifting and other effects, not
all of the streams may be strictly temporally in-sync, since moving away
from a stream causes it to be played slower and moving towards it makes it
play faster (I make sounds just "cut out" near mach-1, since otherwise there
are annoying zero-division issues).
a related trick was to add a delay calculated from the distance of the sound
from the camera, such that, say, the sound of a distant explosion takes a
little while to arrive (first we see the explosion, and then the sound hits
a short time later).
note that as an effect of the geometry: when one is far away from an audio
source they are out of sync with it (temporally and possibly also in terms
of rate), but as they move closer the sync is regained, such that upon
reaching the source it is playing more or less in realtime (and other
sources they were originally near have drifted out of sync).
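the math behind these two effects is cheap; a rough sketch (assuming sound
in air, speed c = 343 m/s; the 0.95 cutoff near mach-1 is an arbitrary
threshold, chosen here to mirror the "cut out" rule and dodge the division
by zero):

```c
#include <math.h>

#define SPEED_OF_SOUND 343.0  /* m/s, in air */

/* radial_velocity: rate at which the source approaches the listener
   (m/s, positive = approaching).  returns the playback-rate multiplier
   (>1 plays faster, <1 slower), or 0.0 meaning "cut the sound out". */
double doppler_rate(double radial_velocity)
{
    if (fabs(radial_velocity) >= 0.95 * SPEED_OF_SOUND)
        return 0.0;  /* near mach-1: just cut out */
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_velocity);
}

/* seconds before a sound emitted at 'distance' metres is heard */
double propagation_delay(double distance)
{
    return distance / SPEED_OF_SOUND;
}
```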
....
one notably missing effect, though, is echo-modeling (or dealing with sounds
being otherwise blocked or distorted by geometry), since this is
computationally expensive (an "echo effect", "dampen effect", ... being much
cheaper).
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm