
Sound programming

Hello

I've gotten interested in learning some basic sound programming in
C... mainly I want to know how to go about accessing the sound devices -
reading from them, mostly - in Windows and Linux. I'd like to be able to
do it without a whole bunch of extra garbage added in: I know that in
Windows there are a million sound programming packages that make the
whole process "easier", and there are a few in Linux too, but I think the
raw stuff I'm interested in understanding is a bit simpler in Linux
because of the way devices work there.

So if anyone can point me at a place to start - maybe some really raw
source code for Linux and Windows - I would really appreciate it.

Thanks!

Jul 7 '08 #1
4 Replies


"kid joe" <sp******@spamtrap.invalidwrote in message
I've got interested in learning some basic sound programming bits in
C... mainly I want to know how to go about accessing the sound devices -
reading from them mainly - in windows and linux... I'd kind of like to be
able to do it without a whole bunch of extra garbage added in there - by
this I mean that I know in windows there are a million sound programming
packages that make the whole process "easier" - there are also a few in
linux but I think the raw stuff I'm interested in understanding is a bit
more simple in linux b/c of the way devices work in it.

So if anyone can point me at a place to start - maybe some really raw
source code for linux and windows - I would really appreciate it.
It is rather more involved than you think.

The problem is that audio devices need to be fed a continuous stream of raw
bits, whilst generally you want the processor to spend most of its time
dealing with the rest of the program, like moving space invaders about the
screen.

So unless you want to do difficult multi-tasking programming at the device
level, you need a certain layer of abstraction. The question then becomes
"which one?". For space invaders you can probably get away with an interface
that says "play sound". It puts a bleep or an explosion into the audio
queue, returns control to you almost immediately, and a millisecond or so
later you'll hear the sound on the speakers.
For a more advanced use of audio, this isn't sufficient. You'll want to be
able to cancel jobs, to submit long sequences instead of tiny clips, to
change the volume, to stream sound in from a backing store, maybe to
synthesise samples on the fly.

So it becomes difficult to know what level of abstraction to use. Too low and
you're doing messy parallel programming, too high and you're calling MIDI
instruments and the like when you just want to say "play this".
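
As an illustration of the "play sound" level, on Windows that fire-and-forget
call already exists in the system API (a minimal sketch, linking against
winmm; the filename is just a placeholder):

/* Queue a clip and return control immediately; the sound plays in the
   background while the program carries on moving space invaders about. */
#include <windows.h>
#include <mmsystem.h>

void play_explosion(void)
{
    PlaySound(TEXT("explosion.wav"), NULL, SND_FILENAME | SND_ASYNC);
}

That is roughly the whole of the "bleep into the audio queue" level: one
call, no cancelling, no mixing control.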

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jul 7 '08 #2


"Malcolm McLean" <re*******@btinternet.comwrote in message
news:e8******************************@bt.com...
"kid joe" <sp******@spamtrap.invalidwrote in message
>I've got interested in learning some basic sound programming bits in
C... mainly I want to know how to go about accessing the sound devices -
reading from them mainly - in windows and linux... I'd kind of like to be
able to do it without a whole bunch of extra garbage added in there - by
this I mean that I know in windows there are a million sound programming
packages that make the whole process "easier" - there are also a few in
linux but I think the raw stuff I'm interested in understanding is a bit
more simple in linux b/c of the way devices work in it.

So if anyone can point me at a place to start - maybe some really raw
source code for linux and windows - I would really appreciate it.
It is rather more involved than you think.

> The problem is that audio devices need to be fed a continuous stream of
> raw bits, whilst generally you want the processor to spend most of its
> time dealing with the rest of the program, like moving space invaders
> about the screen.
yes, and sadly, getting this right without blocking the app (or causing
annoying auditory artifacts) is a little harder than it may seem (at least
for single-threaded apps).
> So unless you want to do difficult multi-tasking programming at the device
> level, you need a certain layer of abstraction. The question then becomes
> "which one?". For space invaders you can probably get away with an
> interface that says "play sound". It puts a bleep or an explosion into the
> audio queue, returns control to you almost immediately, and a millisecond
> or so later you'll hear the sound on the speakers.
> For a more advanced use of audio, this isn't sufficient. You'll want to be
> able to cancel jobs, to submit long sequences instead of tiny clips, to
> change the volume, to stream sound in from a backing store, maybe to
> synthesise samples on the fly.
>
> So it becomes difficult to know what level of abstraction to use. Too low
> and you're doing messy parallel programming, too high and you're calling
> MIDI instruments and the like when you just want to say "play this".
a generally workable approach I had found was to implement a mixer, which
created temporary "mix streams". these streams basically just provided a
means for the mixer to demand a certain number of samples. the streams
themselves carried various info (current origin, spatial velocity, ...)
allowing for effects like doppler shifting (as well as the basic "things
are quieter when far away" effect).

these were typically structs making use of callbacks.

playing a sound typically involved creating a stream with the right
properties (handled automatically by various "play a sound" functions),
with the stream typically destroying itself automatically when done.

the interface also worked fairly well with playing audio from videos, and
from songs in the form of mp3s (typically, sound effects are just buffered
into ram, but songs are better streamed since they can take a decent-sized
chunk of memory to store).

anything else can be played so long as the right callbacks are provided.
note: callbacks may be passed a chunk of "user data" (as well as a stream
id), which is another useful trick here; it is put in the struct when
creating the stream. this is usually a pointer holding whatever the
stream-specific functions feel is important (GTK does something similar...).
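
roughly, such a stream struct might look like this (all names here are
made up for illustration, not taken from any particular library):

/* sketch of a mix stream: the callback + user-data pattern described
   above. names are hypothetical. */
#include <stddef.h>

typedef struct MixStream MixStream;
struct MixStream {
    float origin[3];    /* current position of the source */
    float velocity[3];  /* spatial velocity, used for doppler */
    float volume;

    /* called by the mixer when it wants 'count' more samples; returning
       fewer than asked signals the stream is done, after which it
       destroys itself. */
    size_t (*fill)(MixStream *s, float *out, size_t count);

    /* called on destruction, to free any stream-private state */
    void (*finish)(MixStream *s);

    /* opaque pointer for whatever the stream-specific callbacks need:
       a decoder handle, a file cursor, etc. */
    void *userdata;
};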
also note:
as an interesting effect of having doppler shifting and the like, not all
of the streams may be strictly in sync temporally, since moving away from
a stream causes it to play slower and moving towards it makes it play
faster (I make sounds just "cut out" near mach 1, since otherwise there
are annoying division-by-zero issues).

a related trick was to add a delay calculated from the distance of the sound
from the camera, such that, say, a distant explosion will take a little
while for the sound to hit (first we see the explosion, and then the sound
hits a short time later).

note that as an effect of the geometry: when one is far away from an audio
source one is out of sync with it (temporally and possibly also in terms
of rate), but as one moves closer the sync is regained, such that upon
reaching the source it is playing more or less in realtime (while sources
that were nearby originally have drifted out of sync).
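
the math behind both effects is small (a sketch; the function names and the
mach-1 cutoff threshold are just illustrative):

/* doppler and distance-delay math as described above */
#include <math.h>

#define SPEED_OF_SOUND 343.0f  /* m/s in air */

/* playback-rate factor for a source approaching the listener at
   radial_speed m/s (negative when receding); returns 0 near mach 1,
   i.e. the sound "cuts out" rather than dividing by ~zero. */
static float doppler_factor(float radial_speed)
{
    if (fabsf(radial_speed) >= 0.95f * SPEED_OF_SOUND)
        return 0.0f;
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_speed);
}

/* seconds before a sound emitted 'distance' metres away is heard */
static float distance_delay(float distance)
{
    return distance / SPEED_OF_SOUND;
}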

....

one notably missing effect though is echo modeling (or dealing with sounds
being otherwise blocked or distorted by geometry), since this is
computationally expensive (a simple "echo effect", "dampen effect", ...
being much cheaper).


Jul 8 '08 #3

kid joe wrote:
> So if anyone can point me at a place to start - maybe some really raw
> source code for Linux and Windows - I would really appreciate it.
Sound programming in C is involved and highly system dependent. A
cross-platform helper library would _be_ "a bunch of garbage added in
there", and would not (necessarily) reflect the way the sound hardware
works in practice.
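
For instance, the "raw" Linux route is the old OSS interface, where the
device really is just a file you read from (a minimal capture sketch, no
error handling; note that modern systems use ALSA, where /dev/dsp may not
exist at all):

/* read raw samples from the legacy OSS device /dev/dsp */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

int main(void)
{
    int fd = open("/dev/dsp", O_RDONLY);
    int fmt = AFMT_S16_LE, channels = 1, rate = 8000;
    short buf[4096];

    /* configure sample format, channel count and rate via ioctl */
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    /* a read blocks until the device has captured enough samples */
    read(fd, buf, sizeof buf);

    close(fd);
    return 0;
}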

The least crufty library I know of only does sound output --
http://xiph.org/ao/
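
A minimal libao output skeleton looks something like this (from memory, so
double-check against the headers; error handling omitted):

/* play a short burst of silence through libao's default driver */
#include <ao/ao.h>
#include <string.h>

int main(void)
{
    ao_initialize();

    ao_sample_format fmt;
    memset(&fmt, 0, sizeof fmt);
    fmt.bits = 16;
    fmt.channels = 2;
    fmt.rate = 44100;
    fmt.byte_format = AO_FMT_LITTLE;

    ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);

    char buf[4410 * 2 * 2];      /* 0.1 s of 16-bit stereo */
    memset(buf, 0, sizeof buf);  /* silence; real code puts PCM here */
    ao_play(dev, buf, sizeof buf);

    ao_close(dev);
    ao_shutdown();
    return 0;
}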

If you are interested in sound synthesis or analysis I would recommend ChucK
instead -- http://chuck.cs.princeton.edu/

Of course, there is always Pure Data (http://puredata.info/) or its
commercial sibling, Max/MSP (http://www.cycling74.com/)

-Sigmund
Jul 8 '08 #4

"kid joe" <sp******@spamtrap.invalidwrote in message
news:pa****************************@spamtrap.inval id...
> So if anyone can point me at a place to start - maybe some really raw
> source code for Linux and Windows - I would really appreciate it.
http://sourceforge.net/search/index....de=0&limit=100
Jul 9 '08 #5
