Re: Full duplex using 2 sound cards.

Zygo Blaxell
28 Jan 1997 14:49:57 -0500

In article <Pine.LNX.3.95.970129130904.5460B-100000@batman>,
Hannu Savolainen <> wrote:
>On Tue, 28 Jan 1997, Ian Main wrote:
>> Hi, as you guessed, I'm trying to get full duplex sound using 2 sound
>> cards. One is a mad16, and the other a sb16. Both work fine by
>It's very tricky to do recording of CD-quality audio to a disk file in
>Linux. The process sometimes just gets blocked on disk I/O for too long.
>This seems to happen every time bdflush/sync starts posting dirty
>buffers to disk. When using 44.1 kHz/16 bits/stereo format the maximum
>possible DMA buffer (64k) becomes full after 0.37 seconds which causes an
>overrun. However with a fast (SCSI) disk it's possible to record without

I haven't managed to actually do this. The fastest machine I tried was
a P133 with 4-gig Micropolis fast-wide-SCSI on an AHA2940 with 128 megs
of RAM in single-user mode. It averaged ten buffer overruns per hour
if I made the machine think it had 16 megs of RAM, and the more RAM I tried
to use (using 'mem=' at the LILO prompt), the worse the performance got.
I've tried this with kernels in the 1.3.30's, 1.3.50's, 1.3.80's, and 2.0.

The slowest machine I've tried to do this with was a 386/40 with 4 megs
using an IDE drive literally pulled out of a trash bin. It averaged
twenty buffer overruns per hour, only twice as many as the P133, despite
being nearly ten times slower in both disk and CPU performance than the P133.

To me this indicates that the problem has absolutely nothing to do with
CPU or disk speed. Both machines have I/O performance more than three
times fast enough for the job (CD rate audio is *only* 176 KB/second,
and the IDE drive from the trash bin could do 730K/second writing with
iozone), and both can do the same recording job under DOS without any
overrun problems.

>To be able to do simultaneous recording and playback reliably you should
>implement extra buffering in the application. This probably means that you
>should use multiple threads (or processes communicating through a shared
>memory segment). One thread reads data from the device as fast as possible
>and stores it into a buffer (FIFO). The second thread reads data from the
>buffer and posts it to the disk file. For simultaneous playback you
>probably need more threads.
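The two-process scheme described above can be sketched in a page of C: a reader and a writer joined by a pipe, with dev_fd standing in for the sound device and out_fd for the disk file (both are placeholder descriptors for illustration, not real OSS calls):

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Copy dev_fd to out_fd through a pipe.  The child drains the device as
 * fast as it can; the pipe's kernel buffer absorbs short disk stalls
 * while the parent posts data to the file.  Returns 0 on success. */
int record(int dev_fd, int out_fd) {
    int pfd[2], status;
    char buf[4096];
    ssize_t n;
    pid_t pid;

    if (pipe(pfd) < 0 || (pid = fork()) < 0)
        return -1;
    if (pid == 0) {                       /* child: device reader */
        close(pfd[0]);
        while ((n = read(dev_fd, buf, sizeof buf)) > 0)
            if (write(pfd[1], buf, (size_t)n) != n)
                _exit(1);
        _exit(n < 0);
    }
    close(pfd[1]);                        /* parent: disk writer */
    while ((n = read(pfd[0], buf, sizeof buf)) > 0)
        if (write(out_fd, buf, (size_t)n) != n)
            return -1;
    close(pfd[0]);
    waitpid(pid, &status, 0);
    return (n < 0 || status != 0) ? -1 : 0;
}
```

The pipe alone gives you only a few K of kernel buffering, which is why a larger user-space FIFO helps.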

On anything faster than a 486-33, I've found that simple pipes are
sufficient, as long as nothing too intensive is running on the machine.
You just need to implement a few hundred K of ring buffer and run a
simple select() loop. I have a perl script that does this using three
processes with pipes between them, and while recording I do 'tail -f'
on the sound file to play over a network to another machine using ssh
(yes, encrypted) for monitoring the recording in real time. On a 486/33
I would have to use an unencrypted connection, but everything else is
the same. My point is that I can waste a *lot* of CPU time because it's
not the scarce resource here.
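The select() loop plus ring buffer amounts to something like the following C sketch (a minimal stand-in for that perl script; relay() and its ring size are illustrative names, not anything from the actual script):

```c
#include <stdlib.h>
#include <sys/select.h>
#include <unistd.h>

/* Copy in_fd to out_fd through a ring buffer of ring_size bytes:
 * read whenever the source has data and the ring has room, write
 * whenever the sink will take data.  A slow disk then costs buffered
 * bytes instead of dropped ones.  Returns 0 at EOF, -1 on error. */
int relay(int in_fd, int out_fd, size_t ring_size) {
    char *ring = malloc(ring_size);
    size_t head = 0, tail = 0, used = 0;
    int in_open = 1;

    if (ring == NULL)
        return -1;
    while (in_open || used > 0) {
        fd_set rfds, wfds;
        int maxfd = -1;
        ssize_t n;

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        if (in_open && used < ring_size) {
            FD_SET(in_fd, &rfds);
            if (in_fd > maxfd) maxfd = in_fd;
        }
        if (used > 0) {
            FD_SET(out_fd, &wfds);
            if (out_fd > maxfd) maxfd = out_fd;
        }
        if (select(maxfd + 1, &rfds, &wfds, NULL, NULL) < 0)
            goto fail;
        if (in_open && FD_ISSET(in_fd, &rfds)) {
            /* read into the contiguous free span up to the wrap point */
            size_t room = ring_size - used;
            size_t chunk = (head + room > ring_size) ? ring_size - head : room;
            n = read(in_fd, ring + head, chunk);
            if (n < 0) goto fail;
            if (n == 0) in_open = 0;          /* source hit EOF */
            head = (head + (size_t)n) % ring_size;
            used += (size_t)n;
        }
        if (used > 0 && FD_ISSET(out_fd, &wfds)) {
            size_t chunk = (tail + used > ring_size) ? ring_size - tail : used;
            n = write(out_fd, ring + tail, chunk);
            if (n < 0) goto fail;
            tail = (tail + (size_t)n) % ring_size;
            used -= (size_t)n;
        }
    }
    free(ring);
    return 0;
fail:
    free(ring);
    return -1;
}
```

In practice the ring would be a few hundred K; a tiny ring works the same way, just with more wraparounds.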

I've found by experimentation that only about 50-100K (depending on
the disk's seeking speed) is necessary to handle most of the read/write
delays; using more than 200K is totally unheard of in the several hundred
recording hours that I've logged under Linux. If the kernel had another
few hundred K of internal buffer that it could use for storing sound,
it would simplify applications a lot, especially considering that the
kernel can respond to the sound card in real time while processes might
be delayed by the scheduler. On a machine dedicated to this sort of
application the extra RAM wouldn't be missed at all, and it can probably
be released when not actually recording.

I record CD-rate audio at home while compiling huge C++ applications
(huge meaning that cc1 gets over 32 megs in RSS) using that ugly perl
script I mentioned above. As long as 'make' and the compiler are niced
to 10 or higher, the recording runs for hours without overruns.

For reasons unknown I absolutely *must* kill (vixie) crond, even though
there are no cron jobs to run and all it's doing is calling stat()
on a FIFO every minute; I also can't use Netscape on the same machine,
but I can understand why that is so. Other than that I can run an X
server and several clients and generally do a day's work on the same
machine that is recording CD-quality audio. It also works when the sound
card and disk space are separated by a reasonably busy (4 gigs/month)
office ethernet as long as nobody tries to copy a huge file.

>Btw, allowing some kind of buffer cache write-through would be a useful
>feature. Some applications (like data logging) work better when blocks
>written by an application are sent to the disk immediately. In this way
>the disk I/O caused by the application is distributed over a longer period
>of time and doesn't become a bottleneck. This kind of mode could be turned on
>using fcntl() or something like it so that it doesn't disturb normal
>applications. Is anybody interested in implementing this kind of feature
>in Linux?

Write-through doesn't seem to help; actually it seems to make things
a lot worse because now the writing process *must* wait for the disk,
instead of allowing the kernel to complete the write asynchronously.
We hunt down programs that use it (e.g. syslogd) and kill them so that our
machines stay usable. It seems to be designed to trade performance for
reliability, especially considering that the written data is stored in
the buffer cache anyway.

The problem with storing sound files has nothing to do with write
performance. The extra head motion caused by *reading* the block
bitmaps, to figure out where the next disk block should be allocated, is
what causes the delays that lead to buffer overruns. If the block bitmaps are
pre-fetched when extending a file then everything might work better.
A few K of extra buffering here and there also does the job well.
A final performance suggestion is to run '/sbin/update -6 3000' to
lengthen the time between metadata writes from 5 to 30 seconds, which
seems to improve performance when writing large files.

Zygo Blaxell. Unix/soft/hardware/firewall/security guru. 10th place, ACM Intl 
Prog Contest, 1995. Admin Linux+Solaris for food, Tshirts, anime. Pager: 1613
7608572. "I gave up $1000 to avoid working on windoze... *sigh*"-Amy Fong. "smb
is a microsoft toy, like a "child" protocol that never matured"-S Boisjoli.