Re: a joint letter on low latency and Linux

From: Sean Hunter (sean@uncarved.com)
Date: Sun Jul 02 2000 - 05:26:00 EST


On Fri, Jun 30, 2000 at 12:14:34PM -0400, Gregory Maxwell wrote:
> On Fri, 30 Jun 2000 yodaiken@fsmlabs.com wrote:
>
> [snip]
> > > there is no queue. this is real time FX processing or synthesis,
> > > remember ?
> >
> > I'm confused. I thought that buffering was possible.
> > Get data -> send it to processor -> output result
> > couple of seconds of
> > buffer here.
> [snip]
>
> Have you ever played a piano? Would it have been hard (impossible?) to
> play if it took a "couple of seconds" to hear the sound from the key you
> just pressed?

Have _you_ ever played a piano?

I respectfully submit that if you had, you'd know that there is a
delay between pressing a key and the note sounding. I'd have to
measure to be sure how long it is, but I'd guess it's probably more
than 5msec. All musical instruments have this delay, and musicians
learn good "time" by adjusting to the length of time it takes for an
instrument to "speak". Just because you get used to it happening
doesn't mean it doesn't happen.

Also, you are not telling the whole story about the way audio signal
processing works. While the results need to be heard quickly, the
processed signals are most often (eg in the case of reverb, chorus
and delay) played after the original sound anyway, so some buffering
should be possible.
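
To make that concrete, here is a minimal C sketch of a block-based
feedback delay. The constants and names are illustrative rather than
taken from any real driver or plugin API, but it shows that the effect
itself already stores half a second of past samples, so a modest
amount of buffering is inherent in the algorithm:

#include <stdio.h>

#define BLOCK     256          /* samples processed per callback */
#define DELAY_LEN 22050        /* 0.5s of delay at 44.1kHz */

static float delay_line[DELAY_LEN];
static int   write_pos;

/* Mix each input sample with the signal from DELAY_LEN samples ago,
 * and feed part of the output back into the delay line. */
static void process_block(const float *in, float *out, int n)
{
        int i;

        for (i = 0; i < n; i++) {
                float delayed = delay_line[write_pos];

                out[i] = in[i] + 0.4f * delayed;                /* wet mix  */
                delay_line[write_pos] = in[i] + 0.3f * delayed; /* feedback */
                write_pos = (write_pos + 1) % DELAY_LEN;
        }
}

int main(void)
{
        float in[BLOCK] = { 0 }, out[BLOCK];

        in[0] = 1.0f;                   /* a single impulse */
        process_block(in, out, BLOCK);
        printf("out[0] = %f\n", out[0]);
        return 0;
}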

There is also no reason for there to be a realtime constraint on
signal processing in the case of sound recording. A "ghosted" version
of the processed sound can be used for playback while the raw audio is
recorded, and the real signal processing can be done and recorded
later with no time constraints (almost like graphic rendering). If
the realtime "ghosted" version (which would be generated on the fly,
and could itself be recorded) is not as high-quality as the final
version, that would not be a problem, as it would only be used in the
musicians' cans during the recording, and so could be quite lossy
before it became noticeable.
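
A rough sketch of that ghosted signal path, in C. Here read_input(),
write_monitor() and ghost_effect() are hypothetical stand-ins for
whatever the real driver and plugin interfaces would be; the point is
just that the raw take goes to disk untouched while a cheap
approximation feeds the cans:

#include <stdio.h>

#define BLOCK 256

/* Hypothetical stand-ins for a real sound driver; here the "input"
 * is silence and the monitor output is simply discarded. */
static void read_input(float *buf, int n)
{
        int i;
        for (i = 0; i < n; i++)
                buf[i] = 0.0f;
}
static void write_monitor(const float *buf, int n) { (void)buf; (void)n; }

/* Cheap monitor-quality "effect": just attenuate. The full-quality
 * processing would be rendered from the raw file later, off-line. */
static void ghost_effect(const float *in, float *out, int n)
{
        int i;
        for (i = 0; i < n; i++)
                out[i] = 0.7f * in[i];
}

int main(void)
{
        FILE *raw = fopen("take1.raw", "wb");
        float in[BLOCK], mon[BLOCK];
        int blocks;

        if (!raw)
                return 1;

        for (blocks = 0; blocks < 1000; blocks++) {
                read_input(in, BLOCK);
                fwrite(in, sizeof(float), BLOCK, raw); /* raw take to disk */
                ghost_effect(in, mon, BLOCK);          /* lossy cans mix  */
                write_monitor(mon, BLOCK);
        }
        fclose(raw);
        return 0;
}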

Musicians are perfectly accustomed to working with a "raw" mix in
their cans and hearing the far superior results in the mixdown, and
this is a natural extension of that process. It also tallies better
with the real process in the studio, where temporary effects are used
in the cans but the final effects are chosen during the lengthy mixing
process after all the sound has been recorded.

When it came to playback after the original recording was done, an
asynchronous process or thread could be used to read ahead of the
playback position and "render" the raw audio into processed sound
buffers, which would then be played back in step with the "actual"
playback off disk. This would mean that you could provide nice big
buffers for your third-party plugins, and wouldn't need realtime for
them at all.
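
Something like the following toy producer/consumer in C with
pthreads. The queue depth, render_effects() and play_buffer() are all
made up for illustration; the renderer thread fills buffers ahead of
the playback cursor, so the plugins never face a realtime deadline:

#include <pthread.h>
#include <string.h>

#define BLOCK 4096
#define QUEUE 8                /* buffers of read-ahead */

static float ring[QUEUE][BLOCK];
static int head, tail, count;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Hypothetical: read raw audio from disk and run the plugin chain. */
static void render_effects(float *buf) { memset(buf, 0, sizeof(float) * BLOCK); }
/* Hypothetical: hand a finished buffer to the audio device. */
static void play_buffer(const float *buf) { (void)buf; }

static void *renderer(void *arg)
{
        (void)arg;
        for (;;) {
                pthread_mutex_lock(&lock);
                while (count == QUEUE)          /* queue full: wait */
                        pthread_cond_wait(&not_full, &lock);
                pthread_mutex_unlock(&lock);

                render_effects(ring[head]);     /* no deadline here */

                pthread_mutex_lock(&lock);
                head = (head + 1) % QUEUE;
                count++;
                pthread_cond_signal(&not_empty);
                pthread_mutex_unlock(&lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t t;
        int i;

        pthread_create(&t, NULL, renderer, NULL);
        for (i = 0; i < 100; i++) {             /* playback loop */
                pthread_mutex_lock(&lock);
                while (count == 0)              /* queue empty: wait */
                        pthread_cond_wait(&not_empty, &lock);
                pthread_mutex_unlock(&lock);

                play_buffer(ring[tail]);

                pthread_mutex_lock(&lock);
                tail = (tail + 1) % QUEUE;
                count--;
                pthread_cond_signal(&not_full);
                pthread_mutex_unlock(&lock);
        }
        return 0;                /* exiting main ends the demo */
}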

The case of making an instrument (eg a synth) or an effects processor
for live use (eg a guitar effects unit) is pretty different from this,
and clearly (in my mind anyway) is a hard realtime project.

Sean Hunter
