Re: Behaviour of OOB in TCP ?

Craig Milo Rogers (rogers@isi.edu)
Thu, 29 Apr 1999 12:26:17 -0700


>> It doesn't work that way; OOB data isn't mapped well onto TCP's notion of
>> urgent data, so it should not be used as such. Use a separate socket for
>> real out-of-band data.
>
>Basically you are right. But the real problem with the Linux way of
>handling it is this: if your application has an internal protocol (over
>TCP), like in a client-server paradigm, and it ALSO uses urgent data
>(which is translated to OOB) -- then there is a problem! Suppose the
>server uses OOB to notify the client of some event. If that byte gets
>into the "normal" stream - it corrupts the "normal" data and breaks the
>internal protocol.

As I said before: TCP, as originally specified, does not
directly support out-of-band data; it supports "urgent" data (using
its own semantics for "urgent"). If an application has "urgent" data
(matching the TCP semantics), then it shouldn't attempt to translate
it to OOB data (as you suggested above) before sending it via TCP! If
an application wants out-of-band data, it *must* encode the data
in-band, using an appropriate mechanism (escape sequence, octet count,
flattened structure). Please read the TELNET Options RFCs for details
on one way to do this (note: if the BSD folk had kept the TELNET
option encoding, but not the negotiation, the world might have been a
happier place).
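
For concreteness, here is a minimal sketch (sender side only) of what
such an in-band encoding might look like, loosely in the spirit of the
TELNET IAC convention mentioned above. The ESC value and the message
code are made up for illustration, not taken from any standard, and
error handling is kept to the bare minimum:

    /*
     * Hypothetical in-band encoding: a literal ESC octet in normal
     * data is sent doubled; ESC followed by anything else introduces
     * an "out-of-band" message carried inside the ordinary stream.
     */
    #include <unistd.h>
    #include <sys/types.h>

    #define ESC        0xff    /* made-up escape octet            */
    #define MSG_EVENT  0x01    /* made-up event-notification code */

    /* Send normal application data, doubling any ESC octets. */
    ssize_t send_normal(int fd, const unsigned char *buf, size_t len)
    {
            size_t i;

            for (i = 0; i < len; i++) {
                    if (buf[i] == ESC) {
                            unsigned char esc[2] = { ESC, ESC };
                            if (write(fd, esc, 2) != 2)
                                    return -1;
                    } else {
                            if (write(fd, &buf[i], 1) != 1)
                                    return -1;
                    }
            }
            return (ssize_t)len;
    }

    /* Send an "out-of-band" event notification in-band. */
    ssize_t send_event(int fd, unsigned char event)
    {
            unsigned char msg[3] = { ESC, MSG_EVENT, event };

            return write(fd, msg, sizeof(msg));
    }

The receiver simply scans for ESC: a doubled ESC is a literal data
byte, ESC followed by MSG_EVENT is an event. The point is that all of
this lives inside the ordinary TCP byte stream; TCP itself never has
to know about it, and no urgent-pointer semantics are involved.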

A simple way of putting this is that multiplexing "OOB" and
"normal" data in a single stream is simply not a service provided by
TCP (as originally defined). An application that expects TCP to
provide this service is mistaken.

Craig Milo Rogers
