Re: Behaviour of OOB in TCP ?

Oren Laadan (orenl@cs.huji.ac.il)
Thu, 29 Apr 1999 20:22:15 +0300 (IDT)


Hi,

> >Under BSD the first OOB byte is lost forever, and process B will
> >read 8 bytes before stopping at the second OOB, and nothing else (but
> >naturally the second OOB itself with MSG_OOB flag in recv()).
> >
> >Under Linux, the first OOB byte is inserted into the stream (!) and
> >thus process B will get 9 bytes before the second OOB. This would
> >still happen EVEN IF process B ALREADY read the first OOB byte but
> >did not yet consume the data prior to it (thus - the first OOB is
> >received twice: once as OOB and once in stream).
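
[ To make the scenario above concrete -- just a sketch with made-up
  data; "send_scenario" is a hypothetical helper and "fd" is assumed
  to be a connected TCP socket in process A:

      #include <sys/socket.h>

      void send_scenario(int fd)
      {
          send(fd, "aaaa", 4, 0);        /* 4 normal bytes */
          send(fd, "X", 1, MSG_OOB);     /* first urgent byte */
          send(fd, "bbbb", 4, 0);        /* 4 more normal bytes */
          send(fd, "Y", 1, MSG_OOB);     /* second urgent byte, sent
                                            before B consumed "X" */
      }

  Without SO_OOBINLINE, BSD gives process B the 8 normal bytes and
  then "Y" via recv(..., MSG_OOB); Linux gives B 9 bytes -- "X"
  re-appears inline -- before stopping at the mark. ]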
>
> This sounds dubious, but BSD's behaviour of losing the first urgent
> byte isn't always desired either. On the other hand, any application
> not tolerating loss of urgent data will probably use SO_OOBINLINE
> (telnet, for example). It is hard to say if the Linux behaviour is
> erroneous.
>
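
[ For reference, SO_OOBINLINE is just:

      int on = 1;
      setsockopt(fd, SOL_SOCKET, SO_OOBINLINE, &on, sizeof(on));

  After that, urgent bytes are left in the normal data stream and only
  the mark (SIOCATMARK) tells you where they are; recv() with MSG_OOB
  no longer applies. ]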
> >It seems like the behaviour of the TCP stack in Linux is broken (or I missed
> >something in the RFC). In that case, the fix would naturally be to change
> >the code to either (1) work like BSD and remove the byte from the stream
> >or (2) keep multiple OOB pointers (which is expensive and complicated).
>
> It doesn't work that way; OOB data isn't mapped well onto TCP's notion of
> urgent data, so it should not be used as such. Use a separate socket for
> real out-of-band data.

Basically you are right. But the real problem with the Linux way of
handling it is this: if your application has an internal protocol
(over TCP), as in a client-server paradigm, and it ALSO uses urgent
data (which is translated to OOB) -- then there is a problem! Suppose
the server uses OOB to notify the client of some event. If that byte
gets into the "normal" stream, it corrupts the "normal" data and
breaks the internal protocol.
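
To see why, here is a sketch of such a client reader -- the framing
(one length byte followed by the payload) and the name "read_msg" are
made up for illustration:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* read one <length><payload> message from the normal stream */
    int read_msg(int fd, unsigned char *buf)
    {
        unsigned char len;

        /* a stale urgent byte inserted into the stream is consumed
           here as a length (or payload) byte, and every message
           after it is mis-framed */
        if (recv(fd, &len, 1, 0) != 1)
            return -1;
        if (recv(fd, buf, len, MSG_WAITALL) != (ssize_t)len)
            return -1;
        return len;
    }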

In this sense, it is better to act like BSD: losing the VALUE of that
first OOB is OK, as long as the NOTIFICATION of the urgent event is
not lost. This is how urgent/OOB data is normally implemented in TCP
(almost no implementation guarantees delivery and correct handling of
the data itself, only of the notification). Therefore, programmers
EXPECT this kind of behaviour. I wouldn't expect to get more data in
the normal stream than was actually sent!
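
The pattern programmers expect looks roughly like this (a sketch,
error handling elided; "handle_urgent" is a made-up name, typically
called from a SIGURG handler or when select() flags an exception on
the socket):

    #include <sys/ioctl.h>
    #include <sys/socket.h>

    void handle_urgent(int fd)
    {
        char buf[512], c;
        int atmark = 0;

        /* drain normal data up to the urgent mark; a normal read
           never crosses the mark */
        while (ioctl(fd, SIOCATMARK, &atmark) == 0 && !atmark) {
            if (recv(fd, buf, sizeof(buf), 0) <= 0)
                return;
        }
        /* what matters is that we got HERE; the value of c is
           almost incidental */
        recv(fd, &c, 1, MSG_OOB);
    }

Losing the value of "X" in the earlier example is harmless under this
pattern; losing the notification, or polluting the normal stream, is
not.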

[and this is without arguing about the necessity/goodness/whatever of
using OOB for such purposes in the application].

Oren.
