Re: Behaviour of OOB in TCP ?

Craig Milo Rogers (rogers@isi.edu)
Thu, 29 Apr 1999 11:18:00 -0700


>It seems like the behaviour of the TCP stack in Linux is broken (or I missed
>something in the RFC). In that case, the fix would naturally be to change
>the code to either (1) work like BSD and remove the byte from the stream
>or (2) keep multiple OOB pointers (which is expensive and complicated).
>
>Comments ?

Given the caveat that I'm not up to date on documented "best
practice", and maybe I'm full of it and my memory is mush:

Linux is right, and the rest of the (BSD) world is wrong.
Nyah, nyah nyah, nyah, nyah! Don't you love it when you can say that?
:-)

More seriously, the rsh/rlogin protocol *requires* the BSD
semantics to work reliably (this is why I tell people that rsh/rlogin
aren't TCP based -- they require a transport layer feature that's not
part of TCP). On the other hand, they *are* useful applications
(well, on an isolated net, or behind a firewall), and as a practical
matter it makes sense to support them... perhaps when BSD socket
compatibility mode is set. I vaguely recall that there's an
off-by-one error somewhere in this, too.
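
A concrete way to see the difference (a rough sketch in C of my own,
not the rsh/rlogin source; 'fd' stands for an already-connected TCP
socket with SO_OOBINLINE left off): under the BSD semantics the byte
fetched with MSG_OOB has been removed from the normal stream, while a
stack that also leaves it in the stream delivers it twice from the
application's point of view. The off-by-one, if my memory serves, is
the old disagreement over whether the urgent pointer names the last
urgent octet or the octet just past it.

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

void read_urgent_byte(int fd)
{
	char buf[512];
	char oob;
	int at_mark = 0;
	ssize_t n;

	/* Consume ordinary data up to the urgent mark; the kernel
	 * won't let a normal read() cross it. */
	while (ioctl(fd, SIOCATMARK, &at_mark) == 0 && !at_mark) {
		n = read(fd, buf, sizeof(buf));
		if (n <= 0)
			return;
		/* ... handle or discard the in-band data ... */
	}

	/* Fetch the single urgent byte out of band. */
	if (recv(fd, &oob, 1, MSG_OOB) == 1)
		printf("urgent byte 0x%02x\n", (unsigned char)oob);
}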

<historical comment>
The problem is that TCP was developed using the concept of
"urgent" data, rather than "out of band" data. The "urgent" flag was
intended to mark the presence of urgent data, not to delimit it; the
delimiting was supposed to take place in the stream itself.
The BSD protocol designers violated that design criterion.
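
To make that concrete, here is a small sketch of my own (not from any
RFC, and not the telnet source) of how the original model was meant to
be used, in the spirit of telnet's Synch: the receiver learns that
urgent data is pending, then scans the ordinary stream for an in-band
mark (real telnet uses the IAC DM pair; the sketch just looks for the
DM byte). The urgent pointer never removes or delimits anything by
itself.

#include <stdio.h>
#include <unistd.h>

#define DATA_MARK 0xF2	/* telnet's DM code; any agreed in-band marker works */

/* Called once the application learns (via SIGURG, or an exception
 * indication from select()) that the peer has flagged urgent data
 * on socket 'fd'. */
void enter_urgent_mode(int fd)
{
	unsigned char c;

	/* Skip ordinary data until the in-band delimiter shows up;
	 * nothing is ever pulled "out of band". */
	while (read(fd, &c, 1) == 1) {
		if (c == DATA_MARK) {
			printf("reached the data mark, back to normal\n");
			return;
		}
	}
}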

The BSD designers (Bill Joy et al.) were working on a kernel
interface that could encompass many different network transport layers,
such as TCP/IP and ISO/OSI. They designed portable applications to
run on top of that interface, and they wanted to use "out-of-band"
data as a unifying concept, not "urgent" data. They even went so far
as to implement OOB data for TCP in the BSD network stack, using
nonstandard TCP options, and released a version of Unix with it
without first coordinating with the official IP/TCP developers. This
move was widely viewed with disfavor in the IP/TCP development
community.

In response, the BSD developers said <<we have found a way to
implement our software on TCP without using our nonstandard options>>.
The fact that they did this by subtly breaking TCP itself was not
immediately understood by the IP/TCP developers.

Of course, BSD could have achieved OOB data by using a second
TCP port (or something even more extreme, such as a UDP port). This
would have worked well with the TCP spec, but would have consumed a
second port; even in 1981, it was recognized that the 16-bit port
space was a bit tight if you made frequent connections. Or rsh/rlogin
could have used a count or an escape sequence to delimit out-of-band
data within the stream itself... but I believe they were trying to
avoid the inefficiencies of that approach.
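
For what it's worth, the escape-sequence idea would have looked
roughly like this (a sketch of my own, with a made-up ESC value in the
style of telnet's IAC); the inefficiency is that ordinary data
containing ESC costs an extra byte, and the receiver has to examine
every byte it reads:

#include <stddef.h>

#define ESC 0xFF	/* command codes must differ from ESC itself */

/* Encode one byte of ordinary data into the shared stream. */
size_t put_data(unsigned char c, unsigned char *out)
{
	size_t n = 0;

	if (c == ESC)
		out[n++] = ESC;	/* a doubled ESC means a literal ESC byte */
	out[n++] = c;
	return n;
}

/* Encode one out-of-band command byte into the same stream. */
size_t put_command(unsigned char cmd, unsigned char *out)
{
	out[0] = ESC;		/* ESC followed by a non-ESC byte is a command */
	out[1] = cmd;
	return 2;
}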

</historical comment>

Craig Milo Rogers
