Re: Is something wrong with 2.0.33??

Mark Gray (markgray@iago.nac.net)
08 Jan 1998 04:52:28 -0500


Chaiya Intasoie <chaiya@rs.mahidol.ac.th> writes:

>
> Hi;
> Please forgive me if my question is off-topic.
> I just compiled kernel 2.0.33 two weeks ago. I noticed that
> when I try to download a big file off the net, it looks good at first,
> but as the download gets bigger it just hangs for a
> long time. I have tried this several times and get the same result.
> Is anything wrong with my kernel, or do I need a patch?
>
> Thank you.
>
> Chaiya Intasoie

I have seen this behavior with earlier kernels as well, and it appears
to me to be traceable to one of two possible things:

1. A compression disagreement between your modem and your ISP's modem
on highly compressed files --- there have been times when I have found
it necessary to FTP certain files to my shell account, uuencode them,
and then FTP them to my home computer; otherwise they would never make
it. This has not happened recently, and may have been fixed by
their supplier -- Lucent.
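Spelled out, the workaround looks roughly like this (a sketch of the
sessions, not a script -- the filenames are placeholders; uuencode and
uudecode come with the sharutils package):

```
# On the ISP shell account:
ftp> get somefile.tar.gz            # fetch the file to the shell account
$ uuencode somefile.tar.gz somefile.tar.gz > somefile.uu
                                    # armor the binary as plain 7-bit text

# From the home machine:
ftp> ascii
ftp> get somefile.uu                # a text transfer survives the modems
$ uudecode somefile.uu              # recreates somefile.tar.gz
```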

2. I have watched a tcpdump during other transfers and have found
that my ACKs after a lost packet never seem to make it to the
site I am FTPing from. Doing a traceroute to the site at this point
will often kick the FTP going again (and I have a little shell script
which watches the file I am getting and does a traceroute should
it ever stall). Idle speculation: the route back to the site has
gotten clobbered, and the ICMP packet has a better chance of
discovering a new route than a TCP ACK does. Now I constantly run
this in a very large rxvt:

# -e: print link-level headers; -l: line-buffer stdout; -vv: very
# verbose; -s 256: capture 256 bytes per packet; -i ppp0: PPP interface
tcpdump -e -l -vv -s 256 -i ppp0 | tee --append /tmp/tcplogfile

(I do this mostly because I am a bit paranoid and like to know when I
am being probed.)
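For the curious, a minimal sketch of such a stall-watcher (this is a
reconstruction, not my exact script; the 30-second interval and the
FILE/HOST arguments are placeholders):

```shell
#!/bin/sh
# watchstall.sh -- nudge a stalled FTP download by tracerouting the
# remote site, on the theory that the ICMP traffic has a better chance
# of discovering a new route back.  Usage: watchstall.sh FILE HOST

filesize() {
    # Byte count of a file; reading from stdin keeps wc from
    # printing the filename alongside the number.
    wc -c < "$1"
}

watch_stall() {
    file=$1
    host=$2
    last=-1
    while :; do
        size=$(filesize "$file")
        if [ "$size" -eq "$last" ]; then
            # No growth since the last check -- assume a stall.
            echo "stalled at $size bytes; tracerouting $host"
            traceroute "$host" > /dev/null 2>&1
        fi
        last=$size
        sleep 30
    done
}

# Only start the loop when invoked with a file and a host.
if [ $# -eq 2 ]; then
    watch_stall "$1" "$2"
fi
```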

This is mostly an additional data point, I guess (I would like to see
it solved). Good luck.