> > You can mathematically prove that an exponential backoff is the
> > minimal safe backoff for congestion avoidance
>
> You can mathematically prove many strange things when you make
> incorrect assumptions. If that proof were relevant, the Internet
> would not suffer from constant congestion.
:) [but true :(]
> Oddly however, the Internet suffers from constant congestion.
> You can't have a useful proof that contradicts the real world.
>
> Consider a 2-minute delay to be a complete failure, because it is.
> Anything more than a few seconds is not useful. One reasonable (?)
> hack would be to limit that delay to not more than double the
> longest recorded round trip.
What about going from an exponential to a linearly increasing
timeout? I.e. timeouts of 1, 2, 3, 4, etc. Since most
'timeouts' will be caused by dropped packets, and since
a lot of paths span a _lot_ of hops, this algorithm might
match reality rather better. The upper bound should probably
be at 4 or 8 times the RTT.
Rik.
+-------------------------------------------------------------------+
| Linux memory management tour guide. H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader. http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+