Yep. People don't need to understand the guts of TCP to know that they
get an initial burst of transfers and then things can slow down.
> New connections means _more_ network traffic, not less. It sure
> seems that exponential backoff was designed for local 10base2
> with no collision detection. Perhaps a greedy algorithm would
> make more sense on the Internet.
>
> In any case, delays beyond 2 seconds make congestion worse.
> Since real users will restart such connections, it is very bad
> to let the backoff grow so much. Never mind the pure theory.
> We have _humans_ involved, and they don't wait 2 minutes.
I couldn't agree more. Whatever the network theorems say, they have to
take account of user behaviour. In the "good old days" much Internet
traffic was batch/bulk transfer (Usenet, email, ftp). With more of the
traffic being http, the actions of humans need to be figured into
the theorems.
Now, if routers could tell applications the maximum bandwidth currently
available on the next hop, things might be better. Hmmm. Sounds like
ATM. Heresy ;-)
Regards,
Richard....
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/faq.html