Re: PPPOE Was (Re: >=pre5 OOPS on boot failure to open /dev/console)

From: jamal (hadi@cyberus.ca)
Date: Wed Apr 19 2000 - 20:25:56 EST


On Tue, 18 Apr 2000, Henner Eisen wrote:

> call/connection setup will be done by protocol's standard mechanisms.
> e.g. a user space process will do a connect() or accept() on a socket
> and then do a PPPIOCATTACH ioctl in order to attach the data path to
> a ppp channel (or something similar in order to attach it to a tunnel device).
>

Ok. So same thing as in pppox.

> I'm not sure what you mean by 'flow control', but it seems that we
> have different things in mind when talking about flow control. Of course,

Indeed we do.

> the end user's process, which has e.g. an open tcp connection which just
> happens to be routed over the ppp connection, will be flow controlled by means
> of the standard kernel mechanisms (the ppp / tunnel layer is not even
> aware of this).
>
> What I was thinking about was the low (device)-layer flow, which
> is controlled by netif_{start,stop,wake}_queue() for linux
> network devices or ppp_output_wakeup() for generic ppp_channels.

Ok, so now I understand you ;->

> E.g. X.25 (the same holds for most connection oriented sockets) uses
> a sliding window mechanism. If the send window is full, then we are
> not allowed to send further frames to the peer. Thus, we should do
> a netif_stop_queue() for a network device tunnel interface or return 'busy'
> from our ppp_channel's ppp_start_xmit() method. And likewise, we want
> to do a netif_wake_queue() or a ppp_output_wakeup when there is space
> in the send window again. It's that kind of flow control which I want to
> support.
>
> Of course we could also just discard any tx packet while the send window
is full. But this will likely result in worse performance. It's
> probably better to flow control the upper (net_device tunnel or
> ppp_generic channel), because those upper layers can be much smarter
> about what to do with the packet which we temporarily cannot accept for
> transmission.
>

I wonder if Alexey is reading this ;->
(ok there, he is cc'ed now ;-> )
So how are you sending the feedback all the way up to the transport protocol?
Take TCP, say, where it might be really useful to distinguish between local
congestion and congestion somewhere along the end2end path.
I haven't looked at your code, but I suspect you are using the
NET_XMIT_* codes as the source of your information about the local
congestion. And the big question is: what do you do when you get this
information? And if you were to throttle, for how long? etc etc...
I think it is a good idea, but you need to be bulletproof ...

cheers,
jamal

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/



This archive was generated by hypermail 2b29 : Sun Apr 23 2000 - 21:00:16 EST