MPP, MP+, Bonding (was Re: cli() / sti())

Doug Ledford
Thu, 19 Jun 1997 23:57:43 -0500

> "Russell Coker - mailing lists account" writes:
> > For those who are interested I'm trying to make the eql driver handle
> > multiple eql devices. A single eql device is not enough for an ISP server,
> > I plan to raise the limit to 256.
> Speaking of EQL, I just looked through the PPP source and noticed there
> are no references to MPP, MP+ and BONDING... now that most ISPs have
> equipment that supports multiple modems it would be a good idea to hook
> it up. AFAIK, it just adds a MP tag to the endpoint of a connection...
> anyone have more specific documentation?

The standard PPP driver only works on tty devices, the modified SyncPPP
driver for ISDN only works on ISDN devices. The ISDN version includes MPP
support. Personally, I think it would be best if the two branches were
merged such that the regular PPP driver included the support needed for
SyncPPP, MP, Bonding, etc. and that the current ISDN devices presented the
kernel with tty type devices. This then creates a second issue. The
current ppp->tty layer interface is not very well suited to MP and Bonding.
The reason is that data is buffered in the tty layer, so the PPP driver
(and consequently the EQL driver or any other bonding driver) doesn't know
how much data is actually buffered at any point in time. The interface
needs to be modified so that the PPP driver knows how much of the data it
has sent to the tty layer is still sitting in buffers and how much has
actually gone out the pipe, and the EQL driver needs to be modified to use
this information when deciding which pipe to send through. This is needed
to compensate for the fact that with analog modem bonding, different lines
can have different actual transfer rates, and different packets can yield
different compression ratios and transmission times. The current EQL
driver's scheduling method is mediocre at best for analog modem lines
(although it should work fairly well for non-compressed ISDN channels).
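A minimal sketch of the kind of scheduling decision argued for here (all
names are invented; no such interface exists in the current drivers):
instead of weighting lines by nominal speed, send the next frame down the
line with the least data still in flight between the PPP driver and the
wire.

```c
/* Hypothetical balancing rule: in_flight[i] would be the bytes the
 * PPP driver has handed to tty i minus the bytes that tty reports as
 * actually transmitted.  Pick the least-loaded line, regardless of
 * each line's nominal speed or compression ratio. */
int pick_line(const long *in_flight, int nlines)
{
    int i, best = 0;

    for (i = 1; i < nlines; i++)
        if (in_flight[i] < in_flight[best])
            best = i;
    return best;
}
```

With real in-flight counts, a line stuck in a retrain simply stops being
picked, because its backlog never shrinks.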

Additionally, there would need to be some method of inter-device
communication such that as one MP channel comes up for a particular
connection (or down for that matter) it can tie itself to an already
existing line and add itself to the slave sequence. Interface teardown
would then need to ensure that if the primary (original) line, the one
that determined the startup IP address (assuming dynamic address
assignment on the server side), goes down, the address is not re-used
prematurely. At least for a server side implementation, there are several
significant obstacles to be overcome (I mention this since ISP use has
already been mentioned).

Let's take a scenario real quick from a server side point of view:

Someone calls in using an Ascend Pipeline 50. They only call with one
channel of an ISDN BRI. This first connection then can be assigned whatever
address it gets out of the pool (most linux based server side
implementations assign this based on the tty they called in on). At some
point in time, the line becomes congested, so the Ascend activates the
second channel with an MP call into the server. In this case, the answering
pppd needs to know that this is an MP call instead of a regular PPP call,
needs to ignore the typical tty-assigned IP address, find the original PPP
channel that was called in on, and configure itself to match the original
call-in channel. There is also the possibility that for sequencing reasons,
the first PPP channel may need to have VJ header compression disabled on the
fly, with the second channel turning it off from the very beginning.
Then, as the second channel comes up under IPCP, it needs to add itself as a
slave either to an MP device (in which case the first channel would also
need to add itself to this master device) or as a slave to the original PPP
device and allow that PPP device to handle load balancing. If we add to the
original PPP device, then at least the routing table won't need to be
updated to change the route from the original PPP device to a new MP
device handling load balancing. Now, what if line two is still connected
and line one disconnects (unlikely with an Ascend on ISDN, but very common
with analog lines and modems)? We suddenly have lost the channel that
provided our original IP address assignment, as well as our master device if
the PPP protocol is doing its own scheduling, or one of the slaves if a
separate MP device is doing its scheduling. Having lost this device, it is
now entirely possible that some other user could call in on the line
normally assigned the IP address we are still using. Without some mechanism
to do real IP address pooling, this can be a problem. Current solutions
include the notion that every MP capable user is assigned a static IP to
override the dynamic assignment, but this is ugly. If you are using an MP
device as the master device for routing purposes, then it would be entirely
possible to end up with a single PPP device and an MP device + slaves
configured for the same IP address and fighting over routing rights. Ugly.
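The bundle-join step in that scenario could be modeled roughly like this
(a user-space sketch with invented names; a real implementation would key
on the LCP endpoint discriminator from RFC 1990): the answering pppd looks
for an existing bundle matching the caller's endpoint discriminator and
attaches as a second channel, rather than taking the tty's default
address.

```c
#include <string.h>

#define MAX_BUNDLES 16

struct mp_bundle {
    int  in_use;
    char endpoint[20];   /* MP endpoint discriminator */
    int  nchannels;      /* channels currently in this bundle */
};

static struct mp_bundle bundles[MAX_BUNDLES];

/* Return the bundle this endpoint already belongs to, or NULL if the
 * call should start a fresh bundle. */
struct mp_bundle *find_bundle(const char *endpoint)
{
    int i;

    for (i = 0; i < MAX_BUNDLES; i++)
        if (bundles[i].in_use &&
            strcmp(bundles[i].endpoint, endpoint) == 0)
            return &bundles[i];
    return NULL;
}

/* Second call from the same peer attaches as a slave; a new peer gets
 * a fresh bundle (NULL if the table is full). */
struct mp_bundle *join_or_create(const char *endpoint)
{
    int i;
    struct mp_bundle *b = find_bundle(endpoint);

    if (b) {
        b->nchannels++;
        return b;
    }
    for (i = 0; i < MAX_BUNDLES; i++)
        if (!bundles[i].in_use) {
            bundles[i].in_use = 1;
            strncpy(bundles[i].endpoint, endpoint,
                    sizeof(bundles[i].endpoint) - 1);
            bundles[i].endpoint[sizeof(bundles[i].endpoint) - 1] = '\0';
            bundles[i].nchannels = 1;
            return &bundles[i];
        }
    return NULL;
}
```

The same lookup also answers the teardown question: a bundle's resources
(address, master device) stay alive while nchannels is nonzero, whichever
channel happens to drop first.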

What we really need here I think is to "pare down" the pppd daemon to just
the essential elements to bring the initial line up, then have a central
mpppd daemon that runs as any other server process that handles arbitration
between various pppd instances. It could also handle the job of notifying
an existing PPP device that it is being switched to MP as a new call has
arrived (or, if it already knows, then it can handle setting up a master
device and adding both PPP devices as slaves to the master device).
Additionally, each pppd process can register that it holds any particular IP
address, and not until all instances of that address are released will the
daemon allow it to be re-used. In this case, the daemon itself would dole
out the IP address assignments for lines out of a pool of addresses. It
should also be fairly simple to allow the central daemon to be smart enough
to know about users with static IPs and users with small subnets across
their link and handle all routing table updates when the links come up.

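The address-pooling idea could be modeled as reference-counted leases,
roughly like this sketch (pool contents and function names are invented
for illustration): the central daemon returns an address to the pool only
when every pppd instance holding it has released it.

```c
#define POOL_SIZE 4

struct ip_lease {
    unsigned long addr;   /* address in the pool */
    int refs;             /* pppd instances currently holding it */
};

static struct ip_lease pool[POOL_SIZE] = {
    { 0xC0A80001UL, 0 }, { 0xC0A80002UL, 0 },
    { 0xC0A80003UL, 0 }, { 0xC0A80004UL, 0 },
};

/* Hand out a free address for a new connection, or 0 if exhausted. */
unsigned long ip_acquire(void)
{
    int i;

    for (i = 0; i < POOL_SIZE; i++)
        if (pool[i].refs == 0) {
            pool[i].refs = 1;
            return pool[i].addr;
        }
    return 0;
}

/* A second MP channel of the same bundle takes another reference. */
int ip_hold(unsigned long addr)
{
    int i;

    for (i = 0; i < POOL_SIZE; i++)
        if (pool[i].addr == addr && pool[i].refs > 0)
            return ++pool[i].refs;
    return -1;
}

/* The address becomes reusable only when the last holder releases it,
 * so losing the primary line cannot hand our address to a new caller. */
int ip_release(unsigned long addr)
{
    int i;

    for (i = 0; i < POOL_SIZE; i++)
        if (pool[i].addr == addr && pool[i].refs > 0)
            return --pool[i].refs;
    return -1;
}
```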
That's the user land side of the work. MPP, MP+, and BONDING support (at
least in the part of recognizing the needed control messages) would need to
be added to the PPP driver as well as the pppd daemon. The PPP driver would
need to be modified so that it kept track of bytes queued down various
devices and the queue counting mechanism should be smart enough to know if
the tty layer has actually transmitted data given to it, or if the data is
sitting in a buffer somewhere waiting on flow control to change. This
information should be readily accessible to the load balancing code. The
load balancing code should not be pulling tricks such as once a second
subtracting the estimated bytes per second transfer value from the
bytes_queued value. This type of balancing was all that was available at
the time, but to be done right, it must be gotten rid of and the PPP code
must be properly modified (the current method results in horrible stalls
whenever a modem is in the middle of a retrain sequence or even just a train
up/train down sequence).
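The accounting interface being argued for might look like the following
sketch (the callback names are invented; nothing like this exists in the
current drivers): the tty layer reports completions back to the PPP
driver, so during a modem retrain the backlog simply grows and the
balancer avoids that line, instead of a once-a-second speed estimate
masking the stall.

```c
/* Hypothetical per-channel accounting shared between the PPP driver
 * and the tty layer.  queued counts bytes handed down; completed
 * counts bytes the tty layer reports as actually transmitted. */
struct chan_account {
    long queued;
    long completed;
};

/* PPP driver side: called when a frame is handed to the tty layer. */
void account_queued(struct chan_account *a, long n)
{
    a->queued += n;
}

/* tty layer side: called as data actually drains out the port.
 * During a retrain nothing drains, so backlog() keeps rising and the
 * load balancer can simply stop feeding this channel. */
void account_completed(struct chan_account *a, long n)
{
    a->completed += n;
}

long backlog(const struct chan_account *a)
{
    return a->queued - a->completed;
}
```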

* Doug Ledford                      *   Unix, Novell, Dos, Windows 3.x,     *
*    873-DIAL  *     WfW, Windows 95 & NT Technician   *
*   PPP access $14.95/month         *****************************************
*   Springfield, MO and surrounding * Usenet news, e-mail and shell account.*
*   communities.  Sign-up online at * Web page creation and hosting, other  *
*   873-9000 V.34                   * services available, call for info.    *