Re: kernel thread support - LWP's

Richard Gooch (rgooch@atnf.csiro.au)
Fri, 16 Jul 1999 09:18:13 +1000


[Looks like you guys had fun while the .au feed was busted;-)]

Benjamin LaHaise writes:
> On Thu, 15 Jul 1999, Nate Tuck wrote:
>
> > Since you've obviously talked to a lot of good people on this, I was
> > wondering if you could talk about the only issue I haven't heard you bring
> > up which is frequently brought up by the LWP/user-thread-scheduler folks.
> > What about kernel run-queue length? It seems that I've heard the argument
> > made that LWP's keep you from spending a long time in the kernel scheduler,
> > which I could see might actually be a good thing.
>
> If run queue length is an issue, then you're using too many threads.
> Under what reasonable circumstances will you have more than a couple
> of threads ready for scheduling? I see threads helping in two ways:
> providing a means of concurrency during blocking io (read: disk
> access), and utilising SMP. Using an event driven programming model
> should be able to get peak performance with a number of threads
> equal to the number of cpus + a few spares for blocking io. The
> only time there should be more active threads than can be run is
> when an io request has just completed, and that should trigger only
> a short lived response. Does that make sense?

It used to. I've recently come back from CALU, where DaveM talked
about TCP/SMP scaling. Very interesting talk (basically a
high-bandwidth brain dump: fun trying to keep up).

Dave is looking at moving most of the packet processing into process
context. This means that (assuming I grok it) you do just enough
processing in the NIC's interrupt handler to work out which socket the
packet belongs to, then dump the packet onto that socket's backlog
queue and wake the process. It's then the process that does all the
TCP processing. This has major benefits for scalability.
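
Something like the userspace toy below is how I picture the split
(pthreads standing in for the hard-irq vs. process context
distinction; every name is made up for illustration, no error
handling, compile with gcc -pthread). The "interrupt" side does no
more than demultiplex to a socket, queue the packet and wake the
owner; the per-connection thread then does the expensive protocol
work:

/*
 * Userspace toy, NOT kernel code: the "interrupt" path only finds the
 * right socket and queues the packet on its backlog, then wakes the
 * owning "process"; the heavy protocol work happens in the
 * per-connection thread.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct packet {
	struct packet *next;
	int            len;		/* pretend payload */
};

struct socket_ctx {
	pthread_mutex_t lock;
	pthread_cond_t  wakeup;
	struct packet  *backlog;	/* per-socket backlog queue */
	int             closing;
};

/* "Interrupt handler": just enough work to demultiplex, then enqueue. */
static void irq_rx_demux(struct socket_ctx *sk, int len)
{
	struct packet *p = malloc(sizeof(*p));

	p->len = len;
	pthread_mutex_lock(&sk->lock);
	p->next = sk->backlog;		/* push; batch order doesn't matter here */
	sk->backlog = p;
	pthread_cond_signal(&sk->wakeup);	/* wake the owning "process" */
	pthread_mutex_unlock(&sk->lock);
}

/* "Process context": blocks on its socket, drains the backlog, and does
 * the expensive protocol processing outside the interrupt path. */
static void *connection_thread(void *arg)
{
	struct socket_ctx *sk = arg;

	for (;;) {
		pthread_mutex_lock(&sk->lock);
		while (!sk->backlog && !sk->closing)
			pthread_cond_wait(&sk->wakeup, &sk->lock);
		struct packet *work = sk->backlog;
		sk->backlog = NULL;
		int done = sk->closing && !work;
		pthread_mutex_unlock(&sk->lock);

		if (done)
			return NULL;

		while (work) {
			struct packet *next = work->next;
			printf("processed %d byte packet\n", work->len);
			free(work);
			work = next;
		}
	}
}

int main(void)
{
	struct socket_ctx sk = { PTHREAD_MUTEX_INITIALIZER,
				 PTHREAD_COND_INITIALIZER, NULL, 0 };
	pthread_t tid;

	pthread_create(&tid, NULL, connection_thread, &sk);
	irq_rx_demux(&sk, 100);		/* pretend three packets arrive */
	irq_rx_demux(&sk, 200);
	irq_rx_demux(&sk, 300);

	pthread_mutex_lock(&sk.lock);
	sk.closing = 1;
	pthread_cond_signal(&sk.wakeup);
	pthread_mutex_unlock(&sk.lock);
	pthread_join(tid, NULL);
	return 0;
}

Obviously the real thing has to worry about backlog limits, softirqs
and so on; this is just the shape of it.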

The model we may then end up with is one thread per connection,
blocking, with the kernel multiplexing the packets, rather than
Ncpu + Nspare threads with user space multiplexing the packets. In
that case the argument for pure kernel-based threads is stronger, as
you don't want the extra overhead of supporting user-mode context
switches.
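
To make the contrast concrete, the one-thread-per-connection shape is
just the obvious blocking loop below (plain userspace C, error
handling and shutdown stripped, port number arbitrary); the kernel
does all the demultiplexing. The alternative is Ncpu + Nspare threads
sitting in select()/poll() with a user-space event loop handing the
work out.

/*
 * One blocking thread per connection: each thread sleeps in read()
 * and the kernel does all the demultiplexing.  Userspace
 * illustration only.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *per_connection(void *arg)
{
	int fd = (int)(intptr_t)arg;
	char buf[4096];
	ssize_t n;

	/* The kernel wakes this thread only when its socket has data;
	 * there is no user-space event loop juggling descriptors. */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		write(fd, buf, n);	/* trivial echo stands in for real work */

	close(fd);
	return NULL;
}

int main(void)
{
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family      = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port        = htons(7777);

	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
	listen(lfd, 16);

	for (;;) {
		int connfd = accept(lfd, NULL, NULL);
		pthread_t tid;

		/* one blocking thread per accepted connection */
		pthread_create(&tid, NULL, per_connection,
			       (void *)(intptr_t)connfd);
		pthread_detach(tid);
	}
}

Note that most of those threads are asleep in read() most of the time,
so a large thread count doesn't automatically mean a long run queue.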

Regards,

Richard....

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/