Re: More PCI races...

From: George Anzinger (
Date: Mon Jun 19 2000 - 16:56:04 EST

David Hinds wrote:
> On Fri, Jun 16, 2000 at 05:45:33PM -0700, George Anzinger wrote:
> > > That's not right at all. The scheduler certainly checks for bottom
> > > half stuff, just like it checks for expired timers. However, neither
> > > timers nor bottom halves run in process context. Neither may sleep.
> > Guess it depends on what you mean by process context. What I was saying
> > is that they don't run in interrupt context, i.e. they are called from
> > schedule.
> The fact that they are called from schedule() is not diagnostic. They
> are not running in process context. "interrupt context" is perhaps
> misleading: call it the "you can't schedule because there is no active
> process" context if you like.
I think we agree on the state one is in; the name is another matter. A
test for the current task (either of the two tests) will come up with
an arbitrary task, and this is true on most systems. The only thing
making this state different is that the bottom half count is bumped,
and this is done on the way in.

The original discussion here had to do with the notion of calling bottom
half code from a task that could run at a lower priority, thus, for
example, deferring LAN traffic handling in favor of something more
important, such as controlling the reactor or whatever. Maybe we
could point the discussion toward exploring that issue. For example,
why not call bottom halves from a task? For argument's sake, let's
assume that we have divided up the bottom half tasks so that we know
what we are deferring and why.

For example, the original discussion was around timers. I submit that
there is a wide range of timers and timer uses, but that all have
fairly clearly defined priorities, derived from the task that started
the timer. We could also note that the timer resolution is so low (1 or
10 ms) that deferring the timer pop, especially if the task that is to
take advantage of it is of low priority, is no big deal. On a system I
worked on in the past, the timer interrupt just put the time structure on
a list (FILO) of expired timers. This list was serviced by a kernel
thread that _only_ did expired timer clean-up. The thread's priority was
adjusted to track the highest-priority timer in the list. Thus, if the
timer were for the LAN keep-alive, for example, and LAN traffic was low
on the priority list, the thread would run at a priority such as to keep
it out of the way of more important things.

For this discussion, we could explore the problems with such a thread,
etc. In the timer case above, for example, we would want to impose all
the same restrictions that the bottom half code runs under, because it
needs to service all the timers. On the other hand, a thread that only
handled bottom half issues for one interface might be able to relax
these restrictions. The key, I think, is knowing why the restrictions
are in place.

> > They do run on the current task's stack and they are called by
> > the scheduler with the interrupt system on. If they were to sleep, I
> > expect that the current process would be the one to sleep.
> No, you are simply wrong. Try it.

Right, it panics in schedule().
> > Granted this
> > is a _BIG NO-NO_, but I don't think there is a check in place to stop
> > it. So, how do we define process context or interrupt context?
> See the definition of in_interrupt(). Note that it checks to see if
> either a real interrupt handler or a bottom half is running on this
> CPU. Notice that schedule() panics if in_interrupt() is true.
> -- Dave


This archive was generated by hypermail 2b29 : Fri Jun 23 2000 - 21:00:18 EST