Re: Asynch I/O overloaded 2.2.15/2.3.99

From: Jeff V. Merkey (jmerkey@timpanogas.com)
Date: Tue Apr 11 2000 - 18:17:03 EST


Disregard. I found the problem behind this Oops: stack corruption.
Fixed!!! Sorry for bothering folks. I will post the full profile
numbers after this Oops fix is in and I can run all the tests again.

Jeff

"Jeff V. Merkey" wrote:
>
> The Oops I am seeing on 2.2.15 while doing pure Linux AIO is:
>
> Oops: 0002
> CPU: 0
> EIP: 0010:[<8011358a>]
>
> Process swapper (pid: 0, process nr: 1, stackpage=83ffbe24)
>
> Aiee, killing interrupt handler!
> Kernel panic: Attempted to kill the idle task!
> In interrupt handler - not syncing.
>
> Jeff
>
> "Jeff V. Merkey" wrote:
> >
> > I tried runs of 500 buffers, 1000 buffers, 2000 buffers, 3000 buffers,
> > and 4000 buffers.
> >
> > And the winners are!
> >
> > 1. ll_rw_blk (and add_request/make_request) (oink, oink..... oink,
> > oink ... rooting around down in the hardware -- I think it's looking for
> > truffles)
> >
> > and a close second:
> >
> > 2. ide_do_request
> > ide_delay_50ms (huge!!!)
> > ide_ioctl
> >
> > I am also seeing an occasional Oops when using **FULL** Linux AIO (I
> > call ll_rw_blk and don't call wait_on_buffer(); instead I signal the
> > NWFS LRU directly from the b_end_io callback routine). Also, do I have
> > to call run_task_queue(&tq_disk) all the time to get the AIO subsystem
> > to post the buffer-head requests? If I don't call it, the AIOs just
> > sit there and get processed about one every 10 seconds (???).
> >
> > Jeff
> >
> > Andi Kleen wrote:
> > >
> > > On Tue, Apr 11, 2000 at 05:05:39PM +0200, Andrea Arcangeli wrote:
> > > > >The main problem with the dentry and inode hashes is that they waste twice
> > > > >the memory they should (anchored list heads are *very* wasteful for hash
> > > > >tables). It is rather unlikely that the cost of the few saved if()s in the
> > > > >list macros makes up for the additional cache misses [patch will come
> > > > >as soon as I have tested it a bit more]
> > > >
> > > > You are perfectly right, but I really don't think it's for the speed but
> > > > for being allowed to use the list.h interface (at least that's why I used
> > > > the anchored list head in the write-COW fast-path lookup of the
> > > > lvm-snapshot code, even though I knew I wasn't optimizing memory usage ;).
> > > > I strongly disagree with adding further hand-rolled stuff like what we
> > > > have now just for the buffer and page cache hash tables. We do need the
> > > > same interface as list.h, _but_ one that uses a single-word-sized head.
> > >
> > > Yes, I wrote an hlist[1] for this purpose. It does not have exactly the
> > > same interface because the semantics are a bit different.
> > >
> > > -Andi
> > >
> > > [1] suggestions for better names are welcome
> > >
> > > -
> > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > the body of a message to majordomo@vger.rutgers.edu
> > > Please read the FAQ at http://www.tux.org/lkml/

This archive was generated by hypermail 2b29 : Sat Apr 15 2000 - 21:00:17 EST