Re: [patch 0/3] j_state_lock, j_list_lock, remove-bitlocks

From: Steven Rostedt
Date: Wed Mar 16 2005 - 12:49:14 EST

On Wed, 16 Mar 2005, Steven Rostedt wrote:

>
> Hi Ingo,
>
> I just ran this with PREEMPT_RT and it works fine.

Not quite, and I assume that some of the other patches I sent have this
same problem. jbd_trylock_bh_state really scares me. In fs/jbd/commit.c,
journal_commit_transaction() has the following code:


write_out_data:
	cond_resched();
	spin_lock(&journal->j_list_lock);

	while (commit_transaction->t_sync_datalist) {
		struct buffer_head *bh;

		jh = commit_transaction->t_sync_datalist;
		commit_transaction->t_sync_datalist = jh->b_tnext;
		bh = jh2bh(jh);
		if (buffer_locked(bh)) {
			BUFFER_TRACE(bh, "locked");
			if (!inverted_lock(journal, bh))
				goto write_out_data;


where inverted_lock() is simply:


/*
 * Try to acquire jbd_lock_bh_state() against the buffer, when j_list_lock
 * is held. For ranking reasons we must trylock. If we lose, schedule away
 * and return 0. j_list_lock is dropped in this case.
 */
static int inverted_lock(journal_t *journal, struct buffer_head *bh)
{
	if (!jbd_trylock_bh_state(bh)) {
		spin_unlock(&journal->j_list_lock);
		schedule();
		return 0;
	}
	return 1;
}


So, with kjournald running as SCHED_FIFO, it may hit this path (as it did
in my last test) and fail to get the lock. All inverted_lock() does is
release the other lock (for ranking reasons), call schedule(), and try
again. With kjournald the highest-priority runnable process on the system
(UP), this deadlocks: whoever holds the lock never gets a chance to run.
There are a couple of other places in checkpoint.c where
jbd_trylock_bh_state is used, but this is the one place where it
definitely deadlocks the system. I believe the code in checkpoint.c has
the same problem.

I guess one way to solve this is to add a wait queue here (sleeping on it
instead of just calling schedule()), and have the lock holder wake up
everyone on the waitqueue when it releases the lock.
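
Something along these lines, as a minimal sketch. The waitqueue head
jbd_state_wait and the jbd_unlock_bh_state_wake() wrapper are hypothetical
names I'm making up here, not from the actual jbd code, and every unlock
site would have to be converted to the wrapper for this to work:

/*
 * Hypothetical sketch only. Assumes a global waitqueue head and that
 * all jbd_unlock_bh_state() call sites go through the wake-up variant
 * below. jbd_is_locked_bh_state() is the existing helper from
 * include/linux/jbd.h.
 */
static DECLARE_WAIT_QUEUE_HEAD(jbd_state_wait);

static int inverted_lock(journal_t *journal, struct buffer_head *bh)
{
	if (!jbd_trylock_bh_state(bh)) {
		DEFINE_WAIT(wait);

		spin_unlock(&journal->j_list_lock);
		/*
		 * Sleep until the holder wakes us, instead of spinning
		 * through schedule() while we remain the highest-priority
		 * runnable task on a UP box.
		 */
		prepare_to_wait(&jbd_state_wait, &wait, TASK_UNINTERRUPTIBLE);
		if (jbd_is_locked_bh_state(bh))
			schedule();
		finish_wait(&jbd_state_wait, &wait);
		return 0;
	}
	return 1;
}

/* Unlock side: release the bit lock, then kick any waiters. */
static inline void jbd_unlock_bh_state_wake(struct buffer_head *bh)
{
	jbd_unlock_bh_state(bh);
	wake_up_all(&jbd_state_wait);
}

The prepare_to_wait() before the recheck of the lock state is what closes
the race: if the holder drops the lock between our trylock failure and the
sleep, the wake-up is not lost.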

-- Steve
