Re: [PATCH 2/2] bcache: Convert to lock_cmp_fn

From: Kent Overstreet
Date: Wed May 10 2023 - 13:06:18 EST


On Wed, May 10, 2023 at 03:01:51PM +0200, Peter Zijlstra wrote:
> On Tue, May 09, 2023 at 03:58:47PM -0400, Kent Overstreet wrote:
> > Signed-off-by: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> > Cc: Coly Li <colyli@xxxxxxx>
> > ---
> > drivers/md/bcache/btree.c | 23 ++++++++++++++++++++++-
> > drivers/md/bcache/btree.h | 4 ++--
> > 2 files changed, 24 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
> > index 147c493a98..569f48958b 100644
> > --- a/drivers/md/bcache/btree.c
> > +++ b/drivers/md/bcache/btree.c
> > @@ -559,6 +559,27 @@ static void mca_data_alloc(struct btree *b, struct bkey *k, gfp_t gfp)
> > }
> > }
> >
> > +#define cmp_int(l, r) ((l > r) - (l < r))
> > +
> > +#ifdef CONFIG_PROVE_LOCKING
> > +static int btree_lock_cmp_fn(const struct lockdep_map *_a,
> > + const struct lockdep_map *_b)
> > +{
> > + const struct btree *a = container_of(_a, struct btree, lock.dep_map);
> > + const struct btree *b = container_of(_b, struct btree, lock.dep_map);
> > +
> > + return -cmp_int(a->level, b->level) ?: bkey_cmp(&a->key, &b->key);
> > +}
> > +
> > +static void btree_lock_print_fn(const struct lockdep_map *map)
> > +{
> > + const struct btree *b = container_of(map, struct btree, lock.dep_map);
> > +
> > + printk(KERN_CONT " l=%u %llu:%llu", b->level,
> > + KEY_INODE(&b->key), KEY_OFFSET(&b->key));
> > +}
> > +#endif
> > +
> > static struct btree *mca_bucket_alloc(struct cache_set *c,
> > struct bkey *k, gfp_t gfp)
> > {
> > @@ -572,7 +593,7 @@ static struct btree *mca_bucket_alloc(struct cache_set *c,
> > return NULL;
> >
> > init_rwsem(&b->lock);
> > - lockdep_set_novalidate_class(&b->lock);
> > + lock_set_cmp_fn(&b->lock, btree_lock_cmp_fn, btree_lock_print_fn);
> > mutex_init(&b->write_lock);
> > lockdep_set_novalidate_class(&b->write_lock);
>
> I can't help but notice you've got yet another novalidate_class usage
> here. What does it take to get rid of that?

This is a tricky one, because the correct lock ordering involves
particular locks of different types: we take b->lock before
b->write_lock, for a given btree node.
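
Roughly, the nesting you're asking about looks like this (illustrative
sketch only, not a real bcache function; b->lock is the rwsem and
b->write_lock the mutex initialized in mca_bucket_alloc() above):

  static void example_update_node(struct btree *b)
  {
          down_write(&b->lock);           /* validated via btree_lock_cmp_fn() */
          mutex_lock(&b->write_lock);     /* currently novalidate */

          /* ... modify the node, mark it dirty ... */

          mutex_unlock(&b->write_lock);
          up_write(&b->lock);
  }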

And like b->lock, b->write_lock can be held simultaneously for multiple
nodes, with the same ordering that btree_lock_cmp_fn() defines.
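
FWIW, within the write_lock class by itself the same trick as in this
patch would apply - something like this untested sketch, mirroring
btree_lock_cmp_fn() (it still wouldn't express the b->lock ->
b->write_lock dependency, though):

  #ifdef CONFIG_PROVE_LOCKING
  static int btree_write_lock_cmp_fn(const struct lockdep_map *_a,
                                     const struct lockdep_map *_b)
  {
          const struct btree *a = container_of(_a, struct btree, write_lock.dep_map);
          const struct btree *b = container_of(_b, struct btree, write_lock.dep_map);

          return -cmp_int(a->level, b->level) ?: bkey_cmp(&a->key, &b->key);
  }

  static void btree_write_lock_print_fn(const struct lockdep_map *map)
  {
          const struct btree *b = container_of(map, struct btree, write_lock.dep_map);

          printk(KERN_CONT " l=%u %llu:%llu", b->level,
                 KEY_INODE(&b->key), KEY_OFFSET(&b->key));
  }
  #endif

and in mca_bucket_alloc():

          mutex_init(&b->write_lock);
          lock_set_cmp_fn(&b->write_lock, btree_write_lock_cmp_fn,
                          btree_write_lock_print_fn);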

Conceptually we'd need a lock_cmp_fn that can compare locks of different
types...
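
Purely to illustrate what I mean - this isn't an API lockdep has today -
the comparison such a cross-type cmp_fn would want is "order by node,
then by which of the node's locks it is", e.g. with a hypothetical
per-node lock rank (0 for b->lock, 1 for b->write_lock):

  static int btree_any_lock_cmp(const struct btree *a, int a_rank,
                                const struct btree *b, int b_rank)
  {
          /* same node ordering as btree_lock_cmp_fn()... */
          int ret = -cmp_int(a->level, b->level) ?: bkey_cmp(&a->key, &b->key);

          /* ...then, within one node, b->lock before b->write_lock */
          return ret ?: cmp_int(a_rank, b_rank);
  }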

This patchset might be almost enough to do that; I'll give it a bit more
thought.