Re: [PATCH 34/40] lib: code tagging context capture support

From: Suren Baghdasaryan
Date: Wed May 03 2023 - 11:19:04 EST


On Wed, May 3, 2023 at 12:36 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Mon 01-05-23 09:54:44, Suren Baghdasaryan wrote:
> [...]
> > +static inline void add_ctx(struct codetag_ctx *ctx,
> > +			   struct codetag_with_ctx *ctc)
> > +{
> > +	kref_init(&ctx->refcount);
> > +	spin_lock(&ctc->ctx_lock);
> > +	ctx->flags = CTC_FLAG_CTX_PTR;
> > +	ctx->ctc = ctc;
> > +	list_add_tail(&ctx->node, &ctc->ctx_head);
> > +	spin_unlock(&ctc->ctx_lock);
>
> AFAIU every single tracked allocation will get its own codetag_ctx.
> There is no aggregation per allocation site or anything else. This looks
> like a scalability and a memory overhead red flag to me.

True. The number of stored contexts is not bounded here. We could
introduce a global limit on the amount of memory used to store contexts
and reuse the oldest entry (in LRU fashion) once we hit that limit?

>
> > +}
> > +
> > +static inline void rem_ctx(struct codetag_ctx *ctx,
> > +			   void (*free_ctx)(struct kref *refcount))
> > +{
> > +	struct codetag_with_ctx *ctc = ctx->ctc;
> > +
> > +	spin_lock(&ctc->ctx_lock);
>
> This could deadlock when the allocator is called from IRQ context.

I see. spin_lock_irqsave()/spin_unlock_irqrestore() then?

Thanks for the feedback!
Suren.

>
> > +	/* ctx might have been removed while we were using it */
> > +	if (!list_empty(&ctx->node))
> > +		list_del_init(&ctx->node);
> > +	spin_unlock(&ctc->ctx_lock);
> > +	kref_put(&ctx->refcount, free_ctx);
> --
> Michal Hocko
> SUSE Labs