Re: [PATCH bpf-next 1/5] bpf: Implement task local storage

From: KP Singh
Date: Fri Oct 30 2020 - 07:07:45 EST


"

On Fri, Oct 30, 2020 at 12:28 AM Song Liu <song@xxxxxxxxxx> wrote:
>
> On Wed, Oct 28, 2020 at 9:17 AM KP Singh <kpsingh@xxxxxxxxxxxx> wrote:
> >
> > From: KP Singh <kpsingh@xxxxxxxxxx>
> >
> > Similar to bpf_local_storage for sockets and inodes, add local
> > storage for task_struct.
> >
> > The life-cycle of the storage is managed with the life-cycle of the
> > task_struct, i.e. the storage is destroyed along with the owning
> > task via a callback to bpf_task_storage_free from the task_free
> > LSM hook.
>
> It looks like task local storage is tightly coupled to LSM. As we discussed,
> it would be great to use task local storage in tracing programs. Would you
> like to enable it from the beginning? Alternatively, I guess we can also do
> follow-up patches.
>

I would prefer to do it in follow-up patches.
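
To sketch what using this from an LSM program could look like (a rough
example, not from the patches themselves; the map, struct, and program
names are mine, and it assumes the BPF_MAP_TYPE_TASK_STORAGE map type
and bpf_task_storage_get() helper added by this series):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct storage {
	__u64 clone_flags;
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct storage);
} task_storage SEC(".maps");

SEC("lsm/task_alloc")
int BPF_PROG(task_alloc, struct task_struct *task,
	     unsigned long clone_flags)
{
	struct storage *st;

	/* Create the storage on first use; it is freed together with
	 * the task via bpf_task_storage_free() from the task_free LSM
	 * hook, so the program never deletes it explicitly.
	 */
	st = bpf_task_storage_get(&task_storage, task, 0,
				  BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (!st)
		return 0;

	st->clone_flags = clone_flags;
	return 0;
}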

> >
> > The BPF LSM allocates an __rcu pointer to the bpf_local_storage in
> > its security blob; since security blobs are now stackable, the BPF
> > LSM can co-exist with other LSMs.
> >
> > Userspace map operations can be done by passing a pidfd as the key
> > to the lookup, update and delete operations.
>
> While testing task local storage, I noticed a limitation of pid fd:
>
> /* Currently, the process identified by
> * @pid must be a thread-group leader. This restriction currently exists
> * for all aspects of pidfds including pidfd creation (CLONE_PIDFD cannot
> * be used with CLONE_THREAD) and pidfd polling (only supports thread group
> * leaders).
> */
>
> This could be a problem for some use cases. How about we try to remove
> this restriction (maybe with a new flag to pidfd_open) as part of this set?

I would appreciate it if we could also do this in a follow-up patch.

I do see that there is a comment in fork.c:

"CLONE_THREAD is blocked until someone really needs it."

But I don't understand the requirements well enough and would thus prefer
to do this in a follow-up series.
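
For completeness, the userspace side described in the commit message
would look roughly like this (a hedged sketch: error handling is
trimmed, dump_task_storage() and struct storage are illustrative, and
map_fd is assumed to refer to a BPF_MAP_TYPE_TASK_STORAGE map):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/types.h>
#include <bpf/bpf.h>

struct storage {
	__u64 clone_flags;
};

int dump_task_storage(int map_fd, pid_t pid)
{
	struct storage val;
	/* pidfd_open(2); the thread-group leader restriction quoted
	 * above applies to this call.
	 */
	int pidfd = syscall(SYS_pidfd_open, pid, 0);

	if (pidfd < 0)
		return -1;

	/* The pidfd, not the pid, is the map key. */
	if (bpf_map_lookup_elem(map_fd, &pidfd, &val)) {
		close(pidfd);
		return -1;
	}

	printf("pid %d: clone_flags=0x%llx\n", pid,
	       (unsigned long long)val.clone_flags);
	close(pidfd);
	return 0;
}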

- KP

>
> Thanks,
> Song
>
> [...]