Re: [PATCH v8 07/11] LSM: Helpers for attribute names and filling an lsm_ctx

From: Casey Schaufler
Date: Tue Apr 18 2023 - 18:43:26 EST


On 4/18/2023 2:51 PM, Paul Moore wrote:
> On Tue, Apr 11, 2023 at 12:02 PM Casey Schaufler <casey@xxxxxxxxxxxxxxxx> wrote:
>> Add lsm_name_to_attr(), which translates a text string to an
>> LSM_ATTR value if one is available.
>>
>> Add lsm_fill_user_ctx(), which fills a struct lsm_ctx, including
>> the trailing attribute value. The .len value is padded to a multiple
>> of the size of the structure for alignment.
>>
>> Both are used in module-specific components of LSM system calls.
>>
>> Signed-off-by: Casey Schaufler <casey@xxxxxxxxxxxxxxxx>
>> ---
>> include/linux/security.h | 13 +++++++++++
>> security/lsm_syscalls.c | 24 ++++++++++++++++++++
>> security/security.c | 48 ++++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 85 insertions(+)
> ..
>
>> diff --git a/security/lsm_syscalls.c b/security/lsm_syscalls.c
>> index 6efbe244d304..67106f642422 100644
>> --- a/security/lsm_syscalls.c
>> +++ b/security/lsm_syscalls.c
>> @@ -17,6 +17,30 @@
>> #include <linux/lsm_hooks.h>
>> #include <uapi/linux/lsm.h>
>>
>> +/**
>> + * lsm_name_to_attr - map an LSM attribute name to its ID
>> + * @name: name of the attribute
>> + *
>> + * Returns the LSM attribute value associated with @name, or 0 if
>> + * there is no mapping.
>> + */
>> +u64 lsm_name_to_attr(const char *name)
>> +{
>> + if (!strcmp(name, "current"))
>> + return LSM_ATTR_CURRENT;
>> + if (!strcmp(name, "exec"))
>> + return LSM_ATTR_EXEC;
>> + if (!strcmp(name, "fscreate"))
>> + return LSM_ATTR_FSCREATE;
>> + if (!strcmp(name, "keycreate"))
>> + return LSM_ATTR_KEYCREATE;
>> + if (!strcmp(name, "prev"))
>> + return LSM_ATTR_PREV;
>> + if (!strcmp(name, "sockcreate"))
>> + return LSM_ATTR_SOCKCREATE;
>> + return 0;
>> +}
> Thank you :)

It didn't hurt all that badly.
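For the record, the chain of strcmp()s above could equally be table-driven, which keeps the name/value pairs in one place if more attributes are added later. A hypothetical userspace sketch (the numeric LSM_ATTR_* values here are placeholders standing in for the uapi definitions, not taken from the patch):

```c
/* Hypothetical table-driven rework of lsm_name_to_attr(); the numeric
 * attribute values are placeholders for the uapi LSM_ATTR_* constants. */
#include <stdint.h>
#include <string.h>

#define LSM_ATTR_CURRENT	100
#define LSM_ATTR_EXEC		101
#define LSM_ATTR_FSCREATE	102
#define LSM_ATTR_KEYCREATE	103
#define LSM_ATTR_PREV		104
#define LSM_ATTR_SOCKCREATE	105

static const struct {
	const char *name;
	uint64_t attr;
} lsm_attr_names[] = {
	{ "current",	LSM_ATTR_CURRENT },
	{ "exec",	LSM_ATTR_EXEC },
	{ "fscreate",	LSM_ATTR_FSCREATE },
	{ "keycreate",	LSM_ATTR_KEYCREATE },
	{ "prev",	LSM_ATTR_PREV },
	{ "sockcreate",	LSM_ATTR_SOCKCREATE },
};

uint64_t lsm_name_to_attr(const char *name)
{
	size_t i;

	/* Linear scan is fine for a handful of attribute names */
	for (i = 0; i < sizeof(lsm_attr_names) / sizeof(lsm_attr_names[0]); i++)
		if (!strcmp(name, lsm_attr_names[i].name))
			return lsm_attr_names[i].attr;
	return 0;	/* no mapping */
}
```

Either form behaves identically; the table just trades a few bytes of rodata for less repeated control flow.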

>
>> /**
>> * sys_lsm_set_self_attr - Set current task's security module attribute
>> * @attr: which attribute to set
>> diff --git a/security/security.c b/security/security.c
>> index bfe9a1a426b2..453f3ff591ec 100644
>> --- a/security/security.c
>> +++ b/security/security.c
>> @@ -752,6 +752,54 @@ static int lsm_superblock_alloc(struct super_block *sb)
>> return 0;
>> }
>>
>> +/**
>> + * lsm_fill_user_ctx - Fill a user space lsm_ctx structure
>> + * @ctx: an LSM context to be filled
>> + * @context: the new context value
>> + * @context_size: the size of the new context value
>> + * @id: LSM id
>> + * @flags: LSM defined flags
>> + *
>> + * Fill all of the fields in a user space lsm_ctx structure.
>> + * Caller is assumed to have verified that @ctx has enough space
>> + * for @context.
>> + *
>> + * The total length is padded to an integral number of lsm_ctx.
> Considering that lsm_ctx is variable length I'm not sure that makes a
> lot of sense, how about we pad the total length so that the @ctx entry
> is a multiple of 64-bits?

64 is fine.

> If needed we can always change this later
> as the lsm_ctx struct is inherently variable in length and userspace
> will need to deal with the buffer regardless of alignment.
>
>> + * Returns 0 on success, -EFAULT on a copyout error.
>> + */
>> +int lsm_fill_user_ctx(struct lsm_ctx __user *ctx, void *context,
>> + size_t context_size, u64 id, u64 flags)
>> +{
>> + struct lsm_ctx *lctx;
>> + size_t locallen;
>> + u8 *composite;
>> + int rc = 0;
>> +
>> + locallen = sizeof(*ctx);
>> + if (context_size)
>> + locallen += sizeof(*ctx) * ((context_size / sizeof(*ctx)) + 1);
> It seems cleaner to use the kernel's ALIGN() macro:

Indeed. I'll do it.

>
> /* ensure the lsm_ctx length is a multiple of 64-bits */
> locallen = ALIGN(sizeof(*ctx) + context_size, 8);
> lctx = kzalloc(locallen, GFP_KERNEL);
> if (!lctx)
> return -ENOMEM;
>
>> + composite = kzalloc(locallen, GFP_KERNEL);
>> + if (composite == NULL)
>> + return -ENOMEM;
>> +
>> + lctx = (struct lsm_ctx *)composite;
>> + lctx->id = id;
>> + lctx->flags = flags;
>> + lctx->ctx_len = context_size;
>> + lctx->len = locallen;
>> +
>> + memcpy(composite + sizeof(*lctx), context, context_size);
> Is there a problem with doing `memcpy(lctx->ctx, context,
> context_size)` in place of the memcpy above?

Nope.

> That is easier to read
> and we can get rid of @composite.

Point.

>> + if (copy_to_user(ctx, composite, locallen))
>> + rc = -EFAULT;
>> +
>> + kfree(composite);
>> +
>> + return rc;
>> +}
> I understand Mickaël asked you to do a single copy_to_user(), but I'm
> not sure it is worth it if we have to add a temporary buffer
> allocation like that. How about something like below (v7 with some
> tweaks/padding)? You could be a bit more clever with the memset if
> you want, I was just typing this up quickly ...

I prefer two copies to the allocation myself. I'll incorporate this.

>
> int lsm_fill_user_ctx(...)
> {
> struct lsm_ctx lctx;
>
> /* ensure the lctx length is a multiple of 64-bits */
> lctx.len = ALIGN(sizeof(lctx) + context_size, 8);
>
> lctx.id = id;
> lctx.flags = flags;
> lctx.ctx_len = context_size;
>
> if (clear_user(ctx, lctx.len))
> return -EFAULT;
> if (copy_to_user(ctx, &lctx, sizeof(lctx)))
> return -EFAULT;
> if (copy_to_user(&ctx[1], context, context_size))
> return -EFAULT;
>
> return 0;
> }
>
> --
> paul-moore.com
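For anyone following along, the padding arithmetic under discussion can be checked in isolation. Below is a userspace sketch: ALIGN() is a re-statement of the kernel macro (valid for power-of-two alignments only), and lsm_ctx is reduced to the four fixed header fields named in the patch; the exact uapi layout is an assumption here, not quoted from the thread.

```c
/* Userspace sketch of the 64-bit padding math discussed above.
 * ALIGN() restates the kernel macro for power-of-two alignments;
 * the struct layout is assumed from the fields used in the patch. */
#include <stddef.h>
#include <stdint.h>

#define ALIGN(x, a)	(((x) + ((a) - 1)) & ~((size_t)(a) - 1))

struct lsm_ctx {
	uint64_t id;
	uint64_t flags;
	uint64_t len;
	uint64_t ctx_len;
	/* followed by ctx_len bytes of attribute value */
};

/* Total buffer length: fixed header plus value, rounded up to 64 bits */
static size_t lsm_ctx_total_len(size_t context_size)
{
	return ALIGN(sizeof(struct lsm_ctx) + context_size, 8);
}
```

With a 32-byte header, a 1-byte context value pads the total to 40 bytes, and an 8-byte value fits in 40 exactly, which is the "multiple of 64-bits" behaviour Paul asked for.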