Re: [PATCH] fs/ceph/super: add mount options "snapdir{mode,uid,gid}"

From: Xiubo Li
Date: Wed Oct 19 2022 - 21:32:21 EST



On 11/10/2022 18:45, Jeff Layton wrote:
> On Mon, 2022-10-10 at 10:02 +0800, Xiubo Li wrote:
>> On 09/10/2022 18:27, Max Kellermann wrote:
>>> On Sun, Oct 9, 2022 at 10:43 AM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
>>>> I mean CEPHFS CLIENT CAPABILITIES [1].
>>> I know that, but that's not suitable for me. This is client-specific,
>>> not user (uid/gid) specific.

>>> In my use case, a server can run unprivileged user processes which
>>> should not be able to create snapshots for their own home directory, and
>>> ideally they should not even be able to traverse into the ".snap"
>>> directory and access the snapshots created of their home directory.
>>> Other (non-superuser) system processes, however, should be able to
>>> manage snapshots. It should be possible to bind-mount snapshots into
>>> the user's mount namespace.

>>> All of that is possible with my patch, but impossible with your
>>> suggestion. The client-specific approach is all-or-nothing (unless I'm
>>> missing something vital).
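
(For concreteness, the setup described above would presumably be mounted
along these lines. This is only a sketch: the option names come from the
patch subject, while the value syntax, the monitor address, and the
snapshot-manager uid/gid 1500 are assumptions.)

/*
 * Sketch: a cephfs mount where a dedicated snapshot-manager user
 * (uid/gid 1500, assumed) owns the synthetic ".snap" directory and
 * unprivileged users cannot even traverse into it.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* option names from the patch subject; value syntax assumed */
	if (mount("mon1:6789:/", "/mnt/cephfs", "ceph", 0,
		  "name=admin,snapdirmode=0700,snapdiruid=1500,snapdirgid=1500")) {
		perror("mount");
		return 1;
	}
	return 0;
}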

>>>> The snapdir name is a different case.
>>> But this is only about the snapdir. The snapdir does not exist on the
>>> server; it is synthesized on the client (in the Linux kernel cephfs
>>> code).
>> This could be applied to its parent dir instead, as one piece of
>> metadata on the MDS side, and on the client side it would be
>> transferred to the snapdir's metadata, just like the snapshots.
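
(For context: the client already synthesizes the snapdir inode in
ceph_get_snapdir() in fs/ceph/inode.c, inheriting mode/uid/gid from the
parent directory. A rough sketch of the kind of override the patch
proposes follows; the fsopt field and flag names are made up for
illustration.)

	if (fsopt->flags & CEPH_MOUNT_OPT_SNAPDIR_IDS) {
		/* proposed: take mode/uid/gid from the mount options */
		inode->i_mode = fsopt->snapdir_mode;
		inode->i_uid  = make_kuid(&init_user_ns, fsopt->snapdir_uid);
		inode->i_gid  = make_kgid(&init_user_ns, fsopt->snapdir_gid);
	} else {
		/* current behaviour: inherit everything from the parent */
		inode->i_mode = parent->i_mode;
		inode->i_uid  = parent->i_uid;
		inode->i_gid  = parent->i_gid;
	}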

>> But just ignore this approach.

>>>> But your current approach will introduce issues when a UID/GID is
>>>> reused after a user/group is deleted?
>>> The UID I would specify is one which exists on the client, for a
>>> dedicated system user whose purpose is to manage cephfs snapshots of
>>> all users. The UID is created when the machine is installed, and is
>>> never deleted.
>> This is an ideal use case IMO.
>>
>> I googled about the issues with reusing UIDs/GIDs and found that
>> someone has hit a similar issue in their use case.

> This is always a danger and not just with ceph. The solution to that is
> good sysadmin practices (i.e. don't reuse uid/gid values without
> sanitizing the filesystems first).

Yeah, this sounds reasonable.
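
(A minimal sketch of such sanitizing, assuming the retired uid is 123
and that its leftover files are handed to "nobody" before the uid is
reused; the path and numeric values are examples only.)

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <sys/stat.h>
#include <unistd.h>

#define RETIRED_UID	  123	/* uid being retired (example value) */
#define QUARANTINE	65534	/* "nobody" on many systems (assumed) */

/* reassign every file still owned by the retired uid */
static int visit(const char *path, const struct stat *st,
		 int type, struct FTW *ftw)
{
	if (st->st_uid == RETIRED_UID)
		lchown(path, QUARANTINE, (gid_t)-1);
	return 0;	/* keep walking */
}

int main(void)
{
	return nftw("/mnt/cephfs", visit, 64, FTW_PHYS);
}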

>>>> Maybe the proper approach is POSIX ACLs. Then by default the .snap
>>>> dir will inherit the permissions from its parent and you can change
>>>> them as you wish. This permission could be propagated to all the
>>>> other clients too?
>>> No, that would be impractical and unreliable.
>>> Impractical because it would require me to walk the whole filesystem
>>> tree and let the kernel synthesize the snapdir inode for all
>>> directories and change its ACL;
>> No, it doesn't have to. This could work just like the snaprealm
>> hierarchy in kceph.
>>
>> Only the topmost directory needs to record the ACL, and all the
>> descendants will point to and use it if they don't have their own ACLs.
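
(Illustrating that lookup scheme: a directory's snapdir ACL resolves to
the nearest ancestor that records one, much as snaprealms resolve to the
nearest ancestor realm. All types and names below are hypothetical.)

struct snapdir_acl;			/* opaque here; hypothetical */

struct dir_node {			/* hypothetical directory node */
	struct dir_node *parent;	/* NULL at the filesystem root */
	struct snapdir_acl *acl;	/* non-NULL only where explicitly set */
};

/* Walk upwards until an ancestor with an explicit ACL is found. */
static struct snapdir_acl *find_snapdir_acl(struct dir_node *dir)
{
	for (; dir; dir = dir->parent)
		if (dir->acl)
			return dir->acl;
	return NULL;			/* none set anywhere up the tree */
}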

>>> impractical because walking millions
>>> of directories takes longer than I am willing to wait.
>>> Unreliable because there would be race problems when another client
>>> (or even the local client) creates a new directory. Until my local
>>> "snapdir ACL daemon" learns about the existence of the new directory
>>> and is able to update its ACL, the user may already have messed with
>>> it.
>> For the multiple-client case, I think the cephfs capabilities [3] could
>> guarantee the consistency of this. As for the single-client case: if
>> someone else can change or mess up the ACL right after the directory is
>> created, before the user gets a chance to update it, then won't the
>> existing ACLs have the same issue?
>>
>> [3] https://docs.ceph.com/en/quincy/cephfs/capabilities/


>>> Neither of those is a problem with my patch.

>> Jeff,
>>
>> Any ideas?

> I tend to agree with Max here. The .snap dir is a client-side fiction,
> so trying to do something on the MDS to govern its use seems a bit odd.
> cephx is really about authenticating clients. I know we do things like
> enforce root squashing on the MDS, but this is a little different.
>
> Now, all of that said, snapshot handling is an area where I'm just not
> that knowledgeable. Feel free to ignore my opinion here as uninformed.

I am thinking the current cephfs has the same issue we discussed here,
because cephfs saves the UID/GID numbers in the CInode metadata. When
multiple clients share the same cephfs, a user on one client node could
cross-access another user's files. For example:

On client nodeA:

user1's UID is 123, user2's UID is 321.

On client nodeB:

user1's UID is 321, user2's UID is 123.

And if user1 creates a fileA on client nodeA, then user2 could access it
from client nodeB.

Doesn't this also sound more like a client-side fiction?

- Xiubo