Re: [RFC] Per-user namespace process accounting

From: Pavel Emelyanov
Date: Tue Jun 03 2014 - 13:02:28 EST


On 05/29/2014 07:32 PM, Serge Hallyn wrote:
> Quoting Marian Marinov (mm@xxxxxx):
>>
>> On 05/29/2014 01:06 PM, Eric W. Biederman wrote:
>>> Marian Marinov <mm@xxxxxx> writes:
>>>
>>>> Hello,
>>>>
>>>> I have the following proposition.
>>>>
>>>> The number of currently running processes is accounted in the root user namespace. The problem I'm facing is that
>>>> multiple containers in different user namespaces share the same per-UID process counters.
>>>
>>> That is deliberate.
>>
>> And I understand that very well ;)
>>
>>>
>>>> So if containerX runs 100 processes with UID 99, containerY must have an NPROC limit above 100 in order to execute
>>>> any processes with its own UID 99.
>>>>
>>>> I know that some of you will tell me that I should not provision all of my containers with the same UID/GID maps,
>>>> but this brings another problem.
>>>>
>>>> We are provisioning the containers from a template. The template has a lot of files, 500k and more, and chowning
>>>> these causes a lot of I/O and also slows down provisioning considerably.
>>>>
>>>> The other problem is that when we migrate a container from one host machine to another, the IDs may already be
>>>> in use on the new machine and we need to chown all the files again.
>>>
>>> You should have the same uid allocations for all machines in your fleet as much as possible. That has been true
>>> ever since NFS was invented and is not new here. You can avoid the cost of chowning if you untar your files inside
>>> of your user namespace. You can have different maps per machine if you are crazy enough to do that. You can even
>>> have shared uids that you use to share files between containers as long as none of those files is setuid. And map
>>> those shared files to some kind of nobody user in your user namespace.
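
[Side note: the "untar inside your user namespace" approach can be sketched roughly like this. It is only an
illustration under assumptions of my own: the archive and rootfs paths are placeholders, the map covers a single
uid, and a real provisioning tool would normally have a privileged helper write a full uid_map range.]

/*
 * Minimal sketch: enter a new user namespace, map the current uid/gid to
 * root inside it, then extract the template so the files come out owned
 * by the in-namespace root without any chown pass over the tree.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

static void write_map(const char *path, const char *buf)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		if (errno == ENOENT)	/* e.g. no "setgroups" file on older kernels */
			return;
		perror(path);
		exit(1);
	}
	if (write(fd, buf, strlen(buf)) < 0) {
		perror(path);
		exit(1);
	}
	close(fd);
}

int main(void)
{
	char map[64];
	uid_t uid = getuid();
	gid_t gid = getgid();

	if (unshare(CLONE_NEWUSER) < 0) {
		perror("unshare");
		return 1;
	}

	/* Kernels >= 3.19 require this before an unprivileged gid_map write. */
	write_map("/proc/self/setgroups", "deny");

	snprintf(map, sizeof(map), "0 %u 1", (unsigned)uid);
	write_map("/proc/self/uid_map", map);
	snprintf(map, sizeof(map), "0 %u 1", (unsigned)gid);
	write_map("/proc/self/gid_map", map);

	/* Extraction now runs as the namespace's uid 0. */
	execlp("tar", "tar", "-xpf", "/path/to/template.tar", "-C", "/path/to/rootfs", (char *)NULL);
	perror("execlp");
	return 1;
}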
>>
>> We are not using NFS. We are using shared block storage that offers us snapshots, so provisioning new containers is
>> extremely cheap and fast. Comparing that with untar is comparing a race car with a Smart. Yes, it can be done, and no,
>> I do not believe we should go backwards.
>>
>> We do not share filesystems between containers, we offer them block devices.
>
> Yes, this is a real nuisance for OpenStack-style deployments.
>
> One nice solution to this imo would be a very thin stackable filesystem
> which does uid shifting, or, better yet, a non-stackable way of shifting
> uids at mount.

I vote for the non-stackable way too. Maybe at the generic VFS level, so that individual filesystems
don't have to bother with it. From what I've seen, even simple stacking is quite a challenge.
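
To make that concrete, below is a rough userspace model of the translation such a per-mount shift would have to
do. None of the names correspond to an existing kernel interface; they are invented purely for the sketch.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-mount shift state; not a real kernel structure. */
struct shifted_mount {
	uint32_t host_base;	/* first host (kernel) uid of the range */
	uint32_t disk_base;	/* first uid as stored on disk          */
	uint32_t count;		/* length of the mapped range           */
};

/*
 * Translate an on-disk uid into the uid the host should see, the way a
 * generic VFS shim might on stat()/getattr().  Out-of-range uids fall
 * back to an invalid value that a real implementation would present as
 * some nobody/overflow uid.
 */
static uint32_t shift_up(const struct shifted_mount *m, uint32_t disk_uid)
{
	if (disk_uid >= m->disk_base && disk_uid - m->disk_base < m->count)
		return m->host_base + (disk_uid - m->disk_base);
	return (uint32_t)-1;
}

int main(void)
{
	/* Container whose uids 0-65535 are backed by host uids starting at 100000. */
	struct shifted_mount m = { .host_base = 100000, .disk_base = 0, .count = 65536 };

	/* The template's uid 99 shows up as host uid 100099, so two containers
	 * both using "uid 99" no longer collide on the host side. */
	printf("disk uid 99 -> host uid %u\n", shift_up(&m, 99));
	return 0;
}

The write-side translation is just the inverse, which is why doing it once at a generic VFS boundary looks more
attractive than teaching every filesystem about it.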

Thanks,
Pavel