Re: [GIT PULL] kdbus for 4.1-rc1

From: Andy Lutomirski
Date: Thu Apr 16 2015 - 10:35:28 EST


On Thu, Apr 16, 2015 at 6:13 AM, Tom Gundersen <teg@xxxxxxx> wrote:
> On 04/15/2015 10:22 PM, Andy Lutomirski wrote:
>> On Wed, Apr 15, 2015 at 9:44 AM, Havoc Pennington <hp@xxxxxxxxx> wrote:
>>> That is, with dbus, if I send a broadcast message, then send a
>>> unicast request to another client, then drop the connection (causing
>>> the bus to broadcast that I've dropped), the other client will see
>>> those things in that order: the broadcast, then the request, and
>>> then the disconnect.
>>
>> This leads me to a potentially interesting question: where's the
>> buffering? If there's a bus with lots of untrusted clients and one of
>> them broadcasts data faster than all receivers can process it, where
>> does it go?
>
> The concepts implemented in kdbus are actually quite different from
> those in dbus1:
>
> Every connection to the bus has a memory pool assigned to it, which
> stores incoming messages and variably-sized runtime data returned by
> kdbus. The pool memory is swappable, backed by a shmem file
> associated with the bus connection.
>
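
A minimal user-space sketch of the pool concept (illustrative only, not
the kdbus API; memfd_create(2) stands in for the per-connection shmem
file, and the size is made up):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_SIZE (16UL * 1024 * 1024)	/* 16 MiB, illustrative */

int main(void)
{
	/* Stand-in for the per-connection shmem file (glibc 2.27+). */
	int fd = memfd_create("conn-pool", 0);
	if (fd < 0 || ftruncate(fd, POOL_SIZE) < 0) {
		perror("pool setup");
		return 1;
	}

	/*
	 * The receiver maps the pool read-only; pages are shmem-backed
	 * and therefore swappable under memory pressure.
	 */
	void *pool = mmap(NULL, POOL_SIZE, PROT_READ, MAP_SHARED, fd, 0);
	if (pool == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("pool mapped at %p\n", pool);
	munmap(pool, POOL_SIZE);
	close(fd);
	return 0;
}
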
> Also, broadcasts are opt-in, so you only receive them if you have
> subscribed to the specific signal. A broadcast is sent either by
> another userspace task or by the kernel itself, for things like name
> owner changes. To receive one, a connection must first install a
> match; by default, no one receives any broadcasts.
>
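
The opt-in semantics, sketched as a toy match table in plain C (not the
actual KDBUS_CMD_MATCH_ADD ioctl or its structures):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_MATCHES 8

struct connection {
	const char *name;
	const char *matches[MAX_MATCHES];	/* subscribed signal names */
	int n_matches;
};

static void install_match(struct connection *c, const char *signal)
{
	if (c->n_matches < MAX_MATCHES)
		c->matches[c->n_matches++] = signal;
}

static bool wants(const struct connection *c, const char *signal)
{
	for (int i = 0; i < c->n_matches; i++)
		if (!strcmp(c->matches[i], signal))
			return true;
	return false;	/* no match installed -> no broadcasts at all */
}

int main(void)
{
	struct connection a = { .name = "a" }, b = { .name = "b" };
	struct connection *conns[] = { &a, &b };

	install_match(&a, "NameOwnerChanged");

	/* Delivery loop for one broadcast: only "a" receives it. */
	for (int i = 0; i < 2; i++)
		if (wants(conns[i], "NameOwnerChanged"))
			printf("deliver to %s\n", conns[i]->name);
	return 0;
}
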
> All types of messages (unicast and broadcast) are stored directly
> into a pool slice of the receiving connection, and this slice is not
> reused by the kernel until userspace has finished with it and freed
> it. Hence, a client that doesn't process its incoming messages will,
> at some point, run out of pool space. If that happens for a unicast
> message, the sender gets an EXFULL error. If it happens for a
> broadcast message, all we can do is drop the message and tell the
> receiver how many messages have been lost the next time it issues
> KDBUS_CMD_RECV. There's more on that in kdbus.message(7).
>
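
The resulting backpressure rules, as a toy allocator (EXFULL is the
real errno; everything else here is simplified):

#include <errno.h>
#include <stdio.h>

struct conn {
	size_t pool_free;	/* bytes left in this connection's pool */
	unsigned lost;		/* broadcasts dropped since the last recv */
};

/* A unicast that doesn't fit fails the *sender*. */
static int send_unicast(struct conn *dst, size_t size)
{
	if (size > dst->pool_free)
		return -EXFULL;
	dst->pool_free -= size;	/* slice is held until userspace frees it */
	return 0;
}

/* A broadcast that doesn't fit is silently dropped and counted. */
static void send_broadcast(struct conn *dst, size_t size)
{
	if (size > dst->pool_free)
		dst->lost++;
	else
		dst->pool_free -= size;
}

/* The receiver learns the drop count at its next recv. */
static unsigned recv_lost_count(struct conn *c)
{
	unsigned n = c->lost;
	c->lost = 0;
	return n;
}

int main(void)
{
	struct conn c = { .pool_free = 100 };

	send_broadcast(&c, 80);
	send_broadcast(&c, 80);	/* pool full: dropped, counted */
	if (send_unicast(&c, 80) == -EXFULL)
		printf("sender got EXFULL\n");
	printf("broadcasts lost: %u\n", recv_lost_count(&c));
	return 0;
}
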
> Also note that kdbus has quota logic which protects against a single
> connection conducting a DoS against another one. Together with the
> policy code, this logic prevents one peer from flooding the pool of
> another peer. Communication with a third party is not affected,
> thanks to the fair allocation scheme of the pool logic.
>
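
The quota idea, in the same toy terms (the per-sender share here is
invented; the real accounting lives in the pool logic):

#include <errno.h>
#include <stdio.h>

#define POOL_SIZE	1000
#define SENDER_QUOTA	(POOL_SIZE / 4)	/* per-sender share, illustrative */

struct sender {
	size_t in_flight;	/* bytes this sender holds in the dst pool */
};

static int charge(struct sender *s, size_t size)
{
	if (s->in_flight + size > SENDER_QUOTA)
		return -EXFULL;	/* only the offending sender is throttled */
	s->in_flight += size;
	return 0;
}

int main(void)
{
	struct sender flooder = { 0 }, third_party = { 0 };

	/* The flooder exhausts its own share... */
	while (charge(&flooder, 100) == 0)
		;

	/* ...but a third party's messages still get through. */
	if (charge(&third_party, 100) == 0)
		printf("third party unaffected\n");
	return 0;
}
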
> All this is explained in detail in kdbus.pool(7), but please let us
> know if anything there is unclear.
>

This is neat, but it sounds like it could add significant latency
under even mild memory pressure.

Whose memcg does the pool use? If it's the receiver's, and if the
receiver can configure its own memcg, then it seems that even a single
receiver could cause the sender to block for an unbounded amount of
time.

(And yes, I really hope that some day the cgroupns issues get resolved
and some programs really will be able to create their own cgroups,
even on systemd-based systems running the systemd-blessed
configuration.)

--Andy