Re: [PATCH v2 1/3] unix: fix use-after-free in unix_dgram_poll()

From: Jason Baron
Date: Mon Oct 19 2015 - 11:07:47 EST


On 10/18/2015 04:58 PM, Rainer Weikusat wrote:

[...]

>
> The idea behind 'the wait queue' (insofar I'm aware of it) is that it
> will be used as list of threads who need to be notified when the
> associated event occurs. Since you seem to argue that the run-of-the-mill
> algorithm is too slow for this particular case, is there anything to
> back this up?
>

Generally, poll() routines add to a wait queue only once, at the
beginning, and all subsequent calls to poll() simply check the wakeup
conditions. Here, you are proposing to add to and remove from the wait
queue on each invocation of poll(). My initial patch therefore continued
in the usual pattern and only added once, on registration or connect().
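
For reference, the usual shape is roughly this (a generic sketch, not
code from either patch; the readable-condition check is just
illustrative):

static unsigned int some_poll(struct file *file, struct socket *sock,
                              poll_table *wait)
{
        struct sock *sk = sock->sk;
        unsigned int mask = 0;

        /*
         * This enqueues only while the poll_table's queueing callback
         * is still set -- i.e. on the first pass for poll()/select(),
         * or at EPOLL_CTL_ADD time for epoll(). Later calls fall
         * straight through to the condition checks below.
         */
        sock_poll_wait(file, sk_sleep(sk), wait);

        if (!skb_queue_empty(&sk->sk_receive_queue))
                mask |= POLLIN | POLLRDNORM;

        return mask;
}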

However, I do think this is a special case, since the registration is
on a shared wait queue, and thus a long list of registered waiters is
going to affect all waiters. So I am fine with doing the add/removes in
the poll() routine, and I agree that the patch below is more compact
than what I initially posted.
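
In other words, something along these lines in unix_dgram_poll() (a
hypothetical sketch of the poll-side logic only; unix_peer_wake_disconnect()
is my name for the inverse of the connect helper):

static unsigned int unix_dgram_poll(struct file *file, struct socket *sock,
                                    poll_table *wait)
{
        struct sock *sk = sock->sk, *other;
        unsigned int mask = 0;
        int writable = 0;

        sock_poll_wait(file, sk_sleep(sk), wait);

        /* ... usual readable/error mask computation ... */

        other = unix_peer_get(sk);
        if (other) {
                if (unix_peer(other) != sk && unix_recvq_full(other)) {
                        /* Peer's queue is full: hook onto its wait
                         * queue so we get woken when it drains. */
                        unix_peer_wake_connect(sk, other);
                } else {
                        /* Writable again: make sure we are unhooked. */
                        unix_peer_wake_disconnect(sk, other);
                        writable = 1;
                }
                sock_put(other);
        } else
                writable = 1;

        if (writable)
                mask |= POLLOUT | POLLWRNORM | POLLWRBAND;

        return mask;
}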

A couple of notes on your patch:

1)

In the unix_peer_wake_relay() function, 'sk_wq' is an __rcu pointer and
thus requires proper dereferencing. Something like:

static int unix_peer_wake_relay(wait_queue_t *wait, unsigned mode,
                                int flags, void *key)
{
        struct unix_sock *u;
        struct socket_wq *wq;

        u = container_of(wait, struct unix_sock, wait);

        /* sk_wq can be freed via RCU once the socket goes away. */
        rcu_read_lock();
        wq = rcu_dereference(u->sk.sk_wq);
        if (wq_has_sleeper(wq))
                wake_up_interruptible_sync_poll(&wq->wait, key);
        rcu_read_unlock();

        return 0;
}
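
(The RCU read section matters because the socket_wq is freed via RCU
when the socket is released; wq_has_sleeper() also supplies the memory
barrier needed to pair with the sleeping side.)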


2)

For the case of epoll() in edge-triggered mode, we need to ensure that
when unix_dgram_sendmsg() returns -EAGAIN because unix_recvq_full() is
true, a unix_peer_wake_connect() call has been made to guarantee a
wakeup. Otherwise, an edge-triggered waiter is potentially going to
hang there.
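
Roughly like this in the unix_dgram_sendmsg() full-queue path (sketch
only; the surrounding locking is elided, and the recheck/disconnect
details are my assumptions, not code from the patch):

        if (unix_peer(other) != sk && unix_recvq_full(other)) {
                if (!timeo) {
                        /* Hook onto the peer's wait queue before
                         * bailing out, so an edge-triggered epoll()
                         * waiter is guaranteed a later POLLOUT. */
                        unix_peer_wake_connect(sk, other);

                        /* Recheck: the queue may have drained between
                         * the test above and the registration. */
                        if (unix_recvq_full(other)) {
                                err = -EAGAIN;
                                goto out_unlock;
                        }
                        unix_peer_wake_disconnect(sk, other);
                } else {
                        timeo = unix_wait_for_peer(other, timeo);
                        /* ... restart checks as in the existing code ... */
                }
        }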

With these changes (or tell me why they are not needed), I'm happy to
ack this patch.

Thanks,

-Jason