Re: [PATCH v2 net-next] net: introduce SO_RCVBUFAUTO to let the rcv_buf tune automatically

From: Jason Xing
Date: Wed Feb 16 2022 - 02:07:58 EST


On Wed, Feb 16, 2022 at 2:25 PM Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
>
> On Tue, Feb 15, 2022 at 9:03 PM <kerneljasonxing@xxxxxxxxx> wrote:
> >
> > From: Jason Xing <xingwanli@xxxxxxxxxxxx>
> >
> > Normally, users don't care about the logic behind the kernel when they
> > set the receive buffer via setsockopt. However, once a new receive
> > buffer value is set, even if it is not smaller than the initial value
> > (sysctl_tcp_rmem[1], as used in tcp_rcv_space_adjust()), the server's
> > wscale will shrink and the bandwidth will degrade, which is not what
> > was intended.
>
> Quite confusing changelog, honestly.
>
> Users of SO_RCVBUF specifically told the kernel: I want to use _this_
> buffer size; I do not want the kernel to decide for me.
>
> Also, I think your changelog does not really explain that _if_ you set
> SO_RCVBUF to a small value before connect(), or in general before the
> 3WHS completes, the chosen wscale will be small, and this won't allow a
> future 10x increase of the effective RWIN.
>

Yes, you hit the point exactly.
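
For reference, here is a minimal userspace sketch of the pattern you
describe (illustrative only, not part of the patch; the buffer value is
taken from the sysctls quoted below):

/* Setting a small SO_RCVBUF before connect() makes the kernel pick a
 * small window scale during the 3WHS, and that choice cannot be
 * revisited later even if the buffer is enlarged afterwards.
 */
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_with_small_rcvbuf(const struct sockaddr_in *srv)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int rcvbuf = 425984;	/* matches sysctl_tcp_rmem[1] below */

	if (fd < 0)
		return -1;

	/* Sets SOCK_RCVBUF_LOCK in sk_userlocks, so window_clamp is
	 * limited to the locked buffer and tcp_rcv_space_adjust() will
	 * no longer grow it.
	 */
	setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

	/* rcv_wscale is chosen here, based on the clamped window */
	if (connect(fd, (const struct sockaddr *)srv, sizeof(*srv)) < 0) {
		close(fd);
		return -1;
	}

	return fd;
}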

>
> >
> > For now, introducing a new socket option that lets the receive buffer
> > grow automatically, no matter what value is set, solves the bandwidth
> > issue without breaking applications that already set SO_RCVBUF.
> >
> > Here are some numbers:
> > $ sysctl -a | grep rmem
> > net.core.rmem_default = 212992
> > net.core.rmem_max = 40880000
> > net.ipv4.tcp_rmem = 4096 425984 40880000
> >
> > Case 1
> > on the server side
> > # iperf -s -p 5201
> > on the client side
> > # iperf -c [server ip] -p 5201
> > It turns out that the bandwidth is 9.34 Gbits/sec and the server-side
> > wscale is 10, which is good.
> >
> > Case 2
> > on the server side
> > # iperf -s -p 5201 -w 425984
> > on the client side
> > # iperf -c [server ip] -p 5201
> > It turns out that the bandwidth drops to 2.73 Gbits/sec and the wscale
> > is 2, even though the receive buffer was set to exactly its initial
> > value.
> >
> > After this patch is applied, the bandwidth of case 2 is recovered to
> > 9.34 Gbits/sec as expected at the cost of consuming more memory per
> > socket.
>
> How does your patch allow wscale to increase after flow is established ?
>
> I would remove these experimental numbers from the changelog; they look
> quite wrong, maybe copy/pasted from your prior version.
>

My fault. I should have removed this part.

> Instead I would describe why an application might want to clear the
> 'receive buffer size is locked' socket attribute.
>
> >
> > Signed-off-by: Jason Xing <xingwanli@xxxxxxxxxxxx>
> > --
> > v2: suggested by Eric
> > - introduce new socket option instead of breaking the logic in SO_RCVBUF
> > - Adjust the title and description of this patch
> > link: https://lore.kernel.org/lkml/CANn89iL8vOUOH9bZaiA-cKcms+PotuKCxv7LpVx3RF0dDDSnmg@xxxxxxxxxxxxxx/
> > ---
> >
>
> I think adding another parallel SO_RCVBUF option is not good. It adds
> confusion (and net/core/filter.c has been left unchanged).

I'll update net/core/filter.c as well in the next submission.

>
> Also we want CRIU to work correctly.
>
> So if you have a SO_XXXX setsockopt() call, you also need to provide
> getsockopt() implementation.
>
> I would suggest an option to clear or set SOCK_RCVBUF_LOCK, and
> getsockopt() would return whether the bit is currently set.
>
> Something clearly describing the intent, like SO_RCVBUF_LOCK maybe.

I just found out that a similar patch (commit 04190bf89) was merged
about three months ago.
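
If I read that commit correctly, userspace can already clear the lock
bit with the SO_BUF_LOCK option it added. A rough sketch; the fallback
defines are only for uapi headers that predate the commit:

#include <sys/socket.h>

#ifndef SO_BUF_LOCK
#define SO_BUF_LOCK		72	/* include/uapi/asm-generic/socket.h */
#endif
#ifndef SOCK_RCVBUF_LOCK
#define SOCK_RCVBUF_LOCK	2	/* bit in sk_userlocks */
#endif

/* Clear the "receive buffer size is locked" bit so that
 * tcp_rcv_space_adjust() may grow the buffer again, while leaving a
 * possible sndbuf lock untouched.
 */
static int unlock_rcvbuf(int fd)
{
	int locks;
	socklen_t len = sizeof(locks);

	if (getsockopt(fd, SOL_SOCKET, SO_BUF_LOCK, &locks, &len) < 0)
		return -1;

	locks &= ~SOCK_RCVBUF_LOCK;
	return setsockopt(fd, SOL_SOCKET, SO_BUF_LOCK, &locks, sizeof(locks));
}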

Is it still necessary to add another separate option to clear
SOCK_RCVBUF_LOCK explicitly?

Thanks,
Jason