Re: [PATCH net-next v4 00/18] net/smc: implement virtual ISM extension and loopback-ism

From: Niklas Schnelle
Date: Thu Oct 05 2023 - 11:47:20 EST


On Sun, 2023-09-24 at 23:16 +0800, Wen Gu wrote:
> Hi, all
>
> # Background
>
> SMC-D is currently used on IBM Z with the ISM function to optimize network
> interconnect for intra-CPC communication. Inspired by this, we try to make
> SMC-D available on non-s390 architectures through a software-simulated virtual
> ISM device, such as the loopback-ism device introduced here, to accelerate
> inter-process or inter-container communication within the same OS.
>
> # Design
>
> This patch set includes 4 parts:
>
> - Patch #1-#3: decouple the hard-coded ISM device from the SMC-D stack.
> - Patch #4-#8: implement the virtual ISM extension defined in SMCv2.1.
> - Patch #9-#13: implement the loopback-ism device.
> - Patch #14-#18: memory copy optimization for the loopback case.
>
> The loopback-ism device is designed as a kernel device that is not limited to
> a specific net namespace. Both ends of an inter-process connection (1/1' in the
> diagram below) or an inter-container connection (2/2' in the diagram below)
> will find during the CLC handshake that the peer shares the same loopback-ism
> device, and the loopback-ism device will then be chosen.
>
> Container 1 (ns1)                              Container 2 (ns2)
> +-----------------------------------------+    +-------------------------+
> | +-------+      +-------+     +-------+  |    |     +-------+           |
> | | App A |      | App B |     | App C |  |    |     | App D |<-+        |
> | +-------+      +---^---+     +-------+  |    |     +-------+  |(2')    |
> |     |127.0.0.1 (1')|             |192.168.0.11    192.168.0.12|        |
> |  (1)|   +--------+ | +--------+  |(2)   |    | +--------+   +--------+ |
> |     `-->|   lo   |-` |  eth0  |<-`      |    | |   lo   |   |  eth0  | |
> +---------+--|---^-+---+-----|--+---------+    +-+--------+---+-^------+-+
>              |   |           |                                  |
> Kernel       |   |           |                                  |
> +----+-------v---+-----------v----------------------------------+---+----+
> |    |                             TCP                           |    |
> |    |                                                              |    |
> |    +--------------------------------------------------------------+    |
> |                                                                        |
> |                            +--------------+                             |
> |                            | smc loopback |                             |
> +---------------------------+--------------+-----------------------------+
>
>
> The loopback-ism device allocates RMBs and sndbufs for each connection peer
> and 'moves' data from the sndbuf at one end to the RMB at the other end. Since
> communication occurs within the same kernel, the sndbuf can be mapped to the
> peer's RMB so that the data copy in the loopback-ism case can be avoided.
>
> Container 1 (ns1)                              Container 2 (ns2)
> +-----------------------------------------+    +-------------------------+
> |   +-------+    +-------+      +-------+ |    |        +-------+        |
> |   | App A |    | App B |      | App C | |    |        | App D |        |
> |   +-------+    +--^----+      +-------+ |    |        +---^---+        |
> |       |           |               |     |    |            |            |
> |   (1) |      (1') |           (2) |     |    |       (2') |            |
> |       |           |               |     |    |            |            |
> +-------|-----------|---------------|-----+    +------------|------------+
>         |           |               |                       |
> Kernel  |           |               |                       |
> +-------|-----------|---------------|-----------------------|------------+
> | +-----v-+     +-------+       +---v---+               +-------+        |
> | | snd A |-+   | RMB B |<--+   | snd C |-+          +->| RMB D |        |
> | +-------+ |   +-------+   |   +-------+ |          |  +-------+        |
> | +-------+ |   +-------+   |   +-------+ |          |  +-------+        |
> | | RMB A | |   | snd B |   |   | RMB C | |          |  | snd D |        |
> | +-------+ |   +-------+   |   +-------+ |          |  +-------+        |
> |           |               +-------------v+         |                   |
> |           +-------------->| smc loopback |---------+                   |
> +---------------------------+--------------+-----------------------------+
>
> # Benchmark Test
>
> * Test environment:
> - VM with an 8-core Intel Xeon Platinum CPU at 2.50GHz and 16 GiB of memory.
> - SMC sndbuf/RMB size of 1MB.
>
> * Test objects:
> - TCP: runs on TCP loopback.
> - domain: runs on a UNIX domain socket.
> - SMC lo: runs on the SMC loopback device.
>
> 1. ipc-benchmark (see [1])
>
> - ./<foo> -c 1000000 -s 100
>
>                         TCP          SMC-lo
> Message rate (msg/s)    81539        151251 (+85.50%)
>
> 2. sockperf
>
> - serv: <smc_run> taskset -c <cpu> sockperf sr --tcp
> - clnt: <smc_run> taskset -c <cpu> sockperf { tp | pp } --tcp --msg-size={ 64000 for tp | 14 for pp } -i 127.0.0.1 -t 30
>
>                     TCP          SMC-lo
> Bandwidth (MBps)    5313.66      8270.51 (+55.65%)
> Latency (us)        5.806        3.207 (-44.76%)
>
> 3. nginx/wrk
>
> - serv: <smc_run> nginx
> - clnt: <smc_run> wrk -t 8 -c 1000 -d 30 http://127.0.0.1:80
>
>                  TCP          SMC-lo
> Requests/s       194641.79    258656.13 (+32.89%)
>
> 4. redis-benchmark
>
> - serv: <smc_run> redis-server
> - clnt: <smc_run> redis-benchmark -h 127.0.0.1 -q -t set,get -n 400000 -c 200 -d 1024
>
>                     TCP         SMC-lo
> GET (Requests/s)    85855.34    115640.35 (+34.69%)
> SET (Requests/s)    86337.15    118203.30 (+36.90%)
>
> [1] https://github.com/goldsborough/ipc-bench
>

Hi Wen Gu,

I've been trying out your series with iperf3, qperf, and uperf on
s390x. I'm using network namespaces with a ConnectX VF from the same
card in each namespace for the initial TCP/IP connection, i.e. the
handshake initially goes out to a real NIC even if that NIC can switch
internally. All of these look great for streaming workloads, both in
terms of performance and stability. With a Connect-Request-Response
workload and uperf, however, I've run into issues. The test
configuration I use is as follows:
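
The namespace plumbing on each side looks roughly like the following
sketch; the VF name and the address are placeholders rather than my
exact setup, and the server namespace is prepared analogously with the
second VF:

# ip netns add client
# ip link set dev vf_client netns client
# ip -n client addr add 192.0.2.1/24 dev vf_client
# ip -n client link set dev vf_client up
# ip -n client link set dev lo up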

Client Command:

# host=$ip_server ip netns exec client smc_run uperf -m tcp_crr.xml

Server Command:

# ip netns exec server smc_run uperf -s &> /dev/null

Uperf tcp_crr.xml:

<?xml version="1.0"?>
<profile name="TCP_CRR">
  <group nthreads="12">
    <transaction duration="120">
      <flowop type="connect" options="remotehost=$host protocol=tcp" />
      <flowop type="write" options="size=200"/>
      <flowop type="read" options="size=1000"/>
      <flowop type="disconnect" />
    </transaction>
  </group>
</profile>
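
For comparison, the streaming tests that do work fine are started the
same way, e.g. roughly:

# ip netns exec server smc_run iperf3 -s
# ip netns exec client smc_run iperf3 -c $ip_server -t 60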

The workload first runs fine but then, after about 4 GB of data has
been transferred, it fails with "Connection refused" and "Connection
reset by peer" errors. The failure is not permanent, however: after
restarting both the uperf server and client, the streaming workloads
run fine again. So I suspect something gets stuck in either the client
or the server sockets. The same workload of course runs fine with
TCP/IP.
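
For what it's worth, the SMC socket state after the failure can be
inspected from inside the namespaces with smcss from smc-tools,
roughly:

# ip netns exec server smcss
# ip netns exec client smcss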

Thanks,
Niklas