Re: EEVDF and NUMA balancing

From: Julia Lawall
Date: Sat Nov 11 2023 - 07:57:06 EST


A small update.

Attached are graphs of the running times of 50 runs, including 6.5 and 6.6, and graphs showing the events in slow runs with 6.5 and 6.6.

NUMA balancing is shown as dark green lines connecting the source and the destination. Yellow lines indicate load balancing within a socket. The cores are renumbered so that adjacent cores are on the same socket.

There is actually more NUMA balancing in these runs in 6.5 than in 6.6. In general, I don't see a correlation between the amount of NUMA balancing and the running time. My impression is rather that the NUMA balancing leads to some socket becoming overloaded, and that subsequent attempts to resolve the overload continually fail.

The only hint of a solution that I have so far is that the timeslice length may be related, or rather the fact that all of the timeslices have length 1 tick. In runtimes.png I also show what happens if I take 6.5 and replace the return value of sched_slice by 1. The results are not exactly the same as 6.6 (green vs pink), but they do end up in the same place. And it may be related not to the fact that the time slices are all the same, but to the fact that they are all short, because making sched_slice return 8 gives the same trend, but even worse.
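For concreteness, the hack on 6.5 could look something like the sketch below. This is my reconstruction, not the actual patch: in 6.5, sched_slice() lives in kernel/sched/fair.c and normally computes a slice in nanoseconds from the entity weights; here the computed value is simply discarded.

```c
/* kernel/sched/fair.c (6.5), test hack only: throw away the computed
 * slice and return a constant, so every slice collapses to the minimum
 * enforced at tick granularity.  The "return 8" variant mentioned
 * above is the same change with a different constant. */
static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	return 1;
}
```

Note that in 6.6 this function no longer exists, since EEVDF replaced the CFS slice computation.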

julia

Attachment: runtimes.png
Description: PNG image

Attachment: ua.C.x_yeti-3_6.5.0_performance_2_ev.pdf
Description: Adobe PDF document

Attachment: ua.C.x_yeti-3_6.6.0_performance_10_ev.pdf
Description: Adobe PDF document