[PATCH net-next 14/24] net: Add a lock which is held during the redirect process.

From: Sebastian Andrzej Siewior
Date: Fri Dec 15 2023 - 12:14:24 EST


The XDP redirect process is two-staged:
- bpf_prog_run_xdp() is invoked to run an eBPF program which inspects the
  packet and makes decisions. While doing that, the per-CPU variable
  bpf_redirect_info is used.

- Afterwards xdp_do_redirect() is invoked. It accesses bpf_redirect_info
  and may also access other per-CPU variables such as xskmap_flush_list.

At the very end of the NAPI callback, xdp_do_flush() is invoked. It does
not access bpf_redirect_info but it touches the individual per-CPU flush
lists.
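
For illustration, the resulting pattern in a driver's NAPI poll routine
looks roughly like the sketch below (simplified and not taken from any
particular driver; netdev, xdp and xdp_prog stand in for the driver's
local variables):

	act = bpf_prog_run_xdp(xdp_prog, &xdp);  /* stage 1: may fill bpf_redirect_info */
	switch (act) {
	case XDP_REDIRECT:
		/* stage 2: consumes bpf_redirect_info, may touch xskmap_flush_list */
		if (xdp_do_redirect(netdev, &xdp, xdp_prog) < 0)
			act = XDP_ABORTED;
		break;
	default:
		break;
	}

	/* at the very end of the NAPI callback */
	xdp_do_flush();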

The per-CPU variables are only used in the NAPI callback, hence disabling
bottom halves is the only protection mechanism. Users from preemptible
context (like cpu_map_kthread_run()) explicitly disable bottom halves
for protection reasons.
Once local_bh_disable() on PREEMPT_RT no longer provides that implicit
locking, this data structure requires explicit locking.
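
A user running from preemptible context therefore has to bracket the
whole sequence explicitly, along the lines of the following sketch (this
is only the shape of it, not the actual cpu_map code):

	local_bh_disable();
	act = bpf_prog_run_xdp(prog, &xdp);       /* writes bpf_redirect_info */
	if (act == XDP_REDIRECT)
		xdp_do_redirect(dev, &xdp, prog); /* reads bpf_redirect_info */
	xdp_do_flush();
	local_bh_enable();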

Introduce redirect_lock, a lock to be acquired while these per-CPU
variables are accessed. Usually the lock would be part of the per-CPU
variable it protects, but since a few different per-CPU variables need
to be protected at the same time (and some of them depend on a CONFIG
setting), a new per-CPU data structure with the variable bpf_run_lock is
used instead.

The lock is a nested-BH lock, meaning that on non-PREEMPT_RT kernels it
merely results in a lockdep check ensuring that bottom halves are
disabled. On PREEMPT_RT kernels it will provide the needed
synchronisation once local_bh_disable() no longer acts as a per-CPU lock.

This patch introduces the bpf_run_lock.redirect_lock lock. It will be
used by drivers in the following patches.
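
In those driver patches the lock is acquired around the program run and
the verdict handling, e.g. via the nested-BH lock guard added earlier in
this series; roughly (a sketch only, the exact placement differs per
driver):

	scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
		act = bpf_prog_run_xdp(xdp_prog, &xdp);
		switch (act) {
		case XDP_REDIRECT:
			if (xdp_do_redirect(netdev, &xdp, xdp_prog) < 0)
				act = XDP_ABORTED;
			break;
		default:
			break;
		}
	}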

A follow-up step could be to keep bpf_prog_run_xdp() and the
XDP_REDIRECT switch case (with xdp_do_redirect()) close together. That
would allow a single scoped_guard() macro to cover just the two
instances that require locking instead of the whole switch case.
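
Such a restructuring might then look roughly like this (again only a
sketch, assuming the local_lock_nested_bh guard from earlier in the
series):

	scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
		act = bpf_prog_run_xdp(xdp_prog, &xdp);
		if (act == XDP_REDIRECT &&
		    xdp_do_redirect(netdev, &xdp, xdp_prog) < 0)
			act = XDP_ABORTED;
	}
	/* XDP_TX, XDP_PASS, XDP_DROP, ... handled outside the lock */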

Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
Cc: Andrii Nakryiko <andrii@xxxxxxxxxx>
Cc: Hao Luo <haoluo@xxxxxxxxxx>
Cc: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
Cc: Jiri Olsa <jolsa@xxxxxxxxxx>
Cc: John Fastabend <john.fastabend@xxxxxxxxx>
Cc: KP Singh <kpsingh@xxxxxxxxxx>
Cc: Martin KaFai Lau <martin.lau@xxxxxxxxx>
Cc: Song Liu <song@xxxxxxxxxx>
Cc: Stanislav Fomichev <sdf@xxxxxxxxxx>
Cc: Yonghong Song <yonghong.song@xxxxxxxxx>
Cc: bpf@xxxxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
---
 include/linux/bpf.h | 6 ++++++
 net/core/filter.c   | 5 +++++
 2 files changed, 11 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cff5bb08820ec..6912b85209b12 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -291,6 +291,12 @@ struct bpf_map {
 	s64 __percpu *elem_count;
 };
 
+struct bpf_run_lock {
+	local_lock_t redirect_lock;
+};
+
+DECLARE_PER_CPU(struct bpf_run_lock, bpf_run_lock);
+
 static inline const char *btf_field_type_name(enum btf_field_type type)
 {
 	switch (type) {
diff --git a/net/core/filter.c b/net/core/filter.c
index 896aa3fa699f9..7c9653734fb60 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -89,6 +89,11 @@
 static const struct bpf_func_proto *
 bpf_sk_base_func_proto(enum bpf_func_id func_id);
 
+DEFINE_PER_CPU(struct bpf_run_lock, bpf_run_lock) = {
+	.redirect_lock = INIT_LOCAL_LOCK(redirect_lock),
+};
+EXPORT_PER_CPU_SYMBOL_GPL(bpf_run_lock);
+
 int copy_bpf_fprog_from_user(struct sock_fprog *dst, sockptr_t src, int len)
 {
 	if (in_compat_syscall()) {
--
2.43.0