Re: [PATCH v2] LoongArch: Add jump-label implementation

From: Youling Tang
Date: Wed May 10 2023 - 21:32:47 EST


Hi, Peter

On 05/10/2023 05:27 PM, Peter Zijlstra wrote:
On Wed, May 10, 2023 at 05:16:46PM +0800, Youling Tang wrote:
Add jump-label implementation based on the ARM64 version.

Signed-off-by: Youling Tang <tangyouling@xxxxxxxxxxx>

diff --git a/arch/loongarch/include/asm/jump_label.h b/arch/loongarch/include/asm/jump_label.h
new file mode 100644
index 000000000000..2f9fdec256c5
--- /dev/null
+++ b/arch/loongarch/include/asm/jump_label.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2023 Loongson Technology Corporation Limited
+ *
+ * Based on arch/arm64/include/asm/jump_label.h
+ */
+#ifndef __ASM_JUMP_LABEL_H
+#define __ASM_JUMP_LABEL_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+#define JUMP_LABEL_NOP_SIZE 4
+
+static __always_inline bool arch_static_branch(struct static_key * const key,
+					       const bool branch)
+{
+	asm_volatile_goto(
+		"1:	nop			\n\t"
+		"	.pushsection	__jump_table, \"aw\"	\n\t"
+		"	.align	3			\n\t"
+		"	.long	1b - ., %l[l_yes] - .	\n\t"
+		"	.quad	%0 - .			\n\t"
+		"	.popsection			\n\t"
+		: : "i"(&((char *)key)[branch]) : : l_yes);
+
+	return false;
+l_yes:
+	return true;
+}
+
+static __always_inline bool arch_static_branch_jump(struct static_key * const key,
+						     const bool branch)
+{
+	asm_volatile_goto(
+		"1:	b	%l[l_yes]	\n\t"
+		"	.pushsection	__jump_table, \"aw\"	\n\t"
+		"	.align	3			\n\t"
+		"	.long	1b - ., %l[l_yes] - .	\n\t"
+		"	.quad	%0 - .			\n\t"
+		"	.popsection			\n\t"
+		: : "i"(&((char *)key)[branch]) : : l_yes);
+
+	return false;
+l_yes:
+	return true;
+}

Seems simple enough; one change I did a while ago for the x86 version is
to put the __jump_table entry generation in a macro so it could be
shared between the (3 for x86) variants.

That looks better. I will define a JUMP_TABLE_ENTRY macro so that
arch_static_branch() and arch_static_branch_jump() can share it.
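
Maybe something along these lines (untested, just a first sketch of
what I have in mind; naming and formatting may still change):

#define JUMP_TABLE_ENTRY					\
	"	.pushsection	__jump_table, \"aw\"	\n\t"	\
	"	.align		3			\n\t"	\
	"	.long		1b - ., %l[l_yes] - .	\n\t"	\
	"	.quad		%0 - .			\n\t"	\
	"	.popsection				\n\t"

static __always_inline bool arch_static_branch(struct static_key * const key,
					       const bool branch)
{
	asm_volatile_goto(
		"1:	nop			\n\t"
		JUMP_TABLE_ENTRY
		: : "i"(&((char *)key)[branch]) : : l_yes);

	return false;
l_yes:
	return true;
}

arch_static_branch_jump() would then be identical except that the first
instruction is "1: b %l[l_yes]" instead of the nop.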

Thanks,
Youling.


Not saying you have to do that, just saying it's an option.

+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_JUMP_LABEL_H */

diff --git a/arch/loongarch/kernel/jump_label.c b/arch/loongarch/kernel/jump_label.c
new file mode 100644
index 000000000000..b06245955f7a
--- /dev/null
+++ b/arch/loongarch/kernel/jump_label.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023 Loongson Technology Corporation Limited
+ *
+ * Based on arch/arm64/kernel/jump_label.c
+ */
+#include <linux/jump_label.h>
+#include <linux/kernel.h>
+#include <asm/inst.h>
+
+void arch_jump_label_transform(struct jump_entry *entry,
+			       enum jump_label_type type)
+{
+	void *addr = (void *)jump_entry_code(entry);
+	u32 insn;
+
+	if (type == JUMP_LABEL_JMP)
+		insn = larch_insn_gen_b(jump_entry_code(entry), jump_entry_target(entry));
+	else
+		insn = larch_insn_gen_nop();
+
+	larch_insn_patch_text(addr, insn);
+}

This all implies LoongArch is fine with the nop<->b transition (much
like arm64 is), but I found no actual mention of which transitions are
valid for the architecture in your inst.c file -- perhaps you could put
a small comment there to enlighten the occasional reader who doesn't
have your arch manual memorized?
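
Even a couple of lines would do; something like the below (the wording
is only a suggestion -- please double-check it against the manual
before stating anything on the architecture's behalf):

	/*
	 * The text-poking code here only ever replaces one naturally
	 * aligned 4-byte instruction with another, and the only
	 * transition jump labels rely on is NOP <-> B.  Other
	 * transitions are not used and have not been verified safe.
	 */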


Anyway, as with most RISC implementations, it's short and sweet.

Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>