Re: [PATCH bpf-next 3/3] bpf, arm64: use bpf_jit_binary_pack_alloc

From: Puranjay Mohan
Date: Mon Jun 05 2023 - 16:20:46 EST


On Mon, Jun 5, 2023 at 10:13 PM Song Liu <song@xxxxxxxxxx> wrote:
>
> On Mon, Jun 5, 2023 at 11:34 AM Puranjay Mohan <puranjay12@xxxxxxxxx> wrote:
> >
> > Hi,
> >
> > On Mon, Jun 5, 2023 at 7:05 PM Song Liu <song@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Jun 5, 2023 at 12:40 AM Puranjay Mohan <puranjay12@xxxxxxxxx> wrote:
> > > >
> > > > Use bpf_jit_binary_pack_alloc for memory management of JIT binaries in
> > > > ARM64 BPF JIT. The bpf_jit_binary_pack_alloc creates a pair of RW and RX
> > > > buffers. The JIT writes the program into the RW buffer. When the JIT is
> > > > done, the program is copied to the final ROX buffer
> > > > with bpf_jit_binary_pack_finalize.
> > > >
> > > > Implement bpf_arch_text_copy() and bpf_arch_text_invalidate() for ARM64
> > > > JIT as these functions are required by bpf_jit_binary_pack allocator.
> > > >
> > > > Signed-off-by: Puranjay Mohan <puranjay12@xxxxxxxxx>
> > > > ---
> > > > arch/arm64/net/bpf_jit_comp.c | 119 +++++++++++++++++++++++++++++-----
> > > > 1 file changed, 102 insertions(+), 17 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> > > > index 145b540ec34f..ee9414cadea8 100644
> > > > --- a/arch/arm64/net/bpf_jit_comp.c
> > > > +++ b/arch/arm64/net/bpf_jit_comp.c
> > > > @@ -76,6 +76,7 @@ struct jit_ctx {
> > > >  	int *offset;
> > > >  	int exentry_idx;
> > > >  	__le32 *image;
> > > > +	__le32 *ro_image;
> > >
> > > We are using:
> > > image vs. ro_image
> > > rw_header vs. header
> > > rw_image_ptr vs. image_ptr
> >
> > Will use "rw_image" and "image" in the next version.
> >
> > >
> > > Shall we be more consistent with rw_ or ro_ prefix?
> > >
> > > >  	u32 stack_size;
> > > >  	int fpb_offset;
> > > >  };
> > > > @@ -205,6 +206,20 @@ static void jit_fill_hole(void *area, unsigned int size)
> > > >  		*ptr++ = cpu_to_le32(AARCH64_BREAK_FAULT);
> > > >  }
> > > >
> > > > +int bpf_arch_text_invalidate(void *dst, size_t len)
> > > > +{
> > > > +	__le32 *ptr;
> > > > +	int ret;
> > > > +
> > > > +	for (ptr = dst; len >= sizeof(u32); len -= sizeof(u32)) {
> > > > +		ret = aarch64_insn_patch_text_nosync(ptr++, AARCH64_BREAK_FAULT);
> > >
> > > I think one aarch64_insn_patch_text_nosync() per 4 bytes is too much overhead.
> > > Shall we add a helper to do this in bigger patches?
> >
> > What would be the most efficient way to build this helper, given that arm64
> > doesn't have the __text_poke() API? Calling copy_to_kernel_nofault() in a loop
> > might not be the best way. One way would be to use __put_kernel_nofault() directly.
> >
> > Also, what should we call this helper? aarch64_insn_memset()?
>
> I just found aarch64_insn_patch_text_cb() also calls
> aarch64_insn_patch_text_nosync() in a loop. So it is probably OK as-is?

Okay, then we can go ahead with this.
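
For completeness, this is roughly how I expect the full function to look, i.e.
the per-instruction loop from the hunk above plus bailing out on the first
patching error (just a sketch of how I read the truncated hunk continuing):

int bpf_arch_text_invalidate(void *dst, size_t len)
{
	__le32 *ptr;
	int ret;

	/* Fill the range with AARCH64_BREAK_FAULT, one instruction at a time. */
	for (ptr = dst; len >= sizeof(u32); len -= sizeof(u32)) {
		ret = aarch64_insn_patch_text_nosync(ptr++, AARCH64_BREAK_FAULT);
		if (ret)
			return ret;
	}

	return 0;
}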

Another thing about the consistency of the rw_/ro_ prefixes:
ctx->image is used all over the JIT, so renaming it would require a lot of
churn. The naming convention I will follow is therefore "image" and "ro_image":
ctx->image stays untouched and ro_image is only used in a few places, e.g.:
- prog->bpf_func = (void *)ctx.image;
+ prog->bpf_func = (void *)ctx.ro_image;
etc.
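
With that convention, the tail of bpf_int_jit_compile() would end up looking
roughly like this (only a sketch to illustrate the naming; ro_header/header,
orig_prog, out_off and prog_size are placeholders, not the final code):

	/*
	 * Everything is emitted through ctx.image (the RW buffer); only the
	 * addresses handed out to the rest of the kernel switch over to the
	 * read-only mapping.
	 */
	if (bpf_jit_binary_pack_finalize(prog, ro_header, header)) {
		prog = orig_prog;
		goto out_off;
	}

	prog->bpf_func = (void *)ctx.ro_image;
	prog->jited = 1;
	prog->jited_len = prog_size;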

I will use this in the next version of the patch.

Thanks,
Puranjay