Re: [PATCH v3 05/13] epoll: offload polling to a work in case of epfd polled from userspace

From: Eric Wong
Date: Tue May 21 2019 - 03:53:45 EST


Roman Penyaev <rpenyaev@xxxxxxx> wrote:
> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> index 81da4571f1e0..9d3905c0afbf 100644
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -44,6 +44,7 @@
> #include <linux/seq_file.h>
> #include <linux/compat.h>
> #include <linux/rculist.h>
> +#include <linux/workqueue.h>
> #include <net/busy_poll.h>
>
> /*
> @@ -185,6 +186,9 @@ struct epitem {
>
> /* The structure that describe the interested events and the source fd */
> struct epoll_event event;
> +
> + /* Work for offloading event callback */
> + struct work_struct work;
> };
>
> /*

Can we avoid the size regression for existing epoll users?

> @@ -2547,12 +2601,6 @@ static int __init eventpoll_init(void)
> ep_nested_calls_init(&poll_safewake_ncalls);
> #endif
>
> - /*
> - * We can have many thousands of epitems, so prevent this from
> - * using an extra cache line on 64-bit (and smaller) CPUs
> - */
> - BUILD_BUG_ON(sizeof(void *) <= 8 && sizeof(struct epitem) > 128);
> -
> /* Allocates slab cache used to allocate "struct epitem" items */
> epi_cache = kmem_cache_create("eventpoll_epi", sizeof(struct epitem),
> 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL);

Perhaps a "struct uepitem" transparent union and a separate slab cache
could avoid that size increase for existing users.