Re: [PATCH v2 1/2] mm: kmem: fix a NULL pointer dereference in obj_stock_flush_required()

From: Yosry Ahmed
Date: Tue May 02 2023 - 16:16:03 EST


On Tue, May 2, 2023 at 9:09 AM Roman Gushchin <roman.gushchin@xxxxxxxxx> wrote:
>
> KCSAN found an issue in obj_stock_flush_required():
> stock->cached_objcg can be reset between the check and dereference:
>
> ==================================================================
> BUG: KCSAN: data-race in drain_all_stock / drain_obj_stock
>
> write to 0xffff888237c2a2f8 of 8 bytes by task 19625 on cpu 0:
> drain_obj_stock+0x408/0x4e0 mm/memcontrol.c:3306
> refill_obj_stock+0x9c/0x1e0 mm/memcontrol.c:3340
> obj_cgroup_uncharge+0xe/0x10 mm/memcontrol.c:3408
> memcg_slab_free_hook mm/slab.h:587 [inline]
> __cache_free mm/slab.c:3373 [inline]
> __do_kmem_cache_free mm/slab.c:3577 [inline]
> kmem_cache_free+0x105/0x280 mm/slab.c:3602
> __d_free fs/dcache.c:298 [inline]
> dentry_free fs/dcache.c:375 [inline]
> __dentry_kill+0x422/0x4a0 fs/dcache.c:621
> dentry_kill+0x8d/0x1e0
> dput+0x118/0x1f0 fs/dcache.c:913
> __fput+0x3bf/0x570 fs/file_table.c:329
> ____fput+0x15/0x20 fs/file_table.c:349
> task_work_run+0x123/0x160 kernel/task_work.c:179
> resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
> exit_to_user_mode_loop+0xcf/0xe0 kernel/entry/common.c:171
> exit_to_user_mode_prepare+0x6a/0xa0 kernel/entry/common.c:203
> __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
> syscall_exit_to_user_mode+0x26/0x140 kernel/entry/common.c:296
> do_syscall_64+0x4d/0xc0 arch/x86/entry/common.c:86
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> read to 0xffff888237c2a2f8 of 8 bytes by task 19632 on cpu 1:
> obj_stock_flush_required mm/memcontrol.c:3319 [inline]
> drain_all_stock+0x174/0x2a0 mm/memcontrol.c:2361
> try_charge_memcg+0x6d0/0xd10 mm/memcontrol.c:2703
> try_charge mm/memcontrol.c:2837 [inline]
> mem_cgroup_charge_skmem+0x51/0x140 mm/memcontrol.c:7290
> sock_reserve_memory+0xb1/0x390 net/core/sock.c:1025
> sk_setsockopt+0x800/0x1e70 net/core/sock.c:1525
> udp_lib_setsockopt+0x99/0x6c0 net/ipv4/udp.c:2692
> udp_setsockopt+0x73/0xa0 net/ipv4/udp.c:2817
> sock_common_setsockopt+0x61/0x70 net/core/sock.c:3668
> __sys_setsockopt+0x1c3/0x230 net/socket.c:2271
> __do_sys_setsockopt net/socket.c:2282 [inline]
> __se_sys_setsockopt net/socket.c:2279 [inline]
> __x64_sys_setsockopt+0x66/0x80 net/socket.c:2279
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> value changed: 0xffff8881382d52c0 -> 0xffff888138893740
>
> Reported by Kernel Concurrency Sanitizer on:
> CPU: 1 PID: 19632 Comm: syz-executor.0 Not tainted 6.3.0-rc2-syzkaller-00387-g534293368afa #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
>
> Fix it by using READ_ONCE()/WRITE_ONCE() for all accesses to
> stock->cached_objcg.
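
For anyone following along, this is the classic check-then-dereference
pattern. A minimal sketch of the interleaving (simplified, not the
exact code):

        /* drain_all_stock() -> obj_stock_flush_required(), lockless: */
        if (stock->cached_objcg) {      /* 1st load observes non-NULL */
                /*
                 * Meanwhile drain_obj_stock() on the stock's CPU does:
                 *      stock->cached_objcg = NULL;
                 */
                memcg = obj_cgroup_memcg(stock->cached_objcg);
                                        /* 2nd load can observe NULL */
        }

Without an annotation the compiler may emit a separate load for each
access, so reading the pointer once into a local with READ_ONCE(), as
the patch does, is what actually closes the window.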

I believe all read accesses other than the one in
obj_stock_flush_required() are done under the lock, so READ_ONCE()
wouldn't be needed for them AFAICT. Having READ_ONCE() only around the
racy read can be useful to document it and differentiate it from the
lock-protected ones.

With that said, it's also inconvenient to keep track, going forward,
of which read sites are racy, and it may be simpler to just annotate
all readers with READ_ONCE().

I am not sure which approach is better, just thinking out loud.
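
Concretely, the two options would look something like this (sketch
only, with the surrounding code elided):

        /* (a) annotate only the lockless read site: */

        /* in drain_obj_stock(), with stock_lock held: */
        struct obj_cgroup *old = stock->cached_objcg;   /* plain load */

        /* in obj_stock_flush_required(), lockless: */
        struct obj_cgroup *objcg = READ_ONCE(stock->cached_objcg);

        /* (b) annotate every reader, as this patch does: */
        struct obj_cgroup *old = READ_ONCE(stock->cached_objcg);

(a) makes the racy read stand out when reading the code; (b) doesn't
require future changes to reason about which read sites are racy.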

>
> Fixes: bf4f059954dc ("mm: memcg/slab: obj_cgroup API")
> Reported-by: syzbot+774c29891415ab0fd29d@xxxxxxxxxxxxxxxxxxxxxxxxx
> Reported-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
> Link: https://lore.kernel.org/linux-mm/CACT4Y+ZfucZhM60YPphWiCLJr6+SGFhT+jjm8k1P-a_8Kkxsjg@xxxxxxxxxxxxxx/T/#t
> Signed-off-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>

With the above said, I don't feel strongly either way; the patch looks
good AFAICT:
Reviewed-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>

> Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
> ---
> mm/memcontrol.c | 19 ++++++++++---------
> 1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 4b27e245a055..c823c35c2ed4 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3208,12 +3208,12 @@ void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> * accumulating over a page of vmstat data or when pgdat or idx
> * changes.
> */
> - if (stock->cached_objcg != objcg) {
> + if (READ_ONCE(stock->cached_objcg) != objcg) {
> old = drain_obj_stock(stock);
> obj_cgroup_get(objcg);
> stock->nr_bytes = atomic_read(&objcg->nr_charged_bytes)
> ? atomic_xchg(&objcg->nr_charged_bytes, 0) : 0;
> - stock->cached_objcg = objcg;
> + WRITE_ONCE(stock->cached_objcg, objcg);
> stock->cached_pgdat = pgdat;
> } else if (stock->cached_pgdat != pgdat) {
> /* Flush the existing cached vmstat data */
> @@ -3267,7 +3267,7 @@ static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
> local_lock_irqsave(&memcg_stock.stock_lock, flags);
>
> stock = this_cpu_ptr(&memcg_stock);
> - if (objcg == stock->cached_objcg && stock->nr_bytes >= nr_bytes) {
> + if (objcg == READ_ONCE(stock->cached_objcg) && stock->nr_bytes >= nr_bytes) {
> stock->nr_bytes -= nr_bytes;
> ret = true;
> }
> @@ -3279,7 +3279,7 @@ static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
>
> static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock)
> {
> - struct obj_cgroup *old = stock->cached_objcg;
> + struct obj_cgroup *old = READ_ONCE(stock->cached_objcg);
>
> if (!old)
> return NULL;
> @@ -3332,7 +3332,7 @@ static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock)
> stock->cached_pgdat = NULL;
> }
>
> - stock->cached_objcg = NULL;
> + WRITE_ONCE(stock->cached_objcg, NULL);
> /*
> * The `old' objects need to be released by the caller via
> * obj_cgroup_put() outside of memcg_stock_pcp::stock_lock.
> @@ -3343,10 +3343,11 @@ static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock)
> static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
> struct mem_cgroup *root_memcg)
> {
> + struct obj_cgroup *objcg = READ_ONCE(stock->cached_objcg);
> struct mem_cgroup *memcg;
>
> - if (stock->cached_objcg) {
> - memcg = obj_cgroup_memcg(stock->cached_objcg);
> + if (objcg) {
> + memcg = obj_cgroup_memcg(objcg);
> if (memcg && mem_cgroup_is_descendant(memcg, root_memcg))
> return true;
> }
> @@ -3365,10 +3366,10 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
> local_lock_irqsave(&memcg_stock.stock_lock, flags);
>
> stock = this_cpu_ptr(&memcg_stock);
> - if (stock->cached_objcg != objcg) { /* reset if necessary */
> + if (READ_ONCE(stock->cached_objcg) != objcg) { /* reset if necessary */
> old = drain_obj_stock(stock);
> obj_cgroup_get(objcg);
> - stock->cached_objcg = objcg;
> + WRITE_ONCE(stock->cached_objcg, objcg);
> stock->nr_bytes = atomic_read(&objcg->nr_charged_bytes)
> ? atomic_xchg(&objcg->nr_charged_bytes, 0) : 0;
> allow_uncharge = true; /* Allow uncharge when objcg changes */
> --
> 2.40.1
>