Re: [Linux Kernel Bugs] KASAN: slab-use-after-free Read in cec_queue_msg_fh and 4 other crashes in the cec device (`cec_ioctl`)

From: Hans Verkuil
Date: Thu Jan 18 2024 - 02:52:22 EST


On 18/01/2024 05:25, Zhao, Zijie wrote:
> Dear Developers,
>
> We hope this email finds you well. We took a deeper look at the first crash, KASAN: slab-use-after-free Read in cec_queue_msg_fh. We believe the cause is that one thread takes the lock inside a `struct
> cec_fh` while another thread frees the whole structure:
>
> One thread takes the lock of the `fh` of type `struct cec_fh` first (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-adap.c#L219);
> another thread frees this `fh` without checking whether any other thread still holds the lock (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-api.c#L684);
> then KASAN is triggered when the first thread accesses `fh->msgs` (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-adap.c#L224).
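>
> To make the interleaving concrete, here is a minimal user-space model of the race (a sketch only:
> `struct fh_model`, `queue_msg()` and `release_fh()` are hypothetical stand-ins for the kernel's
> `struct cec_fh`, `cec_queue_msg_fh()` and `cec_release()`, not the kernel code itself):
>
>     /* build with: gcc -fsanitize=address -pthread uaf_model.c */
>     #include <pthread.h>
>     #include <stdlib.h>
>     #include <unistd.h>
>
>     struct fh_model {
>         pthread_mutex_t lock;
>         int msgs;                          /* stand-in for fh->msgs */
>     };
>
>     static struct fh_model *fh;
>
>     static void *queue_msg(void *arg)      /* models cec_queue_msg_fh() */
>     {
>         (void)arg;
>         pthread_mutex_lock(&fh->lock);
>         usleep(1000);                      /* widen the race window */
>         fh->msgs++;                        /* use-after-free if release_fh() already ran */
>         pthread_mutex_unlock(&fh->lock);
>         return NULL;
>     }
>
>     static void *release_fh(void *arg)     /* models cec_release() */
>     {
>         (void)arg;
>         free(fh);                          /* frees fh without checking whether
>                                               another thread still holds fh->lock */
>         return NULL;
>     }
>
>     int main(void)
>     {
>         fh = calloc(1, sizeof(*fh));
>         pthread_mutex_init(&fh->lock, NULL);
>         pthread_t a, b;
>         pthread_create(&a, NULL, queue_msg, NULL);
>         pthread_create(&b, NULL, release_fh, NULL);
>         pthread_join(a, NULL);
>         pthread_join(b, NULL);
>         return 0;
>     }
>
> Under AddressSanitizer this reports a heap-use-after-free in queue_msg(), the same shape as the
> KASAN report against `fh->msgs`.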
>
> While this particular reproducer seems harmless, we think the free might cause more serious problems when paired with threads running other functions that operate on `fh`, especially on kernels where
> KASAN is disabled. We also think `struct cec_fh` (https://elixir.bootlin.com/linux/v6.7-rc7/source/include/media/cec.h#L90) deserves attention, since many function pointers are reachable from it
> (e.g. `fh->adap->ops` points to https://elixir.bootlin.com/linux/v6.7-rc7/source/include/media/cec.h#L115 and `fh->adap->pin->ops` points to https://elixir.bootlin.com/linux/v6.7-rc7/source/include/media/cec-pin.h#L36).
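>
> For reference, the reachable chain looks roughly like this (an abbreviated paraphrase of the headers
> linked above; only the relevant fields are shown):
>
>     struct cec_fh {
>         struct cec_adapter *adap;  /* adap->ops      -> struct cec_adap_ops (adap_transmit, ...)
>                                       adap->pin->ops -> struct cec_pin_ops */
>         /* ... */
>         struct list_head msgs;     /* the list the use-after-free read touches */
>     };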
>
> Could you please take a look at the crashes, as you have more expertise in this code?

I've been looking at these on and off whenever I have some time. I found two issues and am
on the trail of a third. Once I have a patch for the third I was planning to post the patches
and ask you to retest. Several of the issues you found might share the same root cause
(esp. the locking issue), so it would be great if you could help verify that.

Regards,

Hans

>
> Thank you for your time!
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> *From:* Yang, Chenyuan <cy54@xxxxxxxxxxxx>
> *Sent:* Wednesday, December 27, 2023 8:33 PM
> *To:* linux-media@xxxxxxxxxxxxxxx <linux-media@xxxxxxxxxxxxxxx>; linux-kernel@xxxxxxxxxxxxxxx <linux-kernel@xxxxxxxxxxxxxxx>
> *Cc:* jani.nikula@xxxxxxxxx <jani.nikula@xxxxxxxxx>; hverkuil-cisco@xxxxxxxxx <hverkuil-cisco@xxxxxxxxx>; syzkaller@xxxxxxxxxxxxxxxx <syzkaller@xxxxxxxxxxxxxxxx>; mchehab@xxxxxxxxxx
> <mchehab@xxxxxxxxxx>; Zhao, Zijie <zijie4@xxxxxxxxxxxx>; Zhang, Lingming <lingming@xxxxxxxxxxxx>
> *Subject:* [Linux Kernel Bugs] KASAN: slab-use-after-free Read in cec_queue_msg_fh and 4 other crashes in the cec device (`cec_ioctl`)
>
> Hello,
>
> We encountered 5 different crashes in the cec device by fuzzing it with our generated syscall specification. Here are short descriptions of the 5 crashes; the related files are attached:
>
> 1. KASAN: slab-use-after-free Read in cec_queue_msg_fh (Reproducible)
>
> 2. WARNING: ODEBUG bug in cec_transmit_msg_fh
>
> 3. WARNING in cec_data_cancel
>
> 4. INFO: task hung in cec_claim_log_addrs (Reproducible)
>
> 5. general protection fault in cec_transmit_done_ts
>
> For “KASAN: slab-use-after-free Read in cec_queue_msg_fh”, we attached a syzkaller program to reproduce it. This crash is caused by `list_add_tail(&entry->list, &fh->msgs);`
> (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-adap.c#L224), which reads memory freed by `kfree(fh);`
> (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-api.c#L684). The reproducer is a syzkaller program, which can be executed by following this document:
> https://github.com/google/syzkaller/blob/master/docs/executing_syzkaller_programs.md
>
> For “WARNING: ODEBUG bug in cec_transmit_msg_fh”, we unfortunately failed to reproduce it, but we trigger this crash almost every time we fuzz the cec device alone. We attached the report
> and log for this bug. The warning fires because `kfree(data);` (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-adap.c#L930) frees an object that debugobjects
> still considers active.
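>
> For context, the generic pattern behind an ODEBUG "free of active object" warning looks like this
> (an illustrative kernel-style sketch, not the actual cec code; `struct my_data` and
> `broken_teardown()` are hypothetical):
>
>     #include <linux/slab.h>
>     #include <linux/workqueue.h>
>
>     struct my_data {
>         struct delayed_work work;   /* tracked by debugobjects once queued */
>     };
>
>     static void broken_teardown(struct my_data *d)
>     {
>         /* missing cancel_delayed_work_sync(&d->work) here */
>         kfree(d);                   /* ODEBUG warns: the embedded work
>                                        item is still active/pending */
>     }
>
> which suggests `data` is freed while its embedded work item is still queued.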
>
> For “WARNING in cec_data_cancel”, it is an internal warning in cec_data_cancel (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-adap.c#L365), which checks
> whether the transmit being cancelled is the current or a pending one. Unfortunately we don't have a reproducer for this bug either, but we attached the report and log.
>
> For “INFO: task hung in cec_claim_log_addrs”, the kernel hangs when the cec device calls `wait_for_completion(&adap->config_completion);`
> (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-adap.c#L1579) and the completion is never signalled. We have a reproducible C program for this.
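>
> The hang itself is the unbounded wait (abbreviated paraphrase of cec_claim_log_addrs(); the
> comments are ours):
>
>     init_completion(&adap->config_completion);
>     /* the configuration kthread is expected to call
>        complete(&adap->config_completion) when it finishes */
>     wait_for_completion(&adap->config_completion);  /* no timeout and not
>                                                        killable, so if complete()
>                                                        never runs, the task
>                                                        hangs forever */
>
> so the interesting question is why the configuration thread never signals the completion.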
>
> For “general protection fault in cec_transmit_done_ts”, the cec device tries dereferencing the non-canonical address 0xdffffc00000000e0, which is related to the invocation of
> `cec_transmit_attempt_done_ts` (https://elixir.bootlin.com/linux/v6.7-rc7/source/drivers/media/cec/core/cec-adap.c#L697). It seems that the `cec_adapter` pointer itself is invalid.
> We do not have a reproducer for this bug, but the log and report for it are attached.
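>
> A note on the faulting address: 0xdffffc00000000e0 looks like a KASAN shadow address. Assuming
> the default x86-64 generic-KASAN mapping,
>
>     shadow(addr) = (addr >> 3) + 0xdffffc0000000000
>     0xdffffc00000000e0 - 0xdffffc0000000000 = 0xe0,  and  0xe0 << 3 = 0x700
>
> i.e. the fault is the shadow check for an access at address 0x700, exactly what dereferencing a
> field at offset 0x700 through a NULL `struct cec_adapter` pointer would produce. That is
> consistent with the adapter pointer itself being invalid.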
>
> If you have any questions or require more information, please feel free to contact us.
>
> Best,
>
> Chenyuan
>