Re: [PATCH] vmci_host: use smp_load_acquire/smp_store_release when accessing vmci_host_dev->ct_type

From: Yewon Choi
Date: Thu Nov 23 2023 - 02:49:32 EST


On Wed, Nov 22, 2023 at 02:34:55PM +0000, Greg Kroah-Hartman wrote:
> On Wed, Nov 22, 2023 at 09:20:08PM +0900, Yewon Choi wrote:
> > In vmci_host.c, the lack of a memory barrier between accesses to
> > vmci_host_dev->ct_type and vmci_host_dev->context can lead to an
> > access to uninitialized data.
> >
> > One possible execution flow is as follows:
> >
> > CPU 1 (vmci_host_do_init_context)
> > =====
> > vmci_host_dev->context = vmci_ctx_create(...); // 1
> > vmci_host_dev->ct_type = VMCIOBJ_CONTEXT; // 2
> >
> > CPU 2 (vmci_host_poll)
> > =====
> > if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) { // 3
> >         context = vmci_host_dev->context; // 4
> >         poll_wait(..., &context->host_context.wait_queue, ...);
> >
> > While ct_type serves as a flag indicating that context has been
> > initialized, there is no memory barrier that prevents reordering
> > between 1, 2 and between 3, 4. So it is possible that 4 reads an
> > uninitialized vmci_host_dev->context, in which case a NULL
> > dereference occurs in poll_wait().
> >
> > To prevent this kind of reordering, we convert the plain accesses
> > to ct_type into smp_load_acquire() and smp_store_release().
> >
> > Signed-off-by: Yewon Choi <woni9911@xxxxxxxxx>
> > ---
> > drivers/misc/vmw_vmci/vmci_host.c | 40 ++++++++++++++++++-------------
> > 1 file changed, 23 insertions(+), 17 deletions(-)
> >
> > diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
> > index abe79f6fd2a7..e83b6e0fe55b 100644
> > --- a/drivers/misc/vmw_vmci/vmci_host.c
> > +++ b/drivers/misc/vmw_vmci/vmci_host.c
> > @@ -139,7 +139,7 @@ static int vmci_host_close(struct inode *inode, struct file *filp)
> >  {
> >  	struct vmci_host_dev *vmci_host_dev = filp->private_data;
> >  
> > -	if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) {
> > +	if (smp_load_acquire(&vmci_host_dev->ct_type) == VMCIOBJ_CONTEXT) {
>
> This is getting tricky. Why not use a normal lock to ensure that all
> is safe? close isn't on a "fast path", so this shouldn't be a speed
> issue, right?
>

I think using locks is orthogonal to correcting the memory ordering.

As you pointed out, vmci_host_close() is not performance-critical, but
the other functions that use vmci_host_dev->context are. If a lock is
needed, we would have to take it in all of them, and I cannot say which
approach is better (see the sketch below). Besides that, it seems to be
a separate issue.
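
For illustration only, the lock-based variant you suggest might look
roughly like this on the poll path. This is a hypothetical sketch: the
ct_lock field does not exist in the current driver, and the point is
just the extra locking the fast path would pick up:

	/* Hypothetical: serialize ct_type/context with a spinlock.
	 * ct_lock would be a new field in struct vmci_host_dev. */
	static __poll_t vmci_host_poll(struct file *filp, poll_table *wait)
	{
		struct vmci_host_dev *vmci_host_dev = filp->private_data;
		struct vmci_ctx *context = NULL;

		spin_lock(&vmci_host_dev->ct_lock);
		if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT)
			context = vmci_host_dev->context;
		spin_unlock(&vmci_host_dev->ct_lock);

		if (context)
			poll_wait(filp, &context->host_context.wait_queue, wait);

		/* ... event mask computation elided ... */
		return 0;
	}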

On the other hand, the current implementation does not guarantee the
required memory ordering, which can lead to incorrect behavior.
This patch fixes that by adding acquire/release primitives.
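
Concretely, the pairing introduced by the patch looks like this (a
sketch following the commit message; argument lists abbreviated):

	CPU 1 (vmci_host_do_init_context)
	=====
	vmci_host_dev->context = vmci_ctx_create(...); // 1
	/* Release: orders the store to ->context before the
	 * store to ->ct_type. */
	smp_store_release(&vmci_host_dev->ct_type, VMCIOBJ_CONTEXT); // 2

	CPU 2 (vmci_host_poll)
	=====
	/* Acquire: pairs with the release above, so observing
	 * VMCIOBJ_CONTEXT implies ->context is initialized. */
	if (smp_load_acquire(&vmci_host_dev->ct_type) == VMCIOBJ_CONTEXT) { // 3
		context = vmci_host_dev->context; // 4
		poll_wait(filp, &context->host_context.wait_queue, wait);
	}

With the release at 2 and the acquire at 3, whenever 3 observes
VMCIOBJ_CONTEXT, 4 is guaranteed to see the context stored at 1, and
no lock is taken on the fast paths.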

Thank you for your reply.

Regards,
Yewon Choi

> thanks,
>
> greg k-h