Re: [PATCH net-next v4 2/2] virtio-net: add cond_resched() to the command waiting loop

From: Jason Wang
Date: Thu Jul 27 2023 - 05:14:14 EST


On Thu, Jul 27, 2023 at 2:10 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>
> On Thu, Jul 27, 2023 at 02:03:59PM +0800, Jason Wang wrote:
> > On Wed, Jul 26, 2023 at 7:38 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Jul 26, 2023 at 09:55:37AM +0800, Jason Wang wrote:
> > > > On Tue, Jul 25, 2023 at 3:36 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Tue, Jul 25, 2023 at 11:07:40AM +0800, Jason Wang wrote:
> > > > > > On Mon, Jul 24, 2023 at 3:17 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Mon, Jul 24, 2023 at 02:52:05PM +0800, Jason Wang wrote:
> > > > > > > > On Sat, Jul 22, 2023 at 4:18 AM Maxime Coquelin
> > > > > > > > <maxime.coquelin@xxxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On 7/21/23 17:10, Michael S. Tsirkin wrote:
> > > > > > > > > > On Fri, Jul 21, 2023 at 04:58:04PM +0200, Maxime Coquelin wrote:
> > > > > > > > > >>
> > > > > > > > > >>
> > > > > > > > > >> On 7/21/23 16:45, Michael S. Tsirkin wrote:
> > > > > > > > > >>> On Fri, Jul 21, 2023 at 04:37:00PM +0200, Maxime Coquelin wrote:
> > > > > > > > > >>>>
> > > > > > > > > >>>>
> > > > > > > > > >>>> On 7/20/23 23:02, Michael S. Tsirkin wrote:
> > > > > > > > > >>>>> On Thu, Jul 20, 2023 at 01:26:20PM -0700, Shannon Nelson wrote:
> > > > > > > > > >>>>>> On 7/20/23 1:38 AM, Jason Wang wrote:
> > > > > > > > > >>>>>>>
> > > > > > > > > >>>>>>> Adding cond_resched() to the command waiting loop for better
> > > > > > > > > >>>>>>> cooperation with the scheduler. This gives the CPU a chance to
> > > > > > > > > >>>>>>> run other tasks (e.g. workqueues) instead of busy looping when
> > > > > > > > > >>>>>>> preemption is not allowed on a device whose CVQ might be slow.
> > > > > > > > > >>>>>>>
> > > > > > > > > >>>>>>> Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
> > > > > > > > > >>>>>>
> > > > > > > > > >>>>>> This still leaves hung processes, but at least it doesn't pin the CPU any
> > > > > > > > > >>>>>> more. Thanks.
> > > > > > > > > >>>>>> Reviewed-by: Shannon Nelson <shannon.nelson@xxxxxxx>
> > > > > > > > > >>>>>>
> > > > > > > > > >>>>>
> > > > > > > > > >>>>> I'd like to see a full solution
> > > > > > > > > >>>>> 1- block until interrupt
> > > > > > > >
> > > > > > > > I remember that in previous versions you were worried about the extra
> > > > > > > > MSI vector. (Maybe I was wrong.)
> > > > > > > >
> > > > > > > > > >>>>
> > > > > > > > > >>>> Would it make sense to also have a timeout?
> > > > > > > > > >>>> And when timeout expires, set FAILED bit in device status?
> > > > > > > > > >>>
> > > > > > > > > >>> virtio spec does not set any limits on the timing of vq
> > > > > > > > > >>> processing.
> > > > > > > > > >>
> > > > > > > > > >> Indeed, but I thought the driver could decide that the wait is too long for it.
> > > > > > > > > >>
> > > > > > > > > >> The issue is that we keep waiting with rtnl locked, which can quickly
> > > > > > > > > >> make the system unusable.
> > > > > > > > > >
> > > > > > > > > > if this is a problem we should find a way not to keep rtnl
> > > > > > > > > > locked indefinitely.
> > > > > > > >
> > > > > > > > Any ideas in this direction? Simply dropping rtnl during the busy loop
> > > > > > > > will result in a lot of races. This seems to require non-trivial
> > > > > > > > changes in the networking core.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > From the tests I have done, I think it is. With OVS, a reconfiguration
> > > > > > > > > is performed when the VDUSE device is added, and when an MLX5 device is
> > > > > > > > > in the same bridge, it ends up doing an ioctl() that tries to take the
> > > > > > > > > rtnl lock. In this configuration, it is not possible to kill OVS because
> > > > > > > > > it is stuck trying to acquire the rtnl lock for mlx5, which is held by
> > > > > > > > > virtio-net.
> > > > > > > >
> > > > > > > > Yeah, basically, any RTNL user would be blocked forever.
> > > > > > > >
> > > > > > > > And the infinite loop has other side effects, like blocking the freezer from working.
> > > > > > > >
> > > > > > > > To summarize, there are three issues
> > > > > > > >
> > > > > > > > 1) busy polling
> > > > > > > > 2) breaks freezer
> > > > > > > > 3) hold RTNL during the loop
> > > > > > > >
> > > > > > > > Solving 3) may also help with 2), e.g. some PM routines (wireguard or
> > > > > > > > even virtnet_restore() itself) may try to hold the lock.
> > > > > > >
> > > > > > > Yep. So my feeling currently is, the only real fix is to actually
> > > > > > > queue up the work in software.
> > > > > >
> > > > > > Do you mean something like:
> > > > > >
> > > > > > rtnl_lock();
> > > > > > queue up the work
> > > > > > rtnl_unlock();
> > > > > > return success;
> > > > > >
> > > > > > ?
> > > > >
> > > > > yes
> > > >
> > > > We will lose the error reporting; is that a real problem or not?
> > >
> > > Fundamental, isn't it? Maybe we want a per-device flag for async commands,
> > > and vduse will set it while hardware virtio won't.
> > > This way we only lose error reporting for vduse.
> >
> > This problem is not VDUSE-specific; DPUs/vDPA may suffer from it as
> > well. This might require more thought.
> >
> > Thanks
>
> They really shouldn't - any NIC that takes forever to
> program will create issues in the networking stack.

Unfortunately, it's not rare, as the device/CVQ could be implemented in
firmware or software.

> But if they do they can always set this flag too.

This may lead to false negatives and may confuse the management layer.

Maybe we can extend the networking core to allow some device-specific
configuration to be done under a device-specific lock, without holding
rtnl. For example, split set_channels into:

pre_set_channels
set_channels
post_set_channels

The device-specific part could be done in pre and post without holding the rtnl lock? Something like the sketch below.
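
(Everything in the following is made up just for illustration; the
split_channel_ops structure, its hooks and the do_set_channels() helper are
not existing API. It only shows where the slow device-specific work would
sit relative to rtnl.)

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

/* Hypothetical split of a channel reconfiguration path. */
struct split_channel_ops {
	/* Slow, device-specific preparation (e.g. waiting on a CVQ),
	 * done under a per-device lock only, with rtnl not held.
	 */
	int (*pre_set_channels)(struct net_device *dev,
				struct ethtool_channels *ch);
	/* The core networking update, still done under rtnl. */
	int (*set_channels)(struct net_device *dev,
			    struct ethtool_channels *ch);
	/* Device-specific commit/cleanup, again without rtnl. */
	void (*post_set_channels)(struct net_device *dev,
				  struct ethtool_channels *ch);
};

static int do_set_channels(struct net_device *dev,
			   struct ethtool_channels *ch,
			   const struct split_channel_ops *ops)
{
	int err;

	/* The device may block here for a long time without stalling
	 * every other rtnl user in the system.
	 */
	err = ops->pre_set_channels(dev, ch);
	if (err)
		return err;

	rtnl_lock();
	err = ops->set_channels(dev, ch);
	rtnl_unlock();

	if (!err)
		ops->post_set_channels(dev, ch);

	return err;
}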

Thanks



>
> > >
> > > > >
> > > > >
> > > > > > > It's mostly trivial to limit
> > > > > > > memory consumption; VIDs are the
> > > > > > > only case where it would make sense to have more than
> > > > > > > one command of a given type outstanding.
> > > > > >
> > > > > > And RX mode. So this implies we will fail any command if the previous
> > > > > > work is not finished.
> > > > >
> > > > > don't fail it, store it.
> > > >
> > > > Ok.
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > > > have a tree
> > > > > > > or a bitmap with vids to add/remove?
> > > > > >
> > > > > > Probably.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >>>>> 2- still handle surprise removal correctly by waking in that case
> > > > > > > >
> > > > > > > > This is basically what version 1 did?
> > > > > > > >
> > > > > > > > https://lore.kernel.org/lkml/6026e801-6fda-fee9-a69b-d06a80368621@xxxxxxxxxx/t/
> > > > > > > >
> > > > > > > > Thanks
> > > > > > >
> > > > > > > Yes - except the timeout part.
> > > > > > >
> > > > > > >
> > > > > > > > > >>>>>
> > > > > > > > > >>>>>
> > > > > > > > > >>>>>
> > > > > > > > > >>>>>>> ---
> > > > > > > > > >>>>>>> drivers/net/virtio_net.c | 4 +++-
> > > > > > > > > >>>>>>> 1 file changed, 3 insertions(+), 1 deletion(-)
> > > > > > > > > >>>>>>>
> > > > > > > > > >>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > > > >>>>>>> index 9f3b1d6ac33d..e7533f29b219 100644
> > > > > > > > > >>>>>>> --- a/drivers/net/virtio_net.c
> > > > > > > > > >>>>>>> +++ b/drivers/net/virtio_net.c
> > > > > > > > > >>>>>>> @@ -2314,8 +2314,10 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
> > > > > > > > > >>>>>>> * into the hypervisor, so the request should be handled immediately.
> > > > > > > > > >>>>>>> */
> > > > > > > > > >>>>>>> while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> > > > > > > > > >>>>>>> - !virtqueue_is_broken(vi->cvq))
> > > > > > > > > >>>>>>> + !virtqueue_is_broken(vi->cvq)) {
> > > > > > > > > >>>>>>> + cond_resched();
> > > > > > > > > >>>>>>> cpu_relax();
> > > > > > > > > >>>>>>> + }
> > > > > > > > > >>>>>>>
> > > > > > > > > >>>>>>> return vi->ctrl->status == VIRTIO_NET_OK;
> > > > > > > > > >>>>>>> }
> > > > > > > > > >>>>>>> --
> > > > > > > > > >>>>>>> 2.39.3
> > > > > > > > > >>>>>>>
> > > > > > > > > >>>>>>> _______________________________________________
> > > > > > > > > >>>>>>> Virtualization mailing list
> > > > > > > > > >>>>>>> Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
> > > > > > > > > >>>>>>> https://lists.linuxfoundation.org/mailman/listinfo/virtualization
> > > > > > > > > >>>>>
> > > > > > > > > >>>
> > > > > > > > > >
> > > > > > > > >
> > > > > > >
> > > > >
> > >
>