RE: [PATCH] random: use raw spinlocks for use on RT

From: David Laight
Date: Mon Aug 15 2022 - 06:27:06 EST


...
> So why don't we actually fix this, so we don't have to keep coming up
> with hacks? The question is: does using raw spinlocks over this code
> result in any real issue for RT latency? If so, I'd like to know where,
> and maybe I can do something about that (or maybe I can't). If not, then
> this is a non problem and I'll apply this patch with your blessing.
>
> If you don't want to spend time doing latency measurements, could you
> instead share a document or similar to the type of methodology you
> usually use for that, so I can do the same? And at the very least, I am
> simply curious and want to know more about the RT world.

I'd have thought that the majority of kernel spinlocks are
held for much less than the time taken to do a context switch.
So converting them to sleep locks is always going to harm
lock acquisition time: a blocked waiter then pays for a sleep
and a wakeup (two context switches) instead of a short spin.
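
To make that concrete, here is a rough userspace analogue, not
kernel code and only a sketch: two threads hammer a counter under
either a pthread spinlock or a pthread mutex.  With a critical
section far shorter than a context switch the spinlock version
typically wins, though the exact numbers depend on the machine and
the glibc implementation.

/* Build: gcc -O2 -pthread lockbench.c -o lockbench */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000L

static pthread_spinlock_t slock;
static pthread_mutex_t mlock = PTHREAD_MUTEX_INITIALIZER;
static volatile long counter;
static int use_spin;

static void *worker(void *arg)
{
	for (long i = 0; i < ITERS; i++) {
		if (use_spin) {
			pthread_spin_lock(&slock);
			counter++;	/* critical section: a few ns */
			pthread_spin_unlock(&slock);
		} else {
			pthread_mutex_lock(&mlock);
			counter++;
			pthread_mutex_unlock(&mlock);
		}
	}
	return NULL;
}

static double run(int spin)
{
	pthread_t a, b;
	struct timespec t0, t1;

	use_spin = spin;
	counter = 0;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	pthread_spin_init(&slock, PTHREAD_PROCESS_PRIVATE);
	printf("spinlock: %.3fs\n", run(1));
	printf("mutex:    %.3fs\n", run(0));
	return 0;
}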

The other problem that is inherent in sleep locks is priority
inversion.
I've had terrible problems getting a multithreaded RT application
to run reliably.
The main fix required removing ALL the mutexes from the hot paths.
Basically you can't ensure that a mutex is only held for a short
period: being preempted while holding it (e.g. by an ethernet
hardware interrupt and then the softint code) means the mutex is
held for far too long.
This means you can't use linked lists unless they can be managed
with cmpxchg (etc); see the sketch below.
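
To illustrate the cmpxchg point, a minimal sketch of a
Treiber-style lock-free stack, using C11 atomics so it compiles in
userspace (in the kernel you would use cmpxchg()/try_cmpxchg()
directly, or better, the existing llist.h helpers).  Push is safe
as written; a general-purpose pop would also have to handle the
ABA problem, which llist.h largely avoids by favouring
llist_del_all() style consumers.

#include <stdatomic.h>
#include <stddef.h>

struct node {
	struct node *next;
	/* payload ... */
};

static _Atomic(struct node *) head;

static void push(struct node *n)
{
	struct node *old = atomic_load_explicit(&head,
						memory_order_relaxed);
	do {
		n->next = old;
		/* Retry if another thread changed head under us;
		 * 'old' is reloaded by the failed CAS. */
	} while (!atomic_compare_exchange_weak_explicit(
			&head, &old, n,
			memory_order_release, memory_order_relaxed));
}

/* Detach the whole list in one shot, as llist_del_all() does;
 * taking everything at once sidesteps ABA entirely. */
static struct node *pop_all(void)
{
	return atomic_exchange_explicit(&head, NULL,
					memory_order_acquire);
}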

Now there may be some kernel spinlocks that are held for 'too
long', but those should probably be fixed anyway.

Making the kernel 'mostly pre-emptible' (which isn't really that
much different from supporting SMP) would fix a lot of RT issues
(like an RT process not being scheduled because a low priority
process is running in the kernel on the same cpu) without the
complete scheduling train-wreck of making every spinlock a sleep
lock.

David
