Re: [RFC PATCH 00/86] Make the kernel preemptible

From: Peter Zijlstra
Date: Wed Nov 08 2023 - 05:14:21 EST


On Wed, Nov 08, 2023 at 02:04:02AM -0800, Ankur Arora wrote:

> >> Revert "riscv: support PREEMPT_DYNAMIC with static keys"
> >> Revert "livepatch,sched: Add livepatch task switching to
> >> cond_resched()"
> >> Revert "arm64: Support PREEMPT_DYNAMIC"
> >> Revert "sched/preempt: Add PREEMPT_DYNAMIC using static keys"
>
> What's the best way to handle these? With the lazy bit, cond_resched()
> and might_resched() are gone. So we don't need all of the static
> key infrastructure for toggling etc.
>
> The part of preempt_dynamic that makes sense to me is the one that
> switches dynamically between none/voluntary/full. Here it would need
> to be wired up to control the lazy bit.
> (Right now the preemption policy is controlled by sched_feat in
> patches 43 and 44, but sched/preempt is a much better interface.)

I'm not following; those should obviously stay.

The current preempt_dynamic stuff has 5 toggles:

/*
* SC:cond_resched
* SC:might_resched
* SC:preempt_schedule
* SC:preempt_schedule_notrace
* SC:irqentry_exit_cond_resched
*
*
* NONE:
* cond_resched <- __cond_resched
* might_resched <- RET0
* preempt_schedule <- NOP
* preempt_schedule_notrace <- NOP
* irqentry_exit_cond_resched <- NOP
*
* VOLUNTARY:
* cond_resched <- __cond_resched
* might_resched <- __cond_resched
* preempt_schedule <- NOP
* preempt_schedule_notrace <- NOP
* irqentry_exit_cond_resched <- NOP
*
* FULL:
* cond_resched <- RET0
* might_resched <- RET0
* preempt_schedule <- preempt_schedule
* preempt_schedule_notrace <- preempt_schedule_notrace
* irqentry_exit_cond_resched <- irqentry_exit_cond_resched
*/
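
Each SC: entry above is a static call whose target gets patched when
the mode changes; roughly, for cond_resched (condensed from
kernel/sched/core.c and include/linux/sched.h, not verbatim):

DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);

static __always_inline int _cond_resched(void)
{
	/* target is __cond_resched for NONE/VOLUNTARY, RET0 for FULL */
	return static_call_mod(cond_resched)();
}

and sched_dynamic_update() flips the target with static_call_update(),
wrapped in preempt_dynamic_enable()/preempt_dynamic_disable().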

If you kill voluntary as we know it today, you can remove cond_resched
and might_resched, but the remaining 3 are still needed to switch
between NONE and FULL.

Additionally, you'll get one new state to enable/disable the LAZY stuff.
Neither NONE nor FULL wants the LAZY thing on.
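
Sketch of what the LAZY toggle would gate -- made-up helper name and
key, assuming this series' TIF_NEED_RESCHED_LAZY bit, and ignoring how
the scheduler decides which wakeups get the lazy treatment:

DEFINE_STATIC_KEY_FALSE(preempt_lazy);

static void set_need_resched_sketch(struct task_struct *p)
{
	if (static_branch_unlikely(&preempt_lazy))
		/* deferred: only folded in at the next tick / exit-to-user */
		set_tsk_thread_flag(p, TIF_NEED_RESCHED_LAZY);
	else
		/* immediate: act on it at the next preemption point */
		set_tsk_thread_flag(p, TIF_NEED_RESCHED);
}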

You'll then end up with something like:

/*
* SK:preempt_lazy
* SC:preempt_schedule
* SC:preempt_schedule_notrace
* SC:irqentry_exit_cond_resched
*
*
* NONE:
* preempt_lazy <- OFF
* preempt_schedule <- NOP
* preempt_schedule_notrace <- NOP
* irqentry_exit_cond_resched <- NOP
*
* VOLUNTARY:
* preempt_lazy <- ON
* preempt_schedule <- preempt_schedule
* preempt_schedule_notrace <- preempt_schedule_notrace
* irqentry_exit_cond_resched <- irqentry_exit_cond_resched
*
* FULL:
* preempt_lazy <- OFF
* preempt_schedule <- preempt_schedule
* preempt_schedule_notrace <- preempt_schedule_notrace
* irqentry_exit_cond_resched <- irqentry_exit_cond_resched
*/
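
Mode switching then becomes something like the below -- sketch only,
sk_preempt_lazy is a made-up key name, the preempt_dynamic_*()
wrappers are the ones we already have:

static void sched_dynamic_update_sketch(int mode)
{
	switch (mode) {
	case preempt_dynamic_none:
		static_branch_disable(&sk_preempt_lazy);
		preempt_dynamic_disable(preempt_schedule);
		preempt_dynamic_disable(preempt_schedule_notrace);
		preempt_dynamic_disable(irqentry_exit_cond_resched);
		break;

	case preempt_dynamic_voluntary:
		static_branch_enable(&sk_preempt_lazy);
		preempt_dynamic_enable(preempt_schedule);
		preempt_dynamic_enable(preempt_schedule_notrace);
		preempt_dynamic_enable(irqentry_exit_cond_resched);
		break;

	case preempt_dynamic_full:
		static_branch_disable(&sk_preempt_lazy);
		preempt_dynamic_enable(preempt_schedule);
		preempt_dynamic_enable(preempt_schedule_notrace);
		preempt_dynamic_enable(irqentry_exit_cond_resched);
		break;
	}
}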

For the architectures that do not have static_call but instead use
static_key for everything, the SCs are obviously static_key-based
wrappers around the function calls -- like now.
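
I.e. roughly what the CONFIG_HAVE_PREEMPT_DYNAMIC_KEY side of
kernel/entry/common.c already does today (condensed):

DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);

void dynamic_irqentry_exit_cond_resched(void)
{
	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
		return;		/* the "NOP" case */
	raw_irqentry_exit_cond_resched();
}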