Re: [PATCH v5 6/7] sched/deadline: Deferrable dl server

From: Steven Rostedt
Date: Tue Nov 07 2023 - 12:54:36 EST



What's more interesting is that when I look at the userspin task, I see a lot of this:

migrate-1153 [003] 1272.988097: sched_switch: migrate:1153 [90] S ==> userspin:1135 [120]
userspin-1135 [003] 1272.988111: sched_switch: userspin:1135 [120] R ==> migrate:1146 [97]

userspin sneaks in for 14 microseconds.
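
That 14 us is just the gap between the two sched_switch timestamps above:

  1272.988111 - 1272.988097 = 0.000014 s = 14 us

and the later numbers are measured the same way.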

migrate-1146 [003] 1272.988141: sched_switch: migrate:1146 [97] R ==> migrate:1159 [84]
migrate-1159 [003] 1272.988159: print: tracing_mark_write: thread 13 iter 15, took lock 15020 in 140726333419648 us
migrate-1159 [003] 1272.992161: print: tracing_mark_write: thread 13 iter 15, unlock lock 6
migrate-1159 [003] 1272.992169: print: tracing_mark_write: thread 13 iter 15 sleeping
migrate-1159 [003] 1272.992177: sched_switch: migrate:1159 [84] S ==> userspin:1135 [120]
userspin-1135 [003] 1272.992190: sched_switch: userspin:1135 [120] R ==> migrate:1150 [93]

Again for 13 microseconds.

migrate-1150 [003] 1272.995118: sched_switch: migrate:1150 [93] R ==> migrate:1153 [90]
migrate-1153 [003] 1272.995129: print: tracing_mark_write: thread 7 iter 15, taking lock 5
migrate-1153 [003] 1272.995164: print: tracing_mark_write: thread 7 iter 15, took lock 32 in 140726333419648 us
migrate-1153 [003] 1273.005166: print: tracing_mark_write: thread 7 iter 15, unlock lock 5
migrate-1153 [003] 1273.005174: print: tracing_mark_write: thread 7 iter 15 sleeping
migrate-1153 [003] 1273.005183: sched_switch: migrate:1153 [90] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.005204: sched_switch: userspin:1135 [120] R ==> migrate:1159 [84]

For 21 microseconds.

migrate-1159 [003] 1273.005216: print: tracing_mark_write: thread 13 iter 15, taking lock 7
migrate-1159 [003] 1273.005271: print: tracing_mark_write: thread 13 iter 15, took lock 53 in 140726333419648 us
migrate-1159 [003] 1273.009273: print: tracing_mark_write: thread 13 iter 15, unlock lock 7
migrate-1159 [003] 1273.009281: print: tracing_mark_write: thread 13 iter 15 sleeping
migrate-1159 [003] 1273.009289: sched_switch: migrate:1159 [84] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.009301: sched_switch: userspin:1135 [120] R ==> migrate:1147 [96]

12 microseconds.

migrate-1147 [003] 1273.012205: sched_switch: migrate:1147 [96] R ==> migrate:1153 [90]
migrate-1153 [003] 1273.012217: print: tracing_mark_write: thread 7 iter 15, taking lock 6
migrate-1153 [003] 1273.012228: sched_switch: migrate:1153 [90] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.012242: sched_switch: userspin:1135 [120] R ==> migrate:1146 [97]
migrate-1146 [003] 1273.014251: sched_switch: migrate:1146 [97] R ==> migrate:1148 [95]

2 milliseconds (which is probably fine).

migrate-1148 [003] 1273.020300: print: tracing_mark_write: thread 2 iter 14, unlock lock 2
migrate-1148 [003] 1273.020302: print: tracing_mark_write: thread 2 iter 14 sleeping
migrate-1148 [003] 1273.020309: sched_switch: migrate:1148 [95] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.020324: sched_switch: userspin:1135 [120] R ==> migrate:1147 [96]

15 microseconds.

migrate-1147 [003] 1273.020360: print: tracing_mark_write: thread 1 iter 14, unlock lock 1
migrate-1147 [003] 1273.020373: print: tracing_mark_write: thread 1 iter 14 sleeping
migrate-1147 [003] 1273.020381: sched_switch: migrate:1147 [96] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.021397: sched_switch: userspin:1135 [120] R ==> migrate:1147 [96]

1 millisecond.

migrate-1147 [003] 1273.021402: print: tracing_mark_write: thread 1 iter 14, taking lock 2
migrate-1147 [003] 1273.021404: print: tracing_mark_write: thread 1 iter 14, took lock 1 in 140726333419648 us
migrate-1147 [003] 1273.022200: sched_switch: migrate:1147 [96] R ==> migrate:1152 [91]
migrate-1152 [003] 1273.022206: print: tracing_mark_write: thread 6 iter 15, taking lock 6
migrate-1152 [003] 1273.022217: sched_switch: migrate:1152 [91] S ==> migrate:1147 [96]
migrate-1147 [003] 1273.022289: sched_switch: migrate:1147 [96] R ==> migrate:1159 [84]
migrate-1159 [003] 1273.022299: print: tracing_mark_write: thread 13 iter 16, taking lock 0
migrate-1159 [003] 1273.022326: print: tracing_mark_write: thread 13 iter 16, took lock 25 in 140726333419648 us
migrate-1159 [003] 1273.026328: print: tracing_mark_write: thread 13 iter 16, unlock lock 0
migrate-1159 [003] 1273.026337: print: tracing_mark_write: thread 13 iter 16 sleeping
migrate-1159 [003] 1273.026346: sched_switch: migrate:1159 [84] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.026359: sched_switch: userspin:1135 [120] R ==> migrate:1146 [97]

13 microseconds, and so on...

migrate-1146 [003] 1273.027170: sched_switch: migrate:1146 [97] R ==> migrate:1149 [94]
migrate-1149 [003] 1273.027189: print: tracing_mark_write: thread 3 iter 14, took lock 1927 in 140726333419648 us
migrate-1149 [003] 1273.027335: sched_switch: migrate:1149 [94] R ==> migrate:1153 [90]
migrate-1153 [003] 1273.027349: print: tracing_mark_write: thread 7 iter 15, took lock 15130 in 140726333419648 us
migrate-1153 [003] 1273.037352: print: tracing_mark_write: thread 7 iter 15, unlock lock 6
migrate-1153 [003] 1273.037362: print: tracing_mark_write: thread 7 iter 15 sleeping
migrate-1153 [003] 1273.037370: sched_switch: migrate:1153 [90] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.037395: sched_switch: userspin:1135 [120] R ==> migrate:1147 [96]
migrate-1147 [003] 1273.037406: print: tracing_mark_write: thread 1 iter 14, unlock lock 2
migrate-1147 [003] 1273.037408: print: tracing_mark_write: thread 1 iter 14 sleeping
migrate-1147 [003] 1273.037414: sched_switch: migrate:1147 [96] S ==> userspin:1135 [120]
userspin-1135 [003] 1273.038428: sched_switch: userspin:1135 [120] R ==> migrate:1147 [96]


It looks like userspin sneaks in right when a new RT task is about to be scheduled.
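
For what it's worth, a rough (untested) sketch for pulling every userspin residency window out of the trace, assuming the line layout in the snippets above, would be something like:

#!/usr/bin/env python3
# Rough sketch: measure how long userspin runs each time it gets switched in.
# Assumes lines shaped like the snippets above, e.g.:
#   migrate-1153 [003] 1272.988097: sched_switch: migrate:1153 [90] S ==> userspin:1135 [120]
# (Everything here is on one CPU, so no per-CPU tracking.)
import re
import sys

switch_re = re.compile(
    r'(\d+\.\d+): sched_switch: (\S+?):\d+ \[\d+\] \S+ ==> (\S+?):\d+')

start = None
for line in sys.stdin:
    m = switch_re.search(line)
    if not m:
        continue
    ts, prev, nxt = float(m.group(1)), m.group(2), m.group(3)
    if nxt == 'userspin':                 # userspin just got the CPU
        start = ts
    elif prev == 'userspin' and start is not None:
        print('userspin ran for %.0f us' % ((ts - start) * 1e6))
        start = None

Run over the whole trace, that should show whether userspin only ever gets these few-microsecond slivers or whether it also gets longer runs like the 1-2 ms ones above.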

Is this expected?

-- Steve