Re: [ANNOUNCE][RFC] PlugSched-6.2 for 2.6.16-rc1 and 2.6.16-rc1-mm1

From: Peter Williams
Date: Sat Jan 21 2006 - 18:05:10 EST


Paolo Ornati wrote:
On Fri, 20 Jan 2006 08:45:43 +1100
Peter Williams <pwil3058@xxxxxxxxxxxxxx> wrote:


Modifications have been made to spa_ws to (hopefully) address the issues raised by Paolo Ornati recently, and a new entitlement-based interpretation of "nice" scheduler, spa_ebs, has been added. It is a cut-down version of the Zaphod scheduler's "eb" mode, which performed well on Paolo's problem when he tried it at my request. Paolo, could you please give these a test drive on your problem?
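
(A rough sketch of the entitlement-based idea, assuming CPU time is handed out in proportion to a per-task share derived from nice; the weights and scaling below are illustrative only, not the actual spa_ebs table:)

/* Hypothetical illustration of entitlement-based "nice" (not the actual
 * spa_ebs code): each task gets a share that grows as its nice value
 * falls, and each runnable task receives CPU time in proportion to
 * share / total_share. */
#include <stdio.h>

#define BASE_SHARE 1024.0		/* share for nice 0, chosen arbitrarily */

/* Assumed mapping: each nice step scales the share by ~1.25. */
static double nice_to_share(int nice)
{
	double share = BASE_SHARE;
	int i;

	for (i = 0; i > nice; i--)	/* nice < 0: bigger share */
		share *= 1.25;
	for (i = 0; i < nice; i++)	/* nice > 0: smaller share */
		share /= 1.25;
	return share;
}

int main(void)
{
	int nices[] = { -10, 0, 0, 10 };	/* four runnable tasks */
	double shares[4], total = 0.0;
	int i;

	for (i = 0; i < 4; i++)
		total += shares[i] = nice_to_share(nices[i]);
	for (i = 0; i < 4; i++)
		printf("nice %3d -> %.1f%% of the CPU\n",
		       nices[i], 100.0 * shares[i] / total);
	return 0;
}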


---- spa_ws: the problem is still here

(sched_fooler)
./a.out 3000 & ./a.out 4307 &
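
(For readers without the test program: a plausible reconstruction of a sched_fooler-style load, assuming the argument controls the length of each CPU burst between short sleeps; the actual program used here may differ.)

/* Hedged reconstruction of a sched_fooler-style workload (the original
 * may differ): burn the CPU for a stretch sized by argv[1], then sleep
 * briefly, so a sleep-based interactivity estimator mistakes a CPU hog
 * for an interactive task. */
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	unsigned long loops = argc > 1 ? strtoul(argv[1], NULL, 10) : 1000;
	volatile unsigned long sink = 0;
	unsigned long i, j;

	for (;;) {
		for (i = 0; i < loops; i++)	/* CPU burst */
			for (j = 0; j < 10000; j++)
				sink += j;
		usleep(1000);			/* brief sleep */
	}
	return 0;
}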

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5573 paolo 34 0 2396 292 228 R 59.0 0.1 0:24.51 a.out
5572 paolo 34 0 2392 288 228 R 40.7 0.1 0:16.94 a.out
5580 paolo 35 0 4948 1468 372 R 0.3 0.3 0:00.04 dd

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5573 paolo 34 0 2396 292 228 R 59.3 0.1 0:59.65 a.out
5572 paolo 33 0 2392 288 228 R 40.3 0.1 0:41.32 a.out
5440 paolo 28 0 86652 21m 15m S 0.3 4.4 0:03.34 konsole
5580 paolo 37 0 4948 1468 372 R 0.3 0.3 0:00.10 dd


(real life - transcode)
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5585 paolo 33 0 115m 18m 2432 S 90.0 3.7 0:38.04 transcode
5599 paolo 37 0 50996 4472 1872 R 9.1 0.9 0:04.03 tcdecode
5610 paolo 37 0 4948 1468 372 R 0.6 0.3 0:00.19 dd


DD test takes ages in both cases.

What exactly have you done to spa_ws?

I added a "nice-aware" version of the throughput bonuses from spa_svr and renamed them fairness bonuses. They don't appear to be working :-(

34 is the priority value that ordinary tasks should end up with, i.e. tasks that look like neither interactive tasks nor CPU hogs. If they look like interactive tasks they should get a lower value via the interactive bonus mechanism, and if they look like CPU hogs they should get a higher one via the same mechanism. In addition, tasks get bonuses if they appear to be being treated unfairly, i.e. spending too much time on run queues waiting for CPU access.
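
(An illustrative sketch of how those adjustments might combine, assuming a simple additive model; the names and values below are made up for illustration, not taken from the spa_ws source:)

#include <stdio.h>

/* Illustrative sketch only (not the spa_ws source): lower numbers mean
 * higher priority, and 34 is the neutral value an ordinary task ends at. */
#define NEUTRAL_PRIO	34

struct task_stats {
	int interactive_bonus;	/* earned by sleep/wakeup behaviour */
	int cpu_hog_penalty;	/* earned by burning whole timeslices */
	int fairness_bonus;	/* earned by waiting too long on a run queue */
};

static int effective_prio(const struct task_stats *ts)
{
	return NEUTRAL_PRIO
	       - ts->interactive_bonus	/* interactive tasks drop below 34 */
	       + ts->cpu_hog_penalty	/* CPU hogs rise above 34 */
	       - ts->fairness_bonus;	/* unfairly treated tasks get pulled back down */
}

int main(void)
{
	struct task_stats ordinary = { 0, 0, 0 };
	struct task_stats hog      = { 0, 3, 0 };
	struct task_stats starved  = { 0, 0, 2 };

	printf("ordinary: %d, hog: %d, starved: %d\n",
	       effective_prio(&ordinary), effective_prio(&hog),
	       effective_prio(&starved));
	return 0;
}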

Looking at your numbers, the transcode task has the priority that I'd expect it to have, but tcdecode and dd seem to have had their priorities adjusted in the wrong direction. It's almost as if they'd been (incorrectly, obviously) identified as CPU hogs :-(. I'll look into this.



---- spa_ebs: great! (as expected)

(sched_fooler)
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5418 paolo 34 0 2392 288 228 R 51.4 0.1 1:06.47 a.out
5419 paolo 34 0 2392 288 228 R 43.7 0.1 0:54.60 a.out
5448 paolo 11 0 4952 1468 372 D 3.0 0.3 0:00.12 dd

(transcode)
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5456 paolo 34 0 115m 18m 2432 R 51.9 3.7 0:23.34 transcode
5470 paolo 12 0 51000 4472 1872 S 5.7 0.9 0:02.38 tcdecode
5480 paolo 11 0 4948 1468 372 D 3.5 0.3 0:00.33 dd

Very good DD test performance in both cases.

Good. How do you find the interactive responsiveness with this one?

Thanks for testing
Peter
--
Peter Williams pwil3058@xxxxxxxxxxxxxx

"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce