Suppose we have a number-crunching application running mostly on CPU1.
Cache 1 is "hot", cache 2 is "cold". CPU1 is currently executing some
triviality, CPU2 is idle. And suppose each CPU has 1 MB of cache.
Now suppose we schedule the number-crunching application onto CPU2.
DIMMs are 64 bits wide, right? And they run in something like 6-1-1-1 timing?
So we get 32 bytes into the cache in about 100 ns (rounding a bit). That is
a 100 ns stall, and whether the application "heats up the cache" all at
once after starting, or whether it happens over a (much) longer period
of time doesn't matter: in the end it will have waited

    1 MB cache size / 32 bytes per cache-line fetch * 100 ns = 3.3 ms

for the cache to heat up.
So, idling the other processor would be advantageous if the
"triviality" takes less than about 3ms.
To get better than just guessing, you would have to keep a cache of
"average running time", keyed on, for example, the name of the process.
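Such a history table could look something like the sketch below. All the
names here (rt_hist, rt_record, rt_worth_migrating, the table size, the
3/4-old-plus-1/4-new decay) are illustrative assumptions, not anything in
the actual scheduler:

```c
#include <string.h>

/* Hypothetical per-name running-time history, hashed on process name. */
struct rt_hist {
    char name[16];
    unsigned long avg_ns;       /* exponentially decayed average */
};

#define RT_HIST_SIZE 64
static struct rt_hist hist[RT_HIST_SIZE];

static unsigned int rt_hash(const char *name)
{
    unsigned int h = 0;
    while (*name)
        h = h * 31 + (unsigned char)*name++;
    return h % RT_HIST_SIZE;
}

/* Record one completed run: new average = 3/4 old + 1/4 new. */
void rt_record(const char *name, unsigned long ran_ns)
{
    struct rt_hist *e = &hist[rt_hash(name)];
    if (strncmp(e->name, name, sizeof e->name) != 0) {
        /* slot empty or collided: start over with this name */
        memset(e->name, 0, sizeof e->name);
        strncpy(e->name, name, sizeof e->name - 1);
        e->avg_ns = ran_ns;
        return;
    }
    e->avg_ns = (3 * e->avg_ns + ran_ns) / 4;
}

/* Migrating to a cold cache is only worth it if the task is
 * expected to run longer than the ~3.3 ms warm-up cost. */
int rt_worth_migrating(const char *name)
{
    struct rt_hist *e = &hist[rt_hash(name)];
    return strncmp(e->name, name, sizeof e->name) == 0 &&
           e->avg_ns > 3300000;         /* 3.3 ms in ns */
}
```

So a short-lived shell would report an average well under 3.3 ms and stay
put, while a long-running cruncher would qualify for migration.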
So a shell, which just does:

    read (0, buf, 1024);
    if (!fork ())
            exec (buf);
    else
            wait ();
will most likely have an average running time of less than 3 ms, so
pushing the number cruncher over is not a good idea. But after, say,
one more time slice (10 ms of running time), the odds are against that.
Roger.
--
** R.E.Wolff@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
------ Microsoft SELLS you Windows, Linux GIVES you the whole house ------