Re: Overscheduling DOES happen with high web server load.

Phillip Ezolt (ezolt@perf.zko.dec.com)
Fri, 7 May 1999 10:11:37 -0400 (EDT)


On Fri, 7 May 1999, Andrea Arcangeli wrote:

> On Thu, 6 May 1999, Phillip Ezolt wrote:
>
> >Although this would probably speed up the code, the underlying problem
> >is still there. (The linear search for the next process) The patch basically
>
> I really don't think the linear search is a big issue. You had at _max_ 90
> tasks running at the same time. I think the big issue is to avoid the
> unneeded schedule() calls. If you avoid them you drop from 40000
> schedules/sec to 3000 schedules/sec...

Ok, you are right. The real problem is that we are calculating goodness
O(A*B) times, where:

A= Number of processes on the runqueue
B= Number of times schedule is called

The real answer is to cut out all the unnecessary work. If we can decrease
B significantly, it may almost be irrelevant how long the scan over A takes.
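
To make that concrete, here is a rough sketch of the kind of linear scan we
are paying for on every call -- simplified C, not the actual
schedule()/goodness() code, with made-up names and fields:

    /*
     * Toy model of the cost: each schedule() walks the whole runqueue and
     * computes a goodness value for every runnable task, so the total work
     * per second is roughly A (runqueue length) * B (schedule() calls/sec).
     */
    struct task {
        struct task *next;   /* runqueue kept as a linked list */
        int counter;         /* remaining timeslice */
        int priority;        /* static priority */
    };

    static struct task *runqueue;   /* head of the runnable list */

    /* toy goodness(): bigger is better */
    static int toy_goodness(struct task *p)
    {
        return p->counter + p->priority;
    }

    /* toy schedule(): one full linear scan per call -- the "A" factor */
    static struct task *toy_pick_next(void)
    {
        struct task *p, *best = NULL;
        int weight, best_weight = -1;

        for (p = runqueue; p; p = p->next) {   /* A iterations */
            weight = toy_goodness(p);
            if (weight > best_weight) {
                best_weight = weight;
                best = p;
            }
        }
        return best;   /* called B times/sec => A*B goodness calls */
    }

Every one of the B calls per second repeats the whole A-element walk, so
cutting B attacks the product directly.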

If you look closely, as the test ramps up, the number of overschedules
DOES drop to around 3000 schedule/sec. I think that the 40000 happens when
the machine is mostly idle. (Compare the id column with the cs column).

However, 3000 schedules/sec is still too many, no?

> procs memory swap io system cpu
> r b w swpd free buff cache si so bi bo in cs us sy id
>
> 0 0 0 0 226872 1544 9816 0 0 0 0 1099 39056 2 2 96
> 0 0 0 0 226872 1544 9816 0 0 0 0 1082 39054 1 2 96
> 0 0 0 0 226872 1544 9816 0 0 0 0 1079 39118 2 2 96
> 0 0 0 0 226872 1544 9816 0 0 0 1 1099 39116 2 2 96
> 0 30 0 0 224744 1616 9816 0 0 75 0 1519 35529 4 8 89
> 0 29 0 0 223120 1672 10376 0 0 451 0 1369 34011 6 8 86
> 0 30 0 0 221968 1744 10776 0 0 344 0 1370 32861 4 9 87
> 8 32 0 0 219312 1816 11208 0 0 399 0 1401 27527 6 10 84
> 0 37 0 0 216648 1864 11984 0 0 406 0 1516 22204 8 13 79
> 0 57 0 0 210360 1920 12944 0 0 643 0 1603 13209 14 18 68
> 4 85 0 0 198544 1976 14048 0 0 730 0 1774 7218 20 30 49
> 0 96 0 0 187520 2016 15176 0 0 743 0 1783 5522 20 34 47
> 0 93 0 0 175776 2048 16632 0 0 1156 14 1993 3728 22 42 37
> 0 96 0 0 173080 2088 18392 0 0 1388 6 2037 4427 14 33 53
> 0 89 0 0 171296 2128 20056 0 0 1365 3 2068 4655 12 34 54
> 0 92 0 0 169960 2160 21176 0 0 840 3 1971 4445 13 32 55
> 0 94 0 0 168320 2192 22720 0 0 1213 2 2036 4314 14 32 54
> 0 86 0 0 166584 2224 24256 0 0 1310 3 2158 4194 13 37 50
> 1 82 0 0 164504 2248 26144 0 0 1539 3 2250 3879 15 37 48
> 0 88 0 0 162992 2296 27488 0 0 1073 3 2232 3799 16 37 47
> 0 87 0 0 161264 2336 29128 0 0 1284 4 2356 4200 16 35 49
> 0 85 0 0 158936 2368 31136 0 0 1817 2 2230 4457 14 34 52
> 12 71 0 0 157096 2400 32632 0 0 1328 3 2304 3636 16 39 46
> 0 79 0 0 155168 2440 34464 0 0 1599 2 2351 3985 15 38 47
> 0 87 0 0 153432 2480 35840 0 0 1299 3 2291 3705 17 38 45
> 3 70 0 0 150880 2520 38088 0 0 1948 2 2416 4069 16 37 47
> 0 72 0 0 148496 2552 40336 0 0 2013 4 2731 3902 17 39 44
> 0 79 0 0 146976 2600 41720 0 0 1154 2 2626 3539 18 41 41
> 17 73 0 0 144952 2648 43704 0 0 1886 2 2445 3487 18 41 42
> 0 79 0 0 143056 2688 45464 0 0 1595 2 2211 3856 14 39 47
> 0 76 0 0 140192 2728 47920 0 0 2284 2 2880 3059 20 46 35
> 0 79 0 0 138832 2768 49224 0 0 1242 3 2442 3681 16 41 43
> 0 70 0 0 136288 2816 51544 0 0 2171 4 3014 3583 20 41 38
> 15 64 0 0 134432 2872 53176 0 0 1466 3 2875 3007 20 45 35
> 0 67 0 0 132448 2928 54984 0 0 1690 3 3134 2712 22 48 30
> 5 63 0 0 130704 2984 56656 0 0 1519 3 2825 3006 23 41 36
> 0 70 0 0 127936 3040 58952 0 0 2070 2 3159 2584 23 48 29

>
> And using a heap would impact all cases where the machine is not
> overloaded but has only 5/6 tasks running all the time.
>
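
Just to illustrate that tradeoff: with a heap keyed on some per-task weight,
picking the next task becomes O(log A) instead of O(A), but every
enqueue/requeue pays O(log A) too, which is pure overhead when only 5 or 6
tasks are ever runnable. A very rough sketch -- hypothetical names, not
kernel code, and it glosses over the fact that goodness isn't really a
static per-task key:

    /*
     * Hypothetical heap-keyed runqueue sketch.  Assumes the scheduling key
     * can be reduced to a plain integer weight per task.
     */
    #include <stddef.h>

    #define RQ_MAX 1024

    struct rq_ent {
        int weight;    /* stand-in for a goodness-style key */
        void *task;    /* opaque task pointer */
    };

    static struct rq_ent rq[RQ_MAX];
    static size_t nr_running;

    static void rq_swap(size_t a, size_t b)
    {
        struct rq_ent t = rq[a]; rq[a] = rq[b]; rq[b] = t;
    }

    /* Wakeup/requeue: O(log A) sift-up, paid even when A is 5 or 6. */
    static void rq_enqueue(struct rq_ent e)
    {
        size_t i = nr_running++;

        rq[i] = e;
        while (i && rq[(i - 1) / 2].weight < rq[i].weight) {
            rq_swap(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    /* schedule(): O(log A) extract-max instead of the O(A) linear scan.
     * Caller must ensure nr_running > 0. */
    static struct rq_ent rq_dequeue(void)
    {
        struct rq_ent best = rq[0];
        size_t i = 0;

        rq[0] = rq[--nr_running];
        for (;;) {
            size_t l = 2 * i + 1, r = l + 1, m = i;

            if (l < nr_running && rq[l].weight > rq[m].weight) m = l;
            if (r < nr_running && rq[r].weight > rq[m].weight) m = r;
            if (m == i)
                break;
            rq_swap(i, m);
            i = m;
        }
        return best;
    }
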
> BTW, Is your http client freely available?

Hmph. It is the SPECWeb96 client. Unfortunately, it is not freely available.
Check out http://www.spec.org/ for more info.

It might make sense for Red Hat or someone to purchase a copy for system
performance testing. It might actually help head off some of this
Mindcraft hoopla.

>
> Andrea Arcangeli
>
>

--Phil

Digital/Compaq: HPSD/Benchmark Performance Engineering
Phillip.Ezolt@compaq.com ezolt@perf.zko.dec.com

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/