Re: [patch] lowlatency patch for 2.4, lowlatency-2.4.0-test6-B5

From: Ingo Molnar (mingo@elte.hu)
Date: Fri Aug 04 2000 - 08:43:15 EST


On Fri, 4 Aug 2000, Andrew Morton wrote:

> - 4-5 millisec scheduling bumps caused by closing raw block devices.
> This may be related to sync_buffers(), but otherwise I suggest you not
> worry about this case: it's only lilo and hdparm, etc.

I don't see this, and I think I fixed all the sync_buffers() variants. I'll
take a look at it again.

> - 4-6 millisecs during X server startup. Again, not worth fixing IMO.

I don't see this here - but I remember that such latencies were caused by
psaux.

> - The infamous psaux thing - not worth fixing (Benno says 20 millisecs)

Hm, I thought I fixed that (see the pc_keyb.c changes). The same style of
fix can be used in psaux.c as well.

> - I haven't tested fbdev - Pavel says it's bad.

That's pretty much unsolvable (just like the serial console latencies): the
console layer assumes atomicity.

> - conditional_reschedule() is a kludge

Well, do you think the assembly variant is still a kludge? Two (four)
instructions for a conditional_schedule() isn't all that bad.
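
For reference, the C fallback is roughly this (a minimal sketch - the
exact spelling in the patch may differ slightly):

    /* Sketch of the 2.4-era C fallback: check the resched flag
     * and yield if it is set. */
    static inline void conditional_schedule(void)
    {
            if (current->need_resched)
                    schedule();
    }

The assembly variant performs the same test, just cheaper.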

> - only put rescheds in the places where we can clearly
> identify the reason - not just "it worked so I did it".

You presume I haven't identified the reason in most cases?

> And I'd add:
>
> - Keep it clean: rescheds at the high levels. No dropping locks,
> obfuscating other people's code, etc.

*no*. There are places in the kernel that do work for milliseconds while
holding a spinlock. That causes millisecond latencies even if we had a
preemptive kernel. I'd use the word 'bounding' instead of 'obfuscating'.
In some cases (as I mentioned) the solutions are not *ahem* crystal-clear,
but that is going to be fixed - I just wanted to see how it works for
others.
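
To make 'bounding' concrete, the transformation typically looks like this
(a hypothetical sketch - the lock and work functions are illustrative, not
a specific place in the patch):

    /* Hypothetical: bound how long some_lock is held by dropping
     * it at a safe point whenever a reschedule is pending. */
    spin_lock(&some_lock);
    while (more_work_to_do()) {
            do_one_unit_of_work();
            if (current->need_resched) {
                    spin_unlock(&some_lock);
                    schedule();
                    spin_lock(&some_lock);
                    /* protected state may have changed - revalidate */
            }
    }
    spin_unlock(&some_lock);

The cost is that the loop has to be restartable after the lock is dropped,
which is where the not-crystal-clear cases come from.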

Yes, controlling latencies is in some cases just as painful as making
syscalls interruptible - it's about the same kind of work. But in 80% of
the cases it's more straightforward than that.

> So given these criteria I suggest the approach is to only poke holes
> in the places which demonstrably and explicably need it. As you know,
> I've identified nine places, and these work well, except for the VM
> problem.

And dozens of other places. Believe me, your patch might work under light
load, but once you start loading the system even moderately, latency
sources pop up all over the place. 80% of the places I fixed are only
visible under moderate/heavy load.

> I don't think you should put _any_ rescheds in the VM! That's just
> covering up a problem. The VM should _not_ be spending 60 millisecs
> (you saw 200) just crunching on stuff. The approach should be to wait
> until the VM has been fixed and to then reevaluate the need for
> rescheds in there.

The VM *will* always spend 60 msecs (and more) crunching on stuff if there
is not enough RAM. That's the way it goes: we have to scan page tables and
walk the dentry/inode LRU lists - freeable pages will not suddenly pop up
on some magic list. But that work can be done with good latencies as well.
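
Schematically (an illustrative sketch, not a particular function in the
patch), the scan still does the same amount of work - only the latency of
the walk is bounded:

    /* Illustrative: the LRU scan still visits every entry, but
     * reschedule points bound the latency of the walk. */
    for (i = 0; i < nr_to_scan; i++) {
            scan_one_lru_entry();           /* hypothetical work unit */
            conditional_schedule();
    }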

> Also, your assembler and .text.condsched changes are appropriate to
> the "no prisoners" patch, but they should not be used in the
> "uncontroversial" patch. Keep it simple and implement
> conditional_reschedule() in C. This is because:
>
> 1: If the code size is significant, the patch is wrong: it has too many
> rescheds

The major performance problem with conditional_schedule() was its impact
on icache footprint, and the untaken branch it introduces (which pollutes
the BTB); both issues are now pretty much taken care of. I just didn't
want to worry about the performance impact, and wanted to concentrate
fully on other aspects of the patch.
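
The .text.condsched idea, sketched in C rather than the actual assembly
(the section name is from the patch; the helper name and attributes are
illustrative): only the flag test and a normally-untaken branch stay
inline, while the call to schedule() lives in a separate, rarely-touched
text section, so the common path costs almost no icache.

    /* C sketch of the assembly's layout trick.  Keeping the slow
     * path out of line is the whole point, hence noinline. */
    static void __attribute__((noinline, section(".text.condsched")))
    condsched_slowpath(void)
    {
            schedule();
    }

    #define conditional_schedule()                          \
            do {                                            \
                    if (current->need_resched)              \
                            condsched_slowpath();           \
            } while (0)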

> 2: If the performance diff is significant, the patch is wrong:
> conditional_reschedule() is being called too often.

Believe me, we do crazier things in assembly for less gain - it's these
little things that add up. The assembly implementation of
conditional_schedule() wasn't hard at all and is completely equivalent to
the C implementation, so I'm not sure why you are against it. I've taken
the time to code it up, and there is a C fallback if an architecture
decides not to bother with the assembly. No disadvantages.

> Consider this: with my minimal patch, with the workload being building
> a kernel whilst running `amlat' on UP, conditional_resched() is called
> 690 times per second and it actually does a schedule() twice a second.
> This is very good. The max latency was 800 usecs.

How much RAM do you have? And what latencies do you get once the system
comes under even minimal VM load? (Which any consumer system will,
occasionally - otherwise you've spent too much money on RAM.)

> The cost of executing
>
> if (current->need_resched)
>
> every 1.3 milliseconds is vanishingly small and it does not merit an
> assembly implementation.

Look at it this way: the assembly implementation of conditional_schedule()
makes the cost so small that we can start judging things based on
conceptual and implementational cleanliness, not on performance.

> What latency target are you shooting for? [...]

Well, right now I'm fixing everything I see causing more than 1.0 msec
latency on a 366 MHz UP Celeron under heavy load. But some of the more
intrusive fixes will definitely be removed.

> My gut feel is that a worst-case latency of two millisecs will require
> no more than 15 conditional_rescheds.

You are dreaming, really. You think the VM is the only latency source
under load - it *isn't*. After fixing the VM latencies the show just
begins. Create a 100MB file and delete it. Repeat. Watch latencies.
Allocate 100MB of RAM, run top, kill the program. Repeat. Watch latencies.
Etc. 80% of the fixes I add deal with moderate/high load and scalability
situations.
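
For instance, a user-space loop along these lines is enough to expose them,
with something like amlat measuring latency alongside (a quick sketch, not
a polished benchmark):

    /* Stress sketch: repeatedly write and delete a 100MB file.
     * Run a latency monitor (e.g. amlat) at the same time. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
            static char buf[1 << 20];       /* 1MB, zero-filled */
            int i, fd;

            for (;;) {                      /* Ctrl-C to stop */
                    fd = open("bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
                    if (fd < 0) { perror("open"); exit(1); }
                    for (i = 0; i < 100; i++)       /* 100 x 1MB */
                            if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                                    perror("write"); exit(1);
                            }
                    close(fd);
                    unlink("bigfile");
            }
    }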

> Also, what are the conditional_rescheds in copy_*_user for? I haven't
> observed them doing anything very useful, maybe you have
> better/different test cases. I took them out of your patch and didn't
> observe any degradation.

This is just a generic place to put it. If you take a closer look, I
haven't added it to the constant-size copy functions, only to the
'variable size, thus potentially large' memory operations. Ditto for
usercopy.c. Both triggered latencies during bw_tcp over gigabit Ethernet.
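
The pattern is simply to chunk the potentially huge copy so a reschedule
point exists every few kilobytes instead of once per call (an illustrative
sketch - the chunk size and names are not the patch's):

    /* Illustrative: break a large user copy into chunks with a
     * reschedule point between them.  CHUNK is an arbitrary pick. */
    #define CHUNK   (8 * 1024)

    while (len > CHUNK) {
            if (copy_to_user(to, from, CHUNK))
                    return -EFAULT;
            to += CHUNK; from += CHUNK; len -= CHUNK;
            conditional_schedule();
    }
    if (len && copy_to_user(to, from, len))
            return -EFAULT;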

        Ingo
