Re: [RFC PATCH 1/2] Marker probes in futex.c

From: Mathieu Desnoyers
Date: Tue Apr 15 2008 - 09:47:22 EST


* Ingo Molnar (mingo@xxxxxxx) wrote:
>
> * Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
>
> > > Because we extract the field names and types, we can create tracer
> > > plugins that would hook on field names rather than expect a specific
> > > number of fields and fixed field types. It makes it possible to
> > > tolerate missing fields pretty easily. But yes, tracer tools might
> > > have to be adapted to internal kernel changes, since they must
> > > follow kernel structure changes. However, staying as close as
> > > possible to a canonical representation of event fields, staying far
> > > from the specific implementation, would help to lessen the
> > > inter-dependency. On the other hand, it would probably hurt trace
> > > compactness and efficiency.
> >
> > See, these tracer tools are my nightmare as member of an enterprise
> > linux team. They'll make an already hard job even harder, no thanks!
>
> i'm clearly NAK-ing all futex trace hooks until the true impact of the
> whole marker facility is better understood. I've tried them for the
> scheduler and they were a clear failure: too bloated and too persistent.
>

I have not seen any counter-argument to the in-depth analysis of the
instruction cache impact of the optimized markers that I have done.
Arguing that the markers are "bloated" based only on the
"size kernel/sched.o" output is a bit misleading.
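
For reference, here is the scale of instrumentation being argued about:
one trace_mark() statement per event, with the field names and types
carried in the format string, which is what the quoted paragraph above
means by letting tracer plugins hook on field names. The sketch below is
purely illustrative (the marker name, the fields and the probe are made
up, not taken from the RFC futex patch); it follows the pattern of
samples/markers/ and Documentation/markers.txt in the current tree.

/*
 * Illustrative sketch only: marker name, fields and probe are
 * hypothetical, not copied from the RFC futex patch.
 */
#include <linux/module.h>
#include <linux/marker.h>
#include <linux/kernel.h>

/* Instrumentation site: the format string names and types each field. */
static void do_fake_futex_wait(u32 __user *uaddr, u32 val)
{
	trace_mark(futex_wait_begin, "uaddr %p val %u", uaddr, val);
}

/* Tracer side: a probe attached to the marker by name; it pulls the
 * arguments out in the order and types declared in the format string. */
static void probe_futex_wait_begin(void *probe_data, void *call_data,
				   const char *fmt, va_list *args)
{
	u32 __user *uaddr;
	u32 val;

	uaddr = va_arg(*args, typeof(uaddr));
	val = va_arg(*args, typeof(val));
	printk(KERN_DEBUG "futex_wait_begin uaddr %p val %u\n", uaddr, val);
}

static int __init probe_init(void)
{
	int ret;

	ret = marker_probe_register("futex_wait_begin", "uaddr %p val %u",
				    probe_futex_wait_begin, NULL);
	if (ret)
		return ret;
	/* Fire the marker once so the probe output shows up in dmesg. */
	do_fake_futex_wait(NULL, 42);
	return 0;
}

static void __exit probe_exit(void)
{
	marker_probe_unregister("futex_wait_begin", probe_futex_wait_begin,
				NULL);
}

module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");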

> but more importantly, as things stand today i've yet to see a _any_
> bugreport where these 'tracer' tools that are being referred to were
> actually used in the field to fix something. The latency tracers (and
> the other tracer variants in -rt) on the other hand have a documented
> track record of being useful in fixing bugs and instrumenting the
> kernel.
>

You will probably be interested in the following paper, which describes
various situations in which using a tracer solved real problems at
Google, IBM and Autodesk, all of them Linux users running large clusters
or Linux systems with soft real-time constraints.

Linux Kernel Debugging on Google-sized clusters at Ottawa Linux Symposium 2007
http://ltt.polymtl.ca/papers/bligh-Reprint.pdf

Now for some performance impact numbers:

Here are some results I have taken comparing the optimized markers
approach with the dynamic ftrace approach. These runs do some ALU work
in a tight loop, using clflush() to flush the cache lines holding the
"global" data (a pointer read: current->pid) used in the loop. I also
have the numbers for running the loop without the ALU work, but I leave
them out since they only make the tables harder to read: basically, the
cached impact of running the empty loop with marker or ftrace
instrumentation is about 0 to 3 cycles. It is the uncached impact which
clearly makes the difference between the two approaches.
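
To make the methodology concrete, here is a minimal sketch of that kind
of micro-benchmark loop, written as a small module. It is only an
outline of the idea, not the actual code from the test bench linked
below; it assumes get_cycles(), the x86 clflush() helper and a marker
compiled in as the statement under test.

#include <linux/module.h>
#include <linux/marker.h>	/* trace_mark() */
#include <linux/timex.h>	/* get_cycles(), cycles_t */
#include <linux/sched.h>	/* current */
#include <asm/system.h>		/* clflush() on x86 */
#include <asm/div64.h>		/* do_div() */

#define NR_LOOPS 10000

static int __init bench_init(void)
{
	cycles_t t0, t1;
	u64 delta;
	unsigned long i;
	volatile long acc = 0;

	t0 = get_cycles();
	for (i = 0; i < NR_LOOPS; i++) {
		/* Evict the line holding the "global" data the probe
		 * reads, to measure the uncached case. */
		clflush(&current->pid);
		/* Some ALU work, so the instrumentation is not timed in
		 * isolation. */
		acc = acc * 17 + i;
		/* Statement under test: here a marker reading
		 * current->pid.  Replace with an ftrace-style call to
		 * compare the two approaches. */
		trace_mark(bench_event, "pid %d", current->pid);
	}
	t1 = get_cycles();

	delta = (u64)(t1 - t0);
	do_div(delta, NR_LOOPS);
	printk(KERN_INFO "bench: %llu cycles/loop\n",
	       (unsigned long long)delta);
	return 0;
}

static void __exit bench_exit(void)
{
}

module_init(bench_init);
module_exit(bench_exit);
MODULE_LICENSE("GPL");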

On AMD64, adding the marker or ftrace statement actually accelerates
the runs when executed with the ALU work baseline. It adds 1 to 2 cycles
when executed alone in the loop, without any work.

Frank Ch. Eigler is preparing some macrobenchmarks. I hope he will find
time to post them soon.

Results in cycles per loop

baseline:
Cycles for ALU loop 28.10013
(will be subtracted from cached runs)

Cycles for clflush() and ALU loop 230.11087
(will be subtracted from non-cached runs)

gcc version 4.1.3 20070812 (prerelease) (Debian 4.1.2-15), -O2

------------------------------------------------------------------------
|x86 Pentium 4, 3.0GHz, Linux 2.6.25-rc7           | cached | uncached |
------------------------------------------------------------------------
|Added cycles for optimized marker                 |  0.002 |     0.07 |
|Added cycles for normal marker                    |  0.004 |    154.7 |
|Added cycles for stack setup + (1+4 bytes) NOPs   |  0.035 |      0.6 |
|  (6 local vars)                                  |        |          |
|Added cycles for stack setup + (1+4 bytes) NOPs   |  0.030 |    222.8 |
|  (1 pointer read, 5 local vars)                  |        |          |
------------------------------------------------------------------------

Results in cycles per loop

baseline:
Cycles for ALU and loop 25.32369
(will be subtracted from cached runs)

Cycles for clflush() and ALU loop 118.24227
(will be subtracted from non-cached runs)

gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21), -O2

------------------------------------------------------------------------
|AMD64, 2.0GHz, Linux 2.6.25-rc7                   | cached | uncached |
------------------------------------------------------------------------
|Added cycles for optimized marker                 |   -1.0 |      0.2 |
|Added cycles for normal marker                    |   -0.3 |     41.8 |
|Added cycles for stack setup + (1+4 bytes) NOPs   |   -0.5 |     0.01 |
|  (6 local vars)                                  |        |          |
|Added cycles for stack setup + (1+4 bytes) NOPs   |    2.7 |     51.8 |
|  (1 pointer read, 5 local vars)                  |        |          |
------------------------------------------------------------------------

Test bench at: http://ltt.polymtl.ca/svn/markers-test/

Regards,

Mathieu

--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68