Re: [PATCH tip 1/1] perf_counter tools: Add locking to perf top

From: Ingo Molnar
Date: Sat May 30 2009 - 05:33:26 EST



* Arnaldo Carvalho de Melo <acme@xxxxxxxxxx> wrote:

> Em Fri, May 29, 2009 at 10:22:17PM +0200, Peter Zijlstra escreveu:
> > On Fri, 2009-05-29 at 17:03 -0300, Arnaldo Carvalho de Melo wrote:
> > > /* Sort the active symbols */
> > > - list_for_each_entry_safe(syme, n, &active_symbols, node) {
> > > - if (syme->count[0] != 0) {
> > > + pthread_mutex_lock(&active_symbols_lock);
> > > + syme = list_entry(active_symbols.next, struct sym_entry, node);
> > > + pthread_mutex_unlock(&active_symbols_lock);
> > > +
> > > + list_for_each_entry_safe_from(syme, n, &active_symbols, node) {
> > > + syme->snap_count = syme->count[0];
> > > + if (syme->snap_count != 0) {
> > > + syme->weight = sym_weight(syme);
> >
> > That looks wrong, you basically do a fancy cast while holding the lock,
> > then you overwrite the variable doing a list iteration without holding
> > the lock.
> >
> > If list_add and list_del are under a lock, the iteration should be too.
>
> Look closer :)
>
> 1) List insertion is only done at the head and by the other thread, thus
> the lock above. The other thread will only mess with the above
> syme->node.prev when inserting a new head, never with .next.
>
> 2) List deletion is only done after taking the lock, and on the above
> thread.
>
> The only problem, probably, is the access to syme->count[0], which on
> some architectures may not be atomic.

as long as it's machine-word aligned, the result of a read is atomic
on all SMP-capable systems.

(It might still get reordered in an unpleasant way by either the
compiler or the CPU, so putting appropriate barriers there might be
handy.)

Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/