Re: [PATCH 11/18] mm: fix TLB flush race between migration and change_protection_range

From: Mel Gorman
Date: Wed Dec 11 2013 - 06:21:48 EST


On Tue, Dec 10, 2013 at 10:02:08AM -0800, Paul E. McKenney wrote:
> > > Should this be smp_mb__after_unlock_lock(); ?
> >
> > I think this is still ok. Minimally, this site is missing the unlock/lock
> > pair that would cause smp_mb__after_unlock_lock() to be treated as a full
> > barrier on architectures that care. The CPU executing this code has
> > already seen the pmd_numa update if it's in the fault handler, so it just
> > needs to be sure not to reorder the check with respect to the page copy.
>
> You really do need a lock operation somewhere shortly before the
> smp_mb__after_unlock_lock().
>

My badly phrased point was that there is no unlock/lock operation nearby
that needs to be ordered with respect to the tlb_flush_pending check. I
do not see a need for smp_mb__after_unlock_lock() here; only the hunk
below is required.
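
To be explicit about what needs ordering before getting to that hunk,
here is a condensed sketch of the two sides. The fault-side helper name
is a stand-in, not the exact kernel code; the point is only the pairing:

	/* Writer: the change_protection_range() path. */
	set_tlb_flush_pending(mm);	/* flag must be visible before... */
	/* ... the PROT_NUMA pte stores done under the page table lock,
	 * followed by flush_tlb_range() and clear_tlb_flush_pending(). */

	/*
	 * Reader: the NUMA hinting fault path, about to migrate.
	 * migrate_page_to_node() is a hypothetical stand-in for the
	 * real migration call.
	 */
	if (tlb_flush_pending(mm))
		goto backoff;	/* a flush is pending; a stale TLB entry on
				 * another CPU may still permit writes that
				 * the copy would lose, so retry the fault */
	migrate_page_to_node(page, target_nid);

smp_mb__after_unlock_lock() only upgrades a prior UNLOCK followed by a
LOCK into a full barrier, and there is no such UNLOCK+LOCK sequence at
the check site.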

> > index c122bb1..33e5519 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -482,7 +482,12 @@ static inline bool tlb_flush_pending(struct mm_struct *mm)
> > static inline void set_tlb_flush_pending(struct mm_struct *mm)
> > {
> > mm->tlb_flush_pending = true;
> > - barrier();
> > +
> > + /*
> > + * Guarantee that the tlb_flush_pending store does not leak into the
> > + * critical section updating the page tables
> > + */
> > + smp_mb__before_spinlock();
> > }
> > /* Clearing is done after a TLB flush, which also provides a barrier. */
> > static inline void clear_tlb_flush_pending(struct mm_struct *mm)
> >
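
As far as I know, smp_mb__before_spinlock() defaults to smp_wmb(), and
store ordering is all that is required here: the tlb_flush_pending store
must be visible before the pte stores made inside the critical section.
The intended writer-side sequence, condensed (the pte update helper is a
stand-in for what change_protection_range() actually does):

	set_tlb_flush_pending(mm);	/* flag store + smp_mb__before_spinlock() */
	spin_lock(ptl);			/* ACQUIRE */
	change_prot_numa_ptes(vma, start, end);	/* pte stores; must not be
						 * visible before the flag */
	spin_unlock(ptl);
	flush_tlb_range(vma, start, end);
	clear_tlb_flush_pending(mm);	/* the TLB flush provides the barrier */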

A double check would be nice, please.

--
Mel Gorman
SUSE Labs