Re: [PATCH 05/36] HMM: introduce heterogeneous memory management v3.

From: Jerome Glisse
Date: Mon Jun 15 2015 - 10:32:38 EST


On Thu, Jun 11, 2015 at 03:26:46PM -0700, Mark Hairgrove wrote:
> On Thu, 11 Jun 2015, Jerome Glisse wrote:
> > On Wed, Jun 10, 2015 at 06:15:08PM -0700, Mark Hairgrove wrote:

[...]
> > Ok, I see the race you are afraid of, and really it is an unlikely one.
> > __mutex_unlock_common_slowpath() takes a spinlock right after allowing
> > others to take the mutex. In your scenario there is no contention on
> > that spinlock, so it is taken right away, and as there is no one on the
> > mutex wait list it goes directly to unlocking the spinlock and returns.
> > You can ignore the debug function: if debugging is enabled then
> > mutex_lock() would also need to take the spinlock, and thus you would
> > have proper synchronization between the two threads thanks to
> > mutex.wait_lock.
> >
> > So basically, while CPU1 is doing:
> >
> >     spin_lock(&mutex.wait_lock);
> >     if (!list_empty(&mutex.wait_list)) {
> >         /* wait_list is empty, so the branch is not taken */
> >     }
> >     spin_unlock(&mutex.wait_lock);
> >
> > CPU2 would have to test the mirror list, call mutex_unlock(), and return
> > before CPU1's spin_unlock(). This is a tight race. I can add a
> > synchronize_rcu() to device_unregister() after the mutex_unlock() so
> > that we also get a grace period before the device is potentially freed,
> > which should make that race extremely unlikely.
> >
> > Moreover, for something really bad to happen, the freed memory would
> > have to be reallocated right away by some other thread, which really
> > sounds unlikely unless CPU1 is the slowest of all :)
> >
> > Cheers,
> > Jérôme
> >
>
> But CPU1 could get preempted between the atomic_set and the
> spin_lock_mutex, and then it doesn't matter whether or not a grace period
> has elapsed before CPU2 proceeds.
>
> Making race conditions less likely just makes them harder to pinpoint when
> they inevitably appear in the wild. I don't think it makes sense to spend
> any effort in making a race condition less likely, and that thread I
> referenced (https://lkml.org/lkml/2013/12/2/997) is fairly strong evidence
> that fixing this race actually matters. So, I think this race condition
> really needs to be fixed.
>
> One fix is for hmm_mirror_unregister to wait for hmm_notifier_release
> completion between hmm_mirror_kill and hmm_mirror_unref. It can do this by
> calling synchronize_srcu() on the mmu_notifier's srcu. This has the
> benefit that the driver is guaranteed not to get the "mm is dead" callback
> after hmm_mirror_unregister returns.
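
Something along those lines should work. A rough sketch of that ordering
(exact signatures aside; hmm->srcu here just stands for whichever srcu
ends up serializing the callbacks, and the kill/unref helpers keep the
names they have in the patch):

void hmm_mirror_unregister(struct hmm_mirror *mirror)
{
	struct hmm *hmm = mirror->hmm;

	/* Unhook the mirror so that no new callback can start on it. */
	hmm_mirror_kill(mirror);

	/*
	 * Wait for any hmm_notifier_release() that is already running
	 * to finish; after this the driver is guaranteed to never see
	 * another callback for this mirror.
	 */
	synchronize_srcu(&hmm->srcu);

	/* Drop the reference; the driver may now free the mirror. */
	hmm_mirror_unref(mirror);
}
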
>
> In fact, are there any callbacks on the mirror that can arrive after
> hmm_mirror_unregister? If so, how will hmm_device_unregister handle them?
>
> From a general standpoint, hmm_device_unregister must perform some kind of
> synchronization to be sure that all mirrors are completely released and
> done and no new callbacks will trigger. Since that has to be true, can't
> that synchronization be moved into hmm_mirror_unregister instead?
>
> If that happens there's no need for a "mirror can be freed" ->release
> callback at all because the driver is guaranteed that a mirror is done
> after hmm_mirror_unregister.

Well, there is no need for 2 callbacks (release|stop, free), just one:
the release|stop callback is the one that is needed. I kind of went
halfway on this last week. I will probably rework it a little to keep
just one callback and rely on the driver to call hmm_mirror_unregister().
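
On the driver side that would look something like the sketch below. Every
my_* name is made up purely for illustration (only hmm_mirror_unregister()
and struct hmm_mirror come from the patchset), and teardown is punted to a
work item so the driver never calls the possibly sleeping unregister path
from inside the callback itself:

#include <linux/hmm.h>		/* from the patchset */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct my_mirror {
	struct hmm_mirror mirror;	/* embedded HMM mirror */
	struct work_struct teardown;	/* INIT_WORK()ed at registration */
};

/* The single release|stop callback: stop using the mm and kick teardown. */
static void my_mirror_release(struct hmm_mirror *mirror)
{
	struct my_mirror *m = container_of(mirror, struct my_mirror, mirror);

	my_device_stop_using_mm(m);	/* made-up, device specific */
	schedule_work(&m->teardown);
}

static void my_mirror_teardown(struct work_struct *work)
{
	struct my_mirror *m = container_of(work, struct my_mirror, teardown);

	/* The driver, not HMM, is responsible for unregistering. */
	hmm_mirror_unregister(&m->mirror);
	kfree(m);
}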

Cheers,
Jérôme