Re: [RFC 4/4]x86: avoid tlbstate lock if no enough cpus

From: Eric Dumazet
Date: Wed Nov 03 2010 - 03:31:38 EST


On Wednesday 03 November 2010 at 08:25 +0100, Eric Dumazet wrote:
> On Wednesday 03 November 2010 at 15:19 +0800, Shaohua Li wrote:
> > On Wed, 2010-11-03 at 15:12 +0800, Eric Dumazet wrote:
> > > On Wednesday 03 November 2010 at 15:06 +0800, Shaohua Li wrote:
> > > > just don't want to include the non-present cpus here. I wonder why we
> > > > don't have a variable to record the number of online cpus.
> > >
> > > What prevents a 256-cpu machine from having 8 online cpus that all use
> > > the same TLB vector?
> > >
> > > (Max 32 vectors, so 8 cpus share each vector, settled at boot time)
> > >
> > > Forget about 'online', and think 'possible' ;)
> > Hmm, the patch that spreads the vectors across nodes is already merged,
> > so how could the 8 cpus share a vector?
> >
>
> You boot a machine with 256 cpu.
>
> They are all online and working fine.
>
> Each vector is shared by at least 8 cpus, because 256/32 = 8. OK?
>
> Now you take 248 of the cpus (256 - 8) offline, because you have HOTPLUG
> capability in your kernel and some policy to bring them up later if needed.
>
> What happens? Do you rebalance TLB vectors to make sure each cpu has
> its own vector?
>
>

It seems you already do this, since commit 932967202182743, in
calculate_tlb_offset().
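
Something like the following user-space sketch shows the idea (untested; the
constants and the per-cpu offset table only mimic the kernel's names, this is
not the actual calculate_tlb_offset() code): give each online node a slice of
the 32 flush vectors and deal its online cpus out round-robin over that slice,
so that with only a few cpus online each one ends up on its own vector.

#include <stdio.h>

#define NUM_INVALIDATE_TLB_VECTORS 32
#define NR_CPUS 256

/* Hypothetical stand-ins for the kernel's topology data. */
static int cpu_online[NR_CPUS];
static int cpu_node[NR_CPUS];
static int tlb_vector_offset[NR_CPUS];
static int nr_online_nodes = 2;

/* Slice the vectors per node, then round-robin each node's online cpus. */
static void recalc_tlb_offsets(void)
{
	int nr_node_vecs = NUM_INVALIDATE_TLB_VECTORS / nr_online_nodes;
	int node, cpu;

	if (nr_node_vecs == 0)
		nr_node_vecs = 1;

	for (node = 0; node < nr_online_nodes; node++) {
		int node_offset = (node % NUM_INVALIDATE_TLB_VECTORS) * nr_node_vecs;
		int cpu_offset = 0;

		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			if (!cpu_online[cpu] || cpu_node[cpu] != node)
				continue;
			tlb_vector_offset[cpu] = node_offset + cpu_offset;
			cpu_offset = (cpu_offset + 1) % nr_node_vecs;
		}
	}
}

int main(void)
{
	int cpu;

	/* 256 possible cpus, but only 8 online: 4 per node on 2 nodes. */
	for (cpu = 0; cpu < 8; cpu++) {
		cpu_online[cpu] = 1;
		cpu_node[cpu] = cpu / 4;
	}

	recalc_tlb_offsets();
	for (cpu = 0; cpu < 8; cpu++)
		printf("cpu %d -> tlb vector offset %d\n",
		       cpu, tlb_vector_offset[cpu]);
	return 0;
}

With 8 online cpus spread over 2 nodes, each cpu above prints a distinct
vector offset, i.e. no vector is shared.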

So just add some logic to this function to tell whether each TLB vector is
used by one cpu or by several.

We can avoid the lock for each TLB vector used by exactly one cpu.
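
A minimal sketch of that check, again in user space with made-up names
(update_vector_exclusive(), vector_exclusive[], flush_tlb_others_sketch();
the real lock is the tlbstate lock from the subject line): whenever the
offsets are recomputed, count how many online cpus landed on each vector,
and let the flush path skip the lock when its vector has a single user.

#include <stdbool.h>
#include <stdio.h>

#define NUM_INVALIDATE_TLB_VECTORS 32
#define NR_CPUS 256

/* Hypothetical mirrors of the kernel's per-cpu state. */
static int cpu_online[NR_CPUS];
static int tlb_vector_offset[NR_CPUS];

/* One flag per flush vector: true when exactly one cpu uses it. */
static bool vector_exclusive[NUM_INVALIDATE_TLB_VECTORS];

/* Recompute the flags whenever the offsets change (boot, cpu hotplug). */
static void update_vector_exclusive(void)
{
	int users[NUM_INVALIDATE_TLB_VECTORS] = { 0 };
	int cpu, vec;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_online[cpu])
			users[tlb_vector_offset[cpu]]++;

	for (vec = 0; vec < NUM_INVALIDATE_TLB_VECTORS; vec++)
		vector_exclusive[vec] = (users[vec] == 1);
}

/* Sketch of the sender side of the flush IPI path. */
static void flush_tlb_others_sketch(int sender_cpu)
{
	int vec = tlb_vector_offset[sender_cpu];

	if (vector_exclusive[vec])
		printf("cpu %d: vector %d has one user, lock-free flush\n",
		       sender_cpu, vec);
	else
		printf("cpu %d: vector %d is shared, take the tlbstate lock\n",
		       sender_cpu, vec);
	/* ... fill in the flush state and send the IPI as usual ... */
}

int main(void)
{
	/* Two online cpus on their own vectors, two sharing vector 5. */
	cpu_online[0] = cpu_online[1] = cpu_online[2] = cpu_online[3] = 1;
	tlb_vector_offset[0] = 0;
	tlb_vector_offset[1] = 1;
	tlb_vector_offset[2] = 5;
	tlb_vector_offset[3] = 5;

	update_vector_exclusive();
	flush_tlb_others_sketch(0);	/* lock-free */
	flush_tlb_others_sketch(2);	/* needs the lock */
	return 0;
}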



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/