Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

From: Raghavendra K T
Date: Fri Sep 21 2012 - 13:40:34 EST


On 09/21/2012 06:48 PM, Chegu Vinod wrote:
> On 9/21/2012 4:59 AM, Raghavendra K T wrote:
>> In some special scenarios like #vcpu <= #pcpu, the PLE handler may
>> prove very costly,

> Yes.
>> because there is no need to iterate over vcpus
>> and do unsuccessful yield_to, burning CPU.
>>
>> An idea to solve this is:
>> 1) As Avi had proposed, we can modify the hardware ple_window
>> dynamically to avoid frequent PLE exits.

> Yes. We had to do this to get around some scaling issues for large
> (>20-way) guests (with no overcommitment).

Do you mean you already have some solution tested for this?


> As part of some experimentation we even tried "switching off" PLE too :(


Honestly, your experiment and Andrew Theurer's observations were the
motivation for this patch.
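
To make the cost concrete for anyone skimming the thread, below is a
rough sketch of the directed-yield walk a PLE exit triggers. It is
loosely modelled on kvm_vcpu_on_spin() but heavily simplified; the
function name and the omission of the last_boosted_vcpu bookkeeping
are mine, not the real code.

#include <linux/kvm_host.h>

/*
 * Simplified sketch: walk the guest's vcpus and try a directed yield.
 * When #vcpu <= #pcpu no candidate is preempted, so every
 * kvm_vcpu_yield_to() attempt fails and the whole walk is wasted work.
 */
static void ple_handler_sketch(struct kvm_vcpu *me)
{
        struct kvm *kvm = me->kvm;
        struct kvm_vcpu *vcpu;
        int i;

        kvm_for_each_vcpu(i, vcpu, kvm) {
                if (vcpu == me)
                        continue;
                if (kvm_vcpu_yield_to(vcpu) > 0)
                        return; /* boosted a preempted vcpu, done */
                /* otherwise nothing to boost; keep burning cycles */
        }
}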



>> (IMHO, it is difficult to
>> decide when we have mixed types of VMs).

> Agree.

> Not sure if the following alternatives have also been looked at:
>
> - Could the behavior associated with the "ple_window" be modified to be
> a function of some [new] per-guest attribute (which can be conveyed to
> the host as part of the guest launch sequence)? The user can choose to
> set this [new] attribute for a given guest. This would help avoid the
> frequent exits due to PLE (as Avi had mentioned earlier).

Cc'ing Drew also. We had a good discussion on this idea last time.
(Sorry that I forgot to include him in the patch series.)

Maybe a good idea when we know the load in advance.
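
Just to make that alternative concrete, I would imagine something along
these lines. This is entirely hypothetical: the struct, field and
function below are invented for the sketch, and the real question is
how the value gets from the guest launch sequence down to the host.
Only PLE_WINDOW and vmcs_write32() are the existing VMCS field and
accessor (vmx.c context assumed).

/* Hypothetical per-guest attribute; nothing like this exists today. */
struct kvm_vm_ple_attr {
        unsigned int ple_window;   /* chosen by the user at guest launch */
};

static void vmx_program_guest_ple_window(struct kvm_vcpu *vcpu,
                                         const struct kvm_vm_ple_attr *attr)
{
        /* Per-guest behaviour: write the guest's own value instead of
         * the global ple_window module parameter. */
        vmcs_write32(PLE_WINDOW, attr->ple_window);
}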


> - Can the PLE feature (in VT) be "enhanced" to be made a per-guest
> attribute?


> IMHO, the approach of not taking a frequent exit is better than taking
> an exit and returning from the handler, etc.

I entirely agree on this point (though I have not tried the above
approaches). Hope to see more expert opinions pouring in.
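
For completeness, the "modify ple_window dynamically" direction might
look roughly like the below. This is only a back-of-the-envelope
sketch: the constants and the doubling/halving policy are invented
here, and none of it is code from this RFC series.

/* All constants and the grow/shrink policy below are invented for
 * illustration. */
#define PLE_WINDOW_DEFAULT      4096
#define PLE_WINDOW_MAX          (16 * PLE_WINDOW_DEFAULT)

/* Grow the window when PLE exits keep finding nothing to yield to
 * (undercommit): fewer, later exits. */
static unsigned int grow_ple_window(unsigned int cur)
{
        unsigned int next = cur * 2;

        return next > PLE_WINDOW_MAX ? PLE_WINDOW_MAX : next;
}

/* Shrink it back once directed yields start succeeding (overcommit),
 * so genuinely spinning vcpus get caught early again. */
static unsigned int shrink_ple_window(unsigned int cur)
{
        unsigned int next = cur / 2;

        return next < PLE_WINDOW_DEFAULT ? PLE_WINDOW_DEFAULT : next;
}

The interesting policy question would be what signal drives grow versus
shrink (unproductive exits versus successful directed yields).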

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/