Re: [RFC] Unify KVM kernel-space and user-space code into a single project

From: Ingo Molnar
Date: Mon Mar 22 2010 - 07:40:00 EST



* Avi Kivity <avi@xxxxxxxxxx> wrote:

> On 03/21/2010 10:37 PM, Ingo Molnar wrote:
> >
> >>That includes the guest kernel. If you can deploy a new kernel in the
> >>guest, presumably you can deploy a userspace package.
> >
> > Note that with perf we can instrument the guest with zero guest-kernel
> > modifications as well.
> >
> > We try to reduce the guest impact to a bare minimum, as the difficulties
> > in deployment are a function of the cross-section surface to the guest.
> >
> > Also, note that the kernel is special with regards to instrumentation:
> > since this is the kernel project, we are doing kernel-space changes, as we
> > are doing them _anyway_. So adding symbol resolution capabilities would be
> > a minimal addition to that - while adding a whole new guest package for
> > the daemon would significantly increase the cross-section surface.
>
> It's true that for us, changing the kernel is easier than changing the rest
> of the guest. IMO we should still resist the temptation to go the easy path
> and do the right thing (I understand we disagree about what the right thing
> is).

It is not about the 'temptation to go the easy path'.

It is about finding the most pragmatic approach and realizing the cost of
inaction: sucky Linux, sucky KVM.

Let me give you an example: Linus's commit in v2.6.30 that changed the
user-space policy of the EXT3 filesystem to make it more desktop capable:

bbae8bc: ext3: make default data ordering mode configurable

That change was vehemently opposed with your kind of arguments: "such changes
should be done by the distributions", "it should be done correctly", "the
kernel should not implement policy", etc.

I can also tell you that this commit improved my desktop experience
enormously. Still, distros didn't do it for almost a decade of ext3's
existence. Why?

The truth is that those kinds of "do it right" arguments are mistaken because
they assume that we live in an ideal, 'perfect market' where all
inefficiencies get eliminated in the long run.

In reality the "market" for OSS software is imperfect:

- there are marginal costs of action - a change that is too small has
difficulty clearing that threshold

- there are costs of modularization (which are both technical and social)

- there's the power of the status quo acting against marginally good changes

- there's the power of entropy ripping Linux distributions apart, making
all-distro changes harder

So the answer to the "why don't the distributions do this" question you pose
is exactly what I propose: _give a default, reference implementation of KVM
tooling that has to be eclipsed_.

The kernel is in the unique position of being able to impose sanity in a more
central way, by acting as a reference implementation.

I.e. the kernel can very much improve quality all across the board by
providing a sane default (as in the ext3 case) - or, as in the case of perf,
by providing sane 'baseline' tooling. It should do the same for KVM as well.

If we don't do that, Linux will eventually stop mattering on the desktop - and
some time after that, it will vanish from the server space as well. Then, be
it a decade down the line, you won't have a KVM hacking job left, and you
won't know where all those forces eliminating your project came from.

But I told you now, so you'll know ;-)

The reality is that the server space never was and never will be
self-sustaining in the long run (as Novell found out with NetWare); it is the
desktop that dictates future markets. This is why I find your views on this
naive and shortsighted.

Thanks,

Ingo