Re: Kernel interface changes (was Re: cdrecord problems on

Khimenko Victor (khim@sch57.msk.ru)
Fri, 5 Feb 1999 04:13:46 +0300 (EET)


On Thu, 4 Feb 1999, Arvind Sankar wrote:

> On Thu, Feb 04, 1999 at 09:27:55PM +0300, Khimenko Victor wrote:
> > In <19990204091224.A31267@anjala.mit.edu> Arvind Sankar (arvinds@mit.edu) wrote:
> > > On Thu, Feb 04, 1999 at 09:07:27AM -0700, yodaiken@chelm.cs.nmt.edu wrote:
> > >> > Unfortunately, I think it is a problem you have to take up and deal with.
> > >> > Recompiling sources for entire server setups in a live production
> > >>
> > >> So you use the 2.0 version until a 2.2 version stabilizes. The problem
> > >> really is that the Linux unreliable development kernel is so good that
> > >> people actually want to run production systems on it, and then complain when
> > >> it does not stay stable.
> >
> > > 2.2 is supposed to _be_ stable, not gradually stabilize. That's what 2.1/2.3 are
> > > for.
> >
> > Unfortunately, it's not possible. Joe Average will not even try a "development"
> > kernel, and some errors cannot be found without A LOT OF users.
>
> If it's not possible, why do we have a piece of docs that claims there is any
> difference between 2.1/2.2? For that matter, why was 2.2.0 not 2.1.133? Besides,
> the idea of debugging code by running it until it dies is inherently flawed.
> [The old adage: if architects built buildings like programmers write programs, the
> first woodpecker that came along would destroy civilization. Fortunately, that's not
> yet true :-)]
>
Still, buildings do crash sometimes (especially with earthquakes :-)... Then
new buildings (more "stable" :-) are built. Not much different from software
development :-))

> >
> > >>
> > >> > Besides, binaries are still the best way to get up and running as fast as
> > >> > possible. Waiting to bring up a replacement server because it's still
> > >>
> > >> Really? And what about waiting until the binary-only patches get shipped by
> > >> the vendor? For those of us with experience on binary-only systems, this
> > >> complaint is completely mysterious.
> >
> > > Tell that to redhat/debian or whoever.
> >
> > Huh. And why do redhat/debian struggle so hard for "reproducible builds"
> > (unlike Slackware :-)? This way it's trivial to recreate a binary package
> > when needed with just `rpm --rebuild` (or whatever the Debian equivalent is).
> > Of course it's not the only goal, but `rpm --rebuild` is there for a purpose!
>
> I was pointing out that the vast majority of your Joe Averages actually use
> binary distributions built for them by somebody else. They have neither the
> knowledge nor the inclination to build their own. That is how redhat makes
> money. Besides, rpm --rebuild sucks: you can't split it up intelligently into
> little stages. For example, ncurses-4.2 does not rebuild cleanly after installing
> glibc-2.1. You have to be more intelligent about it. Which is the problem. You
> can't expect everybody to keep figuring out what went wrong where after they
> rebuilt their supposedly stable kernel. If somebody finds that fun, they run
> the development kernels.
>
> [ The ncurses case was just a random example. I don't blame anybody for it. I fully
> realize that glibc-2.1 is still a development version. ]
>
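Most of the time, though, the rebuild really is one command. A minimal
sketch (the package version here is made up):

    # Recreate a binary package from its source RPM against the
    # currently installed libraries; the version number is hypothetical.
    rpm --rebuild ncurses-4.2-1.src.rpm

When that breaks after a big library upgrade, it is a packaging bug to
report and fix, not a reason to give up on source packages :-)
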
> >
> > Looks like a few overlooked problems in 2.0.x kernels started all this fuss :-((
> > Even NT, where binary compatibility is a fetish, does it (after all, NT 4.0 and
> > NT 4.0 SP3 are not 100% binary-compatible!), so why do you think Linux is an exception?
>
> Look, I don't run NT because it sucks. I don't want linux descending to that level.
>
And WHY does NT suck -- that is the question. Trying to achieve binary
compatibility is not the only reason for this, but it is one of the major
ones: "we should use the Win32 API, which is a total mess, but it will help
migration from the Win16 API to the Win32 API; we should support DDE, and
thus any program can bring the computer down; etc, etc."

> >
> > Either they will outsource "stuff like this", or the stuff will be thrown out
> > in the dust and an OSS replacement will be written eventually (even if that
> > does not happen this year or next year :-). We have a simple rule here: "no
> > software in kernel mode or running as root on servers without available
> > sources", and I think this trend will keep expanding in the future. [VERY]
> > Slowly, but inevitably...
>
> Duh? The MIT Athena project has been around since the '80s. It's not going to go
> away anytime soon.

Hm. 15-20 years. OK. Maybe this project will be thrown out in the dust not in
2-3 years but in 15-20 years :-))

> Why should it? It's far easier to dump linux in favour of netbsd,
> which is also `OSS', as you put it.

If MIT drops linux, it will be a problem for linux for a short time (a few
years), but it will be a bigger problem for MIT in the future.

> Besides, what would you do with all those sources?

What will *I* do with the sources? Nothing, perhaps. But it's far better to
use something which is peer-reviewed by quite a few independent groups :-)

> You have any idea how much time it takes to rebuild a server?

No. But why should you do this in a hurry?

> Or worse, try to figure out which bits need rebuilding and which don't?

Just rebuild everything if you are unsure...
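
And if you want to be selective, it's not hard to find the binaries that
still link against an old library. A rough sketch (the soname and the
directory here are just examples):

    # Print binaries in /usr/bin that still link against the old libc;
    # anything this prints needs a rebuild, the rest can be left alone.
    for f in /usr/bin/*; do
        ldd "$f" 2>/dev/null | grep -q 'libc.so.5' && echo "$f"
    done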

> And in an environment where the maximum downtime acceptable is 0?
>
Hm. How you could switch kernels at all in such an environment is not clear to me.

> >
> > Why do you think so? IMO "no drivers" or "no AFS support" or "no USB support"
> > is WAY better than "binary-only drivers", "AFS support via a binary module",
> > and "USB with a precompiled kernel blob".
> >
> >
>
> Duh? You seriously believe that? Nothing is better than something?
>
Yes. If there were no OSS (the sound system, not open source here :-), we
would already have ALSA (or something similar) incorporated in the kernel
and well polished. What do you prefer: not to use a car at all, or to use a
car without brakes?

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/