Re: Kernel interface change

Lin Zhe Min (ljm@marx.ljm.wownet.net)
Fri, 5 Feb 1999 02:44:59 +0000 (GMT)


I feel somewhat guilty sending a message w/ so many unnecessary carbon
copies... That's somewhat like spamming. :Q

On Thu, 4 Feb 1999, Arvind Sankar wrote:

> if it's not possible, why do we have a piece of docs that claims there is any
> difference between 2.1/2.2. For that matter, why was 2.2.0 not 2.1.133? Besides,
> the idea of debugging code by running it till it dies is inherently flawed.

However, we just can't catch most `invisible' bugs through proper software
engineering alone (or we would no longer need debuggers. :-) That's why
Linux got its motto of 'release early, release often'. If you had read
``The Cathedral and the Bazaar'', you would not have asked such a
question. :-) It can be found at
http://www.tuxedo.org/~esr/writings/cathedral-bazaar

I don't think binary compatibility is _so_ important, even during a stable
release series. (Since I'm an OSS fan... :) What matters more during a
stable period is _source-level compatibility_. If you can recompile a
program w/o any workaround, that's okay. Building is never hard (though we
don't have anything as convenient as "make world". Would the authors of
autoconf take this into consideration?) But to me, hacking code to get it
to compile against a newer release is much more tiresome. I keep each patch
and the build script used to compile each programme on CD, so that I can
easily rebuild a whole system after a probable disaster... :-(
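
Something like the rough Python sketch below is what I mean. The layout it
assumes (one directory per programme on the CD, holding source.tar.gz,
numbered *.patch files and a build.sh) is just how I imagine it for this
illustration, not any standard.

#!/usr/bin/env python3
# Hypothetical rebuild helper: for every programme directory saved on the
# CD, unpack the pristine source, apply the saved patches in order, then
# run the build script that was recorded when the programme was first
# built. The directory layout and file names are assumptions for the
# sake of the example.
import subprocess
import sys
from pathlib import Path

ARCHIVE = Path(sys.argv[1] if len(sys.argv) > 1 else "/mnt/cdrom/rebuild")

def rebuild(pkg_dir: Path) -> None:
    """Unpack source.tar.gz, apply *.patch in sorted order, run build.sh."""
    work = Path("/tmp/rebuild") / pkg_dir.name
    work.mkdir(parents=True, exist_ok=True)

    # Unpack the pristine source kept alongside the patches.
    subprocess.run(
        ["tar", "xzf", str(pkg_dir / "source.tar.gz"), "-C", str(work),
         "--strip-components=1"],
        check=True,
    )

    # Apply each saved patch in numbered order.
    for patch_file in sorted(pkg_dir.glob("*.patch")):
        with patch_file.open("rb") as fh:
            subprocess.run(["patch", "-p1"], stdin=fh, cwd=work, check=True)

    # Run the build script recorded when the programme was first built.
    subprocess.run(["sh", str(pkg_dir / "build.sh")], cwd=work, check=True)

if __name__ == "__main__":
    for pkg in sorted(p for p in ARCHIVE.iterdir() if p.is_dir()):
        print(f"rebuilding {pkg.name} ...")
        rebuild(pkg)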

And I don't need any Red Hat workaround... It's never very pure. They
modify things for their `release features' (by enlarging some buffer
sizes, etc.)

What pushes users to run a more recent development kernel? Just new
features or newly supported hardware? No.. Some do it for fun, and some
because software they use works better (or has more functionality? :Q)
with a newer release. That's what keeps the wheel turning. If everything
had to stay binary compatible, I think the kernel development cycle (and
other OSS circles) would be slowed down much more.

> Besides, what would you do with all those sources? You have any idea how much time
> it takes to rebuild a server? Or worse, try to figure out which bits need rebuilding
> and which don't? And in an environment where the maximum downtime acceptable is 0?

I don't have any idea, even though I've built most pieces of my Linux
system manually. Simply rebuilding it is easy, since I kept plenty of logs
when I first tried to _build_ it. Building it from zero is really tough.. :-)

I don't care about environments that need immediate system installation.
First, think about Linux's position. I don't think it's going to be an
(ugly and evil) replacement for darn Micro$oft Windows.
And if you need an immediate and ugly installation, why not try Red Hat?
And if you need to fine-tune a Linux box, never expect to do it
in a very short time. You can't do that with an NT server either.
If you're not a system administrator, and your company needs some shiny
services provided by Linux, just hire a system administrator! Don't expect
to get something for free, or at the expense of most users and developers.

.e'osai ko sarji la lojban. ==> Please support Lojban, the logical language.
co'o mi'e lindjy,min. ==> Goodbye, I am Lin Zhe Min.
Fingerprint20 = CE32 D237 02C0 FE31 FEA9 B858 DE8F AE2D D810 F2D9
