Re: capabilities in elf headers, (my) final (and shortest) iteration

Riley Williams (rhw@BigFoot.Com)
Sun, 18 Apr 1999 00:21:09 +0100 (GMT)


Hi Horst.

>>>>>> 4) Include in the capabilities header a field holding
>>>>>> the timestamp of the last capabilities validation.

>>> Validation means checking against something.

>> Nodz...

>>> If this something resides in the same file, it can be doctored
>>> if the file can be somehow modified. And you explained how
>>> above.

> If the file just holds the capability-validation timestamp, I'd
> just need to make that later than the time the system is booted.

In which case, it would still FAIL the test - didn't you notice that I
very specifically stated that the test was for EQUALITY with the time
of last boot?

> The kernel won't be any wiser.

The kernel will still know that it's seeing the WRONG timestamp, so
will revalidate as required...

> And it also means that my executable checked yesterday won't run
> today because the system rebooted. Or the system checks _all_
> files for capability timestamps on boot.

Neither is required. Each binary gets checked ONCE after EVERY reboot,
which means the checking will have MINIMAL effect on performance.
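
To make that concrete, here's a rough sketch of the sort of check I
have in mind at exec() time. The names below (elf_cap_header,
boot_time, revalidate_capabilities) are purely illustrative - they are
NOT existing kernel structures or functions:

    /* Sketch only: illustrative names, not real kernel interfaces. */

    #include <time.h>

    struct elf_cap_header {
            time_t cap_validated;  /* time of last capability validation */
            /* ... the capability bits themselves would follow ... */
    };

    extern time_t boot_time;       /* recorded once, at boot */
    extern void revalidate_capabilities(struct elf_cap_header *hdr);

    void check_caps_at_exec(struct elf_cap_header *hdr)
    {
            /* The test is for EQUALITY with the time of last boot, so
             * a doctored timestamp that is merely "later than boot"
             * still fails and forces a fresh validation. */
            if (hdr->cap_validated != boot_time) {
                    revalidate_capabilities(hdr);
                    hdr->cap_validated = boot_time;
            }
    }

Since the stamped value then matches boot_time, each binary is checked
at most ONCE per boot, and only when somebody actually executes it.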

> If you want to validate, you have to have a _trusted_ source for
> the data against which to check.

Are you saying that the time of last reboot is untrusted? If so, then
there's nothing that can be trusted on your system...

> The file itself might forge whatever you check somehow, so it's
> out.

Please provide a SENSIBLE method by which the file can forge the EXACT
time that the system was last rebooted. So far, you've only managed to
produce a series of examples that all fail the test...

> Only possibility remaining is a fully kernel-controlled data
> structure: Either the filesystem or a kernel-internal structure
> holding the capabilities and the open files in the VMS manner.

>>>>> OK, so capabilities are useful only between reboots now, and
>>>>> have to be set again each time?

>>>> Can you suggest a better way?

>>> The above way just won't work, and no similar strategy will.

>> Any method that does NOT include some form of validation after
>> each boot is inherently insecure, and you can argue all you like
>> against including security, but you stand little chance of having
>> your arguments accepted.

> e2fsck(8), validation in mount(8).

>>> The whole idea of capabilities in the file itself is broken.

>> If true, then the idea of having information on how to load an
>> executable in the file itself is also broken. After all, that's
>> a capability in itself - the capability to execute the file. If
>> the kernel doesn't recognise the header format of the file, it
>> can't execute it...

> Yep. That capability is called "x" bit in Unix.

WRONG !!!

The 'x' bit only says that the file should be regarded as executable,
and provides ZERO information on how to load the executable when the
user asks to run it.

If you don't believe me, try the following:

1. Find a web page on your system. Any web page.

2. Set the 'x' bit on it.

3. Type in the path and name for the relevant file.

4. Watch Linux REFUSE to execute it because it doesn't know how to,
despite the fact that it has the relevant 'x' bit set.

I've just tried that on this system, and it royally failed to 'run',
contrary to your claim...
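
If you'd rather see the kernel's verdict directly (a shell may fall
back to feeding the file to /bin/sh when the exec fails), a few lines
of C show the same thing. The path below is only an example -
substitute any web page you've set the 'x' bit on:

    /* exec() a chmod +x'd file that is in no format the kernel
     * recognises.  Expect execv() to fail with ENOEXEC, 'x' bit or
     * not.  The path is an example only. */

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            char *argv[] = { "index.html", NULL };

            execv("/var/www/index.html", argv);

            /* execv() only returns on failure */
            printf("execv failed: %s\n", strerror(errno));
            return 1;
    }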

>> Remember, the ONLY time ANY capability information needs to be
>> checked is when the system has just been asked to execute the
>> file in question as otherwise it's irrelevant. As a result, any
>> arguments against reading the contents of the file are
>> non-starters as the file will be read as part of executing it,
>> just to find out what format it's in.

> No. You need to check it _each_ time the system boots with your
> scheme. For _all_ files. Or keep a list of capable files around
> (i.e., VMS).

What on earth for? Why on earth should a file that's never used
between boots be checked to ensure that it can run? I've certainly
never proposed that...

>>> ...and works only for some types of files (how about a webserver
>>> written in Perl?).

>> That's up to the PERL interpreter to handle, not the kernel.
>> After all, the kernel ignores s[gu]id on scripts already, and
>> the only reason s[gu]id works on PERL scripts is because the
>> PERL interpreter handles the relevant capabilities if it sees
>> the s[gu]id flag set on the script it's running.

> Great. "Any Perl script that states `I've got capability A' is
> now capable"?! You _seriously_ suggest trusting the Perl (and sh,
> and ...) interpreters?

I don't suggest anything - YOU are the one making that suggestion. Can
I suggest you start by checking how s[gu]id perl scripts CURRENTLY
work before accusing me of things I have neither done nor said...
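
For what it's worth, the principle such an interpreter has to follow
can be sketched in a few lines of C. This is only an illustration of
the idea - in perl's case a separate privileged helper (suidperl) does
the real work - and it assumes the interpreter itself has enough
privilege to change its effective IDs:

    /* Sketch: the kernel started us with the caller's IDs, so if the
     * script carries s[gu]id bits it's up to the interpreter to
     * honour them.  Illustration only, not perl's implementation. */

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void honour_script_ids(const char *script)
    {
            struct stat st;

            if (stat(script, &st) != 0) {
                    perror("stat");
                    exit(1);
            }

            if (st.st_mode & S_ISGID)       /* set-gid script */
                    if (setegid(st.st_gid) != 0)
                            perror("setegid");

            if (st.st_mode & S_ISUID)       /* set-uid script */
                    if (seteuid(st.st_uid) != 0)
                            perror("seteuid");
    }

    int main(int argc, char **argv)
    {
            if (argc > 1)
                    honour_script_ids(argv[1]);
            /* ...the interpreter would then go on to run the script */
            return 0;
    }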

>> Out of curiosity, are you an idealist, a pragmatist or a realist?
>> Based on the above and your earlier posts, I'd have to assume
>> you're an idealist who's unwilling to accept anything other than a
>> perfect world.

> An idealistic pessimist.

No such thing - a pessimist by definition assumes that an idealist is
always wrong...

> The idea of capabilities was designed to be able to build _more_
> secure systems. To be able to do that in practice, you _have_ to
> play by the capability rules, else the whole system is liable to
> be the victim of as yet undiscovered attacks.

Agreed, and that's what everybody except you appears to be trying to
do, so I'm curious as to why you aren't...

> This means capabilities in filesystem or centrally in the kernel
> a la VMS. Building a halfway-capability system just invites
> disaster when the true and the faked get mixed later on.

I don't believe this necessarily follows from the previous comment.
Yes, the kernel has to be involved, but the idea of 'capabilities in
filesystem' does NOT automatically follow - indeed, I'd say that such
was definitely undesirable in most circumstances.

> Any capability system means a lot of rework in the userland, no
> way around it. Any kludge that promises to "fix" that is broken
> by its very objectives.

Who's offered a kludge of any sort? I haven't - all I've offered is a
step on the way there that allows one to explore some possible means
of introducing capabilities into Linux without automatically breaking
anything...

>> As a pragmatist myself, I see the move from s[gu]id to
>> capabilities being of necessity an evolutionary one, starting
>> from the current position of pure s[gu]id and moving through a
>> hybrid scheme such as the above towards a pure capabilities
>> scheme.

> But you have to keep your goal in mind, and be careful not to go
> down a road that doesn't allow you to reach it.

Why do you think I was so careful with my definition of the first step
towards moving to the desired target?

>>> It looks very fragile and complex to me. And all to be able to
>>> continue using current tools that know nothing whatsoever about
>>> special privileges, and so are sure to get it wrong.

>> If one needs SPECIAL tools to handle the BACK-UP of binaries with
>> capabilities, then the capability implementation is by definition
>> broken.

> Other way around: If you can do something like that _without_
> capability aware tools, the system is irrecoverably broken: This
> allows faking and smuggling capabilities.

Nope, that's entirely up to the tools that do the RESTORE stage of the
procedure, and the BACK-UP stage should be completely transparent to
them if it's to have any meaning whatsoever...

>>> Again: Where do we want to go?

>> Preferably to a pure capabilities based system.

> I'm not so sure... I _like_ Unix, and a pure caps system is
> _not_ Unix in any reasonable meaning of the term.

I like the Linux implementation of UNIX, but I do NOT like the version
of UNIX that Tandy supplied under the name XENIX (not to be confused
with IBM's XENIX, which worked differently again).

As far as UNIX in general goes, I've yet to come across anything that
can be even considered a standard when it comes to system security,
other than the idea of having users log in and restrict what they can
do depending on who they are. Every different dialect of UNIX that
I've met does the restricting in a different way to all the others...

>>> And then, How do we get there?

>> Either via a hybrid system or not at all. Take your pick.

> A hybrid scheme that _allows_ us to get there. Or a complete
> rewrite of the userland.

I'm still waiting for you to provide ONE example of why my proposed
step will PREVENT us from getting there.

>>> Broken systems (where old and new kernels run concurrently)
>>> will last a very short time, or none at all.

>> That is an extremely unlikely scenario, considering there are
>> still systems in use running kernels prior to 2.0.0. I suggest you
>> adopt a more realistic stance...

> Show me _one_ system run by a security-conscious sysadmin that
> runs 1.2.x and 2.0.x alternately.

Presumably so you can have a go at cracking them, and then blame it on
me for publicising them. Sorry, I'm not that daft...

> Note that there are _no_ huge security risks involved there!

Not at the moment, no - but if we did capabilities the way YOU want us
to, there would be LOTS of security holes introduced on systems all
over the place, resulting in Linux promptly getting a reputation for
being an extremely insecure system. Sorry, I'm not daft enough to
believe the sort of claims you've been putting out either...

Best wishes from Riley.

+----------------------------------------------------------------------+
| There is something frustrating about the quality and speed of Linux |
| development, ie., the quality is too high and the speed is too high, |
| in other words, I can implement this XXXX feature, but I bet someone |
| else has already done so and is just about to release their patch. |
+----------------------------------------------------------------------+
* ftp://ftp.MemAlpha.cx/pub/rhw/Linux
* http://www.MemAlpha.cx/kernel.versions.html

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/