Re: [GIT PULL] tracing: user_event fix for 6.4

From: Linus Torvalds
Date: Sun Jun 25 2023 - 19:05:00 EST


On Sat, 24 Jun 2023 at 10:51, Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> On Fri, 23 Jun 2023 22:09:59 -0400
> Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> > The code that does the write is basically something being monitored by
> > something else that will tell it to start writing. But that something
> > else could have the buffer disabled for writes. The use case for this
> > is to disable the buffer, enable all the trace events you care about,
> > and then enable the buffer, which is the same as enabling all events
> > at the same time, to get a more coherent trace.
>
> Are you fine with the above explanation?

Honestly, I still think that returning zero is completely crazy.

It's exactly the wrong thing to do.

"write()" returning 0 is only normal if the passed-in length was zero
to begin with, and then it can actually have special meaning (ie
zero-sized writes have historically been used for STREAMS etc to do
EOF signalling to the other side, and/or are used to detect pending
errors).

There is one "return zero" case I know about, and it's the "disk full"
signalling (kind of EOF) for block devices.
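
To make that concrete (an untested sketch of mine, not code from any
real project; the name fill_device is made up): a loop filling a block
device can meaningfully treat a zero return as "device full", because
that is the one case where zero on a non-empty write has a defined
meaning:

/* Untested sketch: fill a block device until it is full.
 * A zero return from a non-zero-length write() is treated as
 * "end of device" - the one situation where returning zero
 * actually means something. */
#include <errno.h>
#include <unistd.h>

ssize_t fill_device(int fd, const char *buf, size_t len)
{
        size_t total = 0;

        while (total < len) {
                ssize_t n = write(fd, buf + total, len - total);

                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        return -1;      /* a real error */
                }
                if (n == 0)
                        break;          /* device full: stop cleanly */
                total += n;
        }
        return total;
}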

If you are reading the man-pages and your eyes landed on this:

"In the absence of errors, or if error detection is not performed,
the write() function shall return zero and have no other results"

I'd like to point out that that is exactly the "pass in a zero length
to write" case. That verbiage is all about "write(fd, buf, 0)"
possibly returning zero _or_ an error if an error was pending (and was
detected).
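
Put differently, here is roughly what that wording allows (a sketch of
my reading of it, nothing more; probe_pending_error is a name I just
made up):

/* Sketch of the POSIX wording above: write(fd, buf, 0) transfers
 * nothing, and either returns 0 ("no other results") or returns -1
 * with errno set if a pending error was detected. */
#include <unistd.h>

int probe_pending_error(int fd)
{
        if (write(fd, "", 0) < 0)
                return -1;      /* a pending error was reported */
        return 0;               /* zero: nothing happened, nothing wrong */
}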

So returning zero from a write is basically insanity. It's not a valid
error case.

And it's a *dangerous* return value, because if you have some write
wrapper that is trying to deal with partial writes (think pipes etc),
I would not be surprised if there are cases of that just looping
forever on a zero return.

In fact, I went to Debian Code Search and looked for "= write(" as a pattern.

The VERY FIRST hit on Debian Code Search was this from LibreOffice:

bool safeWrite(int fd, void* data, std::size_t dataSize)
{
    auto nToWrite = dataSize;
    unsigned char* dataToWrite = static_cast<unsigned char *>(data);

    while ( nToWrite ) {
        auto nWritten = write(fd, dataToWrite, cap_ssize_t(nToWrite));
        if ( nWritten < 0 ) {
            if ( errno == EINTR )
                continue;

            return false;
        }

        assert(nWritten > 0);
        nToWrite -= nWritten;
        dataToWrite += nWritten;
    }

    return true;
}

which loops forever on a zero return if NDEBUG is set (the assert
compiles away entirely), and otherwise prints an error message and
aborts.
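
If you want a wrapper like that to survive a broken zero return, you
have to check for it explicitly. Something like this (a hypothetical
fix of mine, not what LibreOffice actually does):

/* Hypothetical defensive variant: same retry loop, but a zero
 * return from a non-zero-length write is treated as a hard error
 * instead of being asserted away, so the loop can never spin. */
#include <errno.h>
#include <stdbool.h>
#include <unistd.h>

bool safe_write(int fd, const void *data, size_t size)
{
        const unsigned char *p = data;

        while (size > 0) {
                ssize_t n = write(fd, p, size);

                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        return false;   /* real error */
                }
                if (n == 0)
                        return false;   /* bogus zero return: bail out */
                p += n;
                size -= n;
        }
        return true;
}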

See why I think returning zero is such a bad bad idea?

There is NO EXPLANATION that makes it ok. Really. It's a huge bug, and
wrong on all possible levels.

The fact that your explanation is "we have that bug in other places"
does *not* make me any happier.

Linus