Re: SIGIO with si_code==POLL_HUP on read pipe FD with no writers?

From: Jason Vas Dias
Date: Wed Sep 07 2022 - 21:55:07 EST


OK, I raised bug https://bugzilla.kernel.org/show_bug.cgi?id=216458
about this and submitted the patch below, which I have tested by
incorporating it into the Fedora 36 kernel.spec build, updated
to v5.19.7.

Now my test program runs fine and the whole Fedora system & GUI
comes up OK under the patched kernel, so it appears I have not
broken anything so far. I will test more thoroughly tomorrow with
any kernel / POSIX / I/O test suites I can find - can anyone
recommend any?
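
For reference, here is a minimal sketch of the kind of signal-driven
pipe reader this is all about (not my exact test program; the choice
of SIGRTMIN and the layout are just illustrative). The parent owns
the pipe read end with O_ASYNC + F_SETOWN + F_SETSIG and an
SA_SIGINFO handler; the child writes once and then closes the write
end. With the patch, the final SIGIO should arrive with
si_code==POLL_HUP instead of POLL_IN:

/* sigio_pipe_hup.c - build with: gcc -Wall -o sigio_pipe_hup sigio_pipe_hup.c */
#define _GNU_SOURCE		/* for F_SETSIG */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <signal.h>
#include <sys/ioctl.h>
#include <sys/wait.h>

static volatile sig_atomic_t last_code;	/* si_code of the last SIGIO */

static void on_sigio(int sig, siginfo_t *si, void *uctx)
{
	(void)sig; (void)uctx;
	last_code = si->si_code;	/* POLL_IN, POLL_HUP, ... */
}

int main(void)
{
	int pfd[2];
	sigset_t block, old;
	struct sigaction sa = { 0 };

	if (pipe(pfd) < 0) {
		perror("pipe");
		return 1;
	}

	sa.sa_sigaction = on_sigio;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGRTMIN, &sa, NULL);

	/* Keep the signal blocked except while waiting in sigsuspend(). */
	sigemptyset(&block);
	sigaddset(&block, SIGRTMIN);
	sigprocmask(SIG_BLOCK, &block, &old);

	/* Make the read end signal-driven, owned by this process. */
	fcntl(pfd[0], F_SETOWN, getpid());
	fcntl(pfd[0], F_SETSIG, SIGRTMIN);
	fcntl(pfd[0], F_SETFL, fcntl(pfd[0], F_GETFL) | O_ASYNC);

	if (fork() == 0) {		/* child: one write, then hang up */
		close(pfd[0]);
		write(pfd[1], "x", 1);
		sleep(1);
		close(pfd[1]);
		_exit(0);
	}
	close(pfd[1]);			/* parent keeps only the read end */

	for (;;) {
		int sz = 0;
		char buf[16];

		sigsuspend(&old);	/* wait for the next SIGIO */
		ioctl(pfd[0], FIONREAD, &sz);
		printf("si_code=%d FIONREAD=%d\n", (int)last_code, sz);
		if (last_code == POLL_HUP)	/* definitive with the patch */
			break;
		if (sz > 0)
			read(pfd[0], buf, sizeof(buf));
		else			/* unpatched: POLL_IN + 0 bytes = guess */
			break;
	}
	wait(NULL);
	return 0;
}

On an unpatched kernel this loop can only exit via the
POLL_IN-with-0-bytes guess; with the patch it should exit on a
definitive POLL_HUP.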

As far as I can see, the patch does not break any POSIX rules;
rather, it tries to better honor the ones concerning SIGIO &
POLL_HUP as described in the latest Linux & POSIX manual pages.

I can post the RPMs to my Google Drive account & share them
if anyone is interested.

Please review kernel bug 216458 & consider including this patch
in a future kernel release - it works well, causes no harm, and
makes I/O programming with SIGIO & pipes much more robust
and straightforward.

Thanks, Best Regards,
Jason Vas Dias

On 05/09/2022, Jason Vas Dias <jason.vas.dias@xxxxxx> wrote:
>
> Good day -
>
> To the last committer & maintainers of 'linux/fs/pipe.c' :
>
> Why isn't a SIGIO signal with POLL_HUP in 'si_code' raised on a
> pipe read file descriptor that has O_ASYNC set and a signal-owning
> pid established with F_SETOWN{,_EX} and F_SETSIG?
>
> All that happens when the write end of a pipe is
> closed is that a SIGIO gets raised with the
> (siginfo_t *) parameter's si_code set
> to 1 (POLL_IN), after which
>     ioctl(fd, FIONREAD, &sz)
> returns with sz==0 for that fd;
> a read() on that fd would then return 0.
>
> Looking at pipe.c, the situation of no pipe writers is
> detected and revents is set to contain EPOLLHUP ONLY in
> pipe_poll(), not in pipe_read().
>
> pipe_read() (in version 5.19) DOES detect the
> no-writers situation:
>
> fs/pipe.c, @ line 255:
>	for (;;) {
>		/* Read ->head with a barrier vs post_one_notification() */
>		...
> @ line 341:
>		if (!pipe->writers)
>			break;
>		...
>
> It would be quite easy to add, after the pipe_read() loop quits,
> a clause like the one in pipe_poll(), @ line 677:
>
>	mask = 0;
>	if (filp->f_mode & FMODE_READ) {
>		if (!pipe_empty(head, tail))
>			mask |= EPOLLIN | EPOLLRDNORM;
>		if (!pipe->writers && filp->f_version != pipe->w_counter)
>			mask |= EPOLLHUP;
>	}
>
> which does something like:
>
>	if (!pipe->writers)
>		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_HUP);
>
>
> It is not nice to have to GUESS that, just because
>     ioctl(fd, FIONREAD, &sz)
> returns with sz==0 immediately after a POLL_IN event,
> the pipe in fact has no writers, because the
> signal could be blocked when the ioctl call happens.
>
> And if one happens not to attempt the read that would return
> 0 bytes from the pipe, one would never know that no writers
> exist on it, and could pause() forever waiting for a signal.
> And why should I have to put the FD into O_NONBLOCK mode
> (which mine was not in) and attempt a read just to get back
> 0 bytes, when I already know 0 bytes are available to read?
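>
> To illustrate the guesswork, this is roughly what a SIGIO handler
> (installed with SA_SIGINFO, with F_SETSIG set so si_fd is filled in)
> has to do today on every POLL_IN - just a sketch, and
> handle_probable_eof() is a hypothetical placeholder for whatever
> the application does on end-of-file:
>
>	void on_sigio(int sig, siginfo_t *si, void *uctx)
>	{
>		int sz = 0;
>
>		if (si->si_code == POLL_IN) {
>			ioctl(si->si_fd, FIONREAD, &sz);
>			if (sz == 0)
>				/* No data right after POLL_IN: PROBABLY the
>				 * last writer closed, but it could also be a
>				 * delayed or coalesced signal - a guess. */
>				handle_probable_eof(si->si_fd);
>		}
>	}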
>
> OR, maybe in pipe_write(), @ line 595, where it does:
>
>	kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
>
> (which is probably where the FINAL POLL_IN signal originates)
> it could instead do:
>
>	kill_fasync(&pipe->fasync_readers, SIGIO,
>		    ((ret == 0) && (pipe->fasync_writers <= 1))
>		    ? POLL_HUP : POLL_IN);
>
> It seems there are several easy ways to fix this, and I believe
> it would make processes that want to read pipes using SIGIO much
> more robust & simpler to code.
>
> Processes would still be able to rely on read()s returning
> 0 in this case, but please, why can't SIGIO-using processes
> also get a definitive SIGIO with si_code==POLL_HUP, not POLL_IN?
>
> There appears to be similar logic that does send
> a final POLL_HUP SIGIO when the remote write end of
> a readable socket closes - why not for pipes?
>
> And the sigaction manual page states:
> "
> The following values can be placed in si_code for a SIGIO/SIGPOLL
> signal:
>
> POLL_IN
> Data input available.
>
> POLL_OUT
> Output buffers available.
>
> POLL_MSG
> Input message available.
>
> POLL_ERR
> I/O error.
>
> POLL_PRI
> High priority input available.
>
> POLL_HUP
> Device disconnected.
> "
> which suggests that these events should be raised for all devices -
> it does not mention any special cases for pipe file descriptors,
> so readers would reasonably expect a POLL_HUP event to be sent
> on a read end of a pipe with no writers.
>
> Please do something about this -
> or would a patch from me that fixes this be
> likely to be considered?
>
> Thanks for any responses & Best Regards,
> Jason Vas Dias
>

diff -up linux-5.19.7-200.fc36.x86_64/fs/pipe.c.pipe_fd_sigio_poll_hup linux-5.19.7-200.fc36.x86_64/fs/pipe.c
--- linux-5.19.7-200.fc36.x86_64/fs/pipe.c.pipe_fd_sigio_poll_hup 2022-09-05 09:31:36.000000000 +0100
+++ linux-5.19.7-200.fc36.x86_64/fs/pipe.c 2022-09-07 21:09:14.075204459 +0100
@@ -233,7 +233,7 @@ pipe_read(struct kiocb *iocb, struct iov
 	size_t total_len = iov_iter_count(to);
 	struct file *filp = iocb->ki_filp;
 	struct pipe_inode_info *pipe = filp->private_data;
-	bool was_full, wake_next_reader = false;
+	bool was_full, empty, wake_next_reader = false;
 	ssize_t ret;
 
 	/* Null read succeeds. */
@@ -382,10 +382,15 @@ pipe_read(struct kiocb *iocb, struct iov
 		was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage);
 		wake_next_reader = true;
 	}
-	if (pipe_empty(pipe->head, pipe->tail))
+
+	if ((empty = pipe_empty(pipe->head, pipe->tail)))
 		wake_next_reader = false;
+
 	__pipe_unlock(pipe);
 
+	if (empty && pipe->fasync_readers && !pipe->writers)
+		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_HUP);
+
 	if (was_full)
 		wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
 	if (wake_next_reader)
@@ -592,7 +597,15 @@ out:
 	 */
 	if (was_empty || pipe->poll_usage)
 		wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);
-	kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+
+	if (pipe->fasync_readers)
+	{
+		if (ret > 0)
+			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+		else if (!pipe->writers)
+			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_HUP);
+	}
+
 	if (wake_next_writer)
 		wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
 	if (ret > 0 && sb_start_write_trylock(file_inode(filp)->i_sb)) {
@@ -715,6 +728,7 @@ static int
 pipe_release(struct inode *inode, struct file *file)
 {
 	struct pipe_inode_info *pipe = file->private_data;
+	unsigned int head, tail;
 
 	__pipe_lock(pipe);
 	if (file->f_mode & FMODE_READ)
@@ -726,8 +740,16 @@ pipe_release(struct inode *inode, struct
 	if (!pipe->readers != !pipe->writers) {
 		wake_up_interruptible_all(&pipe->rd_wait);
 		wake_up_interruptible_all(&pipe->wr_wait);
-		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
-		kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+		head = READ_ONCE(pipe->head);
+		tail = READ_ONCE(pipe->tail);
+		if (pipe_empty(head, tail) && !pipe->writers)
+			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_HUP);
+		else if (pipe->fasync_readers)
+			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+		if (!pipe->readers && pipe->fasync_writers)
+			kill_fasync(&pipe->fasync_writers, SIGIO, POLL_HUP);
+		else if (pipe->fasync_writers)
+			kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
 	}
 	__pipe_unlock(pipe);