Re: [PATCH] sched: __fatal_signal_pending() should also check PF_EXITING

From: Tycho Andersen
Date: Wed Jul 27 2022 - 14:57:00 EST


On Wed, Jul 27, 2022 at 11:32:08AM -0500, Eric W. Biederman wrote:
> Tycho Andersen <tycho@tycho.pizza> writes:
>
> > Hi all,
> >
> > On Wed, Jul 20, 2022 at 08:54:59PM -0500, Serge E. Hallyn wrote:
> >> Oh - I didn't either - checking the sigkill in shared signals *seems*
> >> legit if they can be put there - but since you posted the new patch I
> >> assumed his reasoning was clear to you. I know Eric's busy, cc:ing Oleg
> >> for his interpretation too.
> >
> > Any thoughts on this?
>
> Having __fatal_signal_pending check SIGKILL in shared signals is
> completely and utterly wrong.
>
> What __fatal_signal_pending reports is if a signal has gone through
> short circuit delivery after determining that the delivery of the
> signal will terminate the process.

This short-circuiting you're talking about happens in __send_signal()?
The problem here is that __send_signal() will put the signal on the
shared queue:

pending = (type != PIDTYPE_PID) ? &t->signal->shared_pending : &t->pending;

and indeed we add it to the shared set because of the way
zap_pid_ns_processes() calls it:

group_send_sig_info(SIGKILL, SEND_SIG_PRIV, task, PIDTYPE_MAX);
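
For reference, __fatal_signal_pending() itself only ever looks at the
private set (quoting include/linux/sched/signal.h):

static inline int __fatal_signal_pending(struct task_struct *p)
{
	return unlikely(sigismember(&p->pending.signal, SIGKILL));
}

so a SIGKILL that lives only in signal->shared_pending is invisible to
it.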

> Using "sigismember(&tsk->pending.signal, SIGKILL)" to report that a
> fatal signal has experienced short circuit delivery is a bit of an
> abuse, but essentially harmless as tkill of SIGKILL to a thread will
> result in every thread in the process experiencing short circuit
> delivery of the fatal SIGKILL. So a pending SIGKILL can't really mean
> anything else.

This is the part I don't follow. If it's ok to send a signal to this
set, why is it not ok to also look there (other than that it was a
slight hack in the first place)? Maybe it will short circuit
more threads, but that seems ok.
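
In other words, something like this (just a sketch of the shape of it,
not necessarily the exact patch):

static inline int __fatal_signal_pending(struct task_struct *p)
{
	/* also consider a SIGKILL queued on the shared set */
	return unlikely(sigismember(&p->pending.signal, SIGKILL) ||
			sigismember(&p->signal->shared_pending.signal,
				    SIGKILL));
}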

> After having looked at the code a little more I can unfortunately also
> say that testing PF_EXITING in __fatal_signal_pending will cause
> kernel_wait4 in zap_pid_ns_processes to not sleep, and instead to return
> 0. Which will cause zap_pid_ns_processes to busy wait. That seems very
> unfortunate.
>
> I hadn't realized it at the time I wrote zap_pid_ns_processes but I
> think anything called from do_exit that cares about signal pending state
> is pretty much broken and needs to be fixed.

> So the question is how do we fix the problem in fuse that shows up
> during a pid namespace exit without having interruptible sleeps we need
> to wake up?
>
> What are the code paths that experience the problem?

[<0>] request_wait_answer+0x282/0x710 [fuse]
[<0>] fuse_simple_request+0x502/0xc10 [fuse]
[<0>] fuse_flush+0x431/0x630 [fuse]
[<0>] filp_close+0x96/0x120
[<0>] put_files_struct+0x15c/0x2c0
[<0>] do_exit+0xa00/0x2450
[<0>] do_group_exit+0xb2/0x2a0
[<0>] get_signal+0x1eed/0x2090
[<0>] arch_do_signal_or_restart+0x89/0x1bc0
[<0>] exit_to_user_mode_prepare+0x11d/0x1b0
[<0>] syscall_exit_to_user_mode+0x19/0x50
[<0>] do_syscall_64+0x50/0x90
[<0>] entry_SYSCALL_64_after_hwframe+0x46/0xb0

is the full call stack; I have a reproducer here (make check will run
it): https://github.com/tych0/kernel-utils/tree/master/fuse2

In addition to fuse, it looks like nfs_file_flush() eventually ends up
in __fatal_signal_pending(), and probably a few others that want to
synchronize with something outside the local kernel.
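
As far as I can tell these paths all come down to killable sleeps, and
the killable machinery decides whether to bail out via
fatal_signal_pending(), which is just (again from
include/linux/sched/signal.h):

static inline int fatal_signal_pending(struct task_struct *p)
{
	return signal_pending(p) && __fatal_signal_pending(p);
}

so a SIGKILL sitting only in shared_pending never ends the wait.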

> Will refactoring zap_pid_ns_processes as I have proposed so that it does
> not use kernel_wait4 help sort this out? AKA make it work something
> like the thread group leader of a process and not allow wait to reap the
> init process of a pid namespace until all of the processes in the pid
> namespace are gone. Now that I see the problem in using kernel_wait4,
> it looks like zap_pid_ns_processes needs to stop calling kernel_wait4
> regardless of the fuse problem.

I can look at this, but I really don't think it will help: in this
brave new world, what wakes up tasks stuck like the above? They're
still looking at the wrong signal set.

Tycho