SA_NODEFER semantics (Linux vs. Solaris)

Elgin Lee (ehl@funghi.com)
Sun, 18 Oct 1998 12:14:52 -0700


A question on signal-handling semantics: Linux seems to handle the
SA_NODEFER sigaction flag differently from Solaris 2.6. Is this a
bug, or an intentional difference in SA_NODEFER semantics?

Specifically, SA_NODEFER causes Solaris to bypass automatic masking of
the signal being handled. However, the sa_mask value is still
observed.

Under Linux, however, SA_NODEFER causes the kernel to bypass masking
of both the signal being handled and the signals specified by
sa_mask. I've tested this under 2.0.35. I'm not (yet) running
development kernels, but a cursory code inspection suggests that
2.1.125 behaves the same way.
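
To make the difference concrete, a test along the following lines
distinguishes the two behaviors (an illustrative sketch; the handler
and messages are just for demonstration):

#include <signal.h>
#include <stdio.h>
#include <string.h>

static void handler(int sig)
{
	sigset_t cur;

	/* Read the current signal mask without changing it.
	 * (printf isn't async-signal-safe, but it's fine for a test.) */
	sigprocmask(SIG_BLOCK, NULL, &cur);
	printf("SIGUSR1 blocked in handler: %s\n",
	       sigismember(&cur, SIGUSR1) ? "yes" : "no");
	printf("SIGUSR2 blocked in handler: %s\n",
	       sigismember(&cur, SIGUSR2) ? "yes" : "no");
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = handler;
	sa.sa_flags = SA_NODEFER;	 /* don't auto-block SIGUSR1... */
	sigemptyset(&sa.sa_mask);
	sigaddset(&sa.sa_mask, SIGUSR2); /* ...but do block SIGUSR2 */
	sigaction(SIGUSR1, &sa, NULL);

	raise(SIGUSR1);
	return 0;
}

Under the SVR4/Solaris semantics the handler should report SIGUSR1
unblocked and SIGUSR2 blocked; under the 2.0.35 behavior described
above, both come back unblocked.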

The Solaris SA_NODEFER semantics are consistent with the SVR4
documentation as well as with the Linux sigaction() man page. So the
Linux semantics differ not only from SVR4 but also from Linux's own
documentation.

I tried a simple-minded 2.0.35-based patch (x86-only, attached below)
that implements the same semantics as Solaris. However, SA_NODEFER is
aliased to SA_NOMASK under Linux--so I'm not sure whether this will
break programs (are there any?) that use SA_NOMASK and expect the
current semantics.
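
For reference, the alias is in include/asm-i386/signal.h (quoting
from memory here, so double-check the exact definitions):

/* SA_NODEFER is only an alias, so the kernel cannot tell old
   SA_NOMASK users apart from new SA_NODEFER users. */
#define SA_NOMASK	0x40000000

#define SA_NODEFER	SA_NOMASK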

--Elgin

--- arch/i386/kernel/signal.c.orig	Sat Oct 17 20:27:09 1998
+++ arch/i386/kernel/signal.c	Sat Oct 17 20:28:22 1998
@@ -251,8 +251,9 @@
 
 	if (sa->sa_flags & SA_ONESHOT)
 		sa->sa_handler = NULL;
+	current->blocked |= sa->sa_mask & _BLOCKABLE;
 	if (!(sa->sa_flags & SA_NOMASK))
-		current->blocked |= (sa->sa_mask | _S(signr)) & _BLOCKABLE;
+		current->blocked |= _S(signr) & _BLOCKABLE;
 }
 
 /*
