[PATCH 2/2] ipc/util.c: use cyclic id allocation for ipc objects

From: Manfred Spraul
Date: Sat Sep 29 2018 - 09:43:28 EST


This is loosely related to the patch that increases IPC_MNI, and it is
partially based on the mail from willy@xxxxxxxxxxxxx:

(User space) id reuse creates a risk of data corruption:

Process A: calls an ipc function
Process A: sleeps just at the beginning of the syscall
Process B: frees the ipc object (i.e. calls ...ctl(IPC_RMID))
Process B: creates a new ipc object (i.e. calls ...get())
	<if the new and the old object have the same id>
Process A: is woken up and accesses the new object

To reduce the probability that the new and the old object get the
same id, the current implementation combines the index of the object
in the idr tree with a sequence number.
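
For reference, a sketch of how the user-visible id is built from the
two parts (the helpers below also appear in the diff):

	id  = (seq << IPCMNI_SEQ_SHIFT) + idx;	/* see ipc_idr_alloc() */
	idx = ipcid_to_idx(id);			/* recover the idr index */
	seq = ipcid_to_seqx(id);		/* recover the sequence number */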

To further reduce the probability of a reuse, perform the allocation
cyclically and increase the sequence number only on a wrap-around.
Unfortunately, idr_alloc_cyclic() cannot be used: new->seq must be set
before the entry is inserted into the idr, but idr_alloc_cyclic() only
reveals whether a wrap-around happened after the entry is already
visible.
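
In outline, the allocation path of this patch looks as follows, where
'cursor' stands for ids->ipcs_idr.idr_next (a simplified sketch; the
complete version, including the idx_max bounds, is in the diff below):

	new->seq = ids->seq;
	idx = idr_alloc(&ids->ipcs_idr, new, cursor, idx_max, GFP_NOWAIT);
	if (idx == -ENOSPC && cursor > 0) {
		/* wrap-around: bump the sequence number, retry from 0 */
		if (++ids->seq > IPCID_SEQ_MAX)
			ids->seq = 0;
		new->seq = ids->seq;
		idx = idr_alloc(&ids->ipcs_idr, new, 0, idx_max, GFP_NOWAIT);
	}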

The patch cycles over an index range of at least RADIX_TREE_MAP_SIZE,
i.e. as long as there are only a few objects, the indices stay small
and the accesses continue to be direct (a single radix tree node).
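
Expressed with the min_t()/max_t() helpers, the bound used by the
patch is equivalent to (a sketch; the patch open-codes the
comparisons):

	idx_max = max_t(int, ids->in_use * 2, RADIX_TREE_MAP_SIZE);
	idx_max = min_t(int, idx_max, ipc_mni);

With only a few objects, idx_max stays at RADIX_TREE_MAP_SIZE
(typically 64), so all indices fit into a single radix tree node.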

Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
---
ipc/util.c | 48 ++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 44 insertions(+), 4 deletions(-)

diff --git a/ipc/util.c b/ipc/util.c
index 07ae117ccdc0..fa7b8fa7a14c 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -216,10 +216,49 @@ static inline int ipc_idr_alloc(struct ipc_ids *ids, struct kern_ipc_perm *new)
 	 */
 
 	if (next_id < 0) { /* !CHECKPOINT_RESTORE or next_id is unset */
-		new->seq = ids->seq++;
-		if (ids->seq > IPCID_SEQ_MAX)
-			ids->seq = 0;
-		idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
+		int idx_max;
+
+		/*
+		 * If a user space visible id is reused, then this creates a
+		 * risk for data corruption. To reduce the probability that
+		 * an id is reused, three approaches are used:
+		 * 1) The idr index is allocated cyclically.
+		 * 2) The user space id is built by concatenating the
+		 *    internal idr index with a sequence number.
+		 * 3) The sequence number is only increased when the index
+		 *    wraps around.
+		 * Note that this code cannot use idr_alloc_cyclic():
+		 * new->seq must be set before the entry is inserted into
+		 * the idr.
+		 */
+		idx_max = ids->in_use * 2;
+		if (idx_max < RADIX_TREE_MAP_SIZE)
+			idx_max = RADIX_TREE_MAP_SIZE;
+		if (idx_max > ipc_mni)
+			idx_max = ipc_mni;
+
+		if (ids->ipcs_idr.idr_next <= idx_max) {
+			new->seq = ids->seq;
+			idx = idr_alloc(&ids->ipcs_idr, new,
+					ids->ipcs_idr.idr_next,
+					idx_max, GFP_NOWAIT);
+		}
+
+		if ((idx == -ENOSPC) && (ids->ipcs_idr.idr_next > 0)) {
+			/*
+			 * A wrap-around occurred:
+			 * increase ids->seq, update new->seq.
+			 */
+			ids->seq++;
+			if (ids->seq > IPCID_SEQ_MAX)
+				ids->seq = 0;
+			new->seq = ids->seq;
+
+			idx = idr_alloc(&ids->ipcs_idr, new, 0, idx_max,
+					GFP_NOWAIT);
+		}
+		if (idx >= 0)
+			ids->ipcs_idr.idr_next = idx + 1;
 	} else {
 		new->seq = ipcid_to_seqx(next_id);
 		idx = idr_alloc(&ids->ipcs_idr, new, ipcid_to_idx(next_id),
@@ -227,6 +266,7 @@ static inline int ipc_idr_alloc(struct ipc_ids *ids, struct kern_ipc_perm *new)
 	}
 	if (idx >= 0)
 		new->id = (new->seq << IPCMNI_SEQ_SHIFT) + idx;
+
 	return idx;
 }

--
2.17.2

