Re: [RFC PATCH 1/2] lib/percpu-list: Per-cpu list with associated per-cpu locks

From: Peter Zijlstra
Date: Wed Feb 17 2016 - 11:02:29 EST


On Wed, Feb 17, 2016 at 10:56:10AM -0500, Waiman Long wrote:
> >>+/**
> >>+ * for_all_percpu_list_entries - iterate over all the per-cpu list with locking
> >>+ * @pos: the type * to use as a loop cursor for the current entry
> >>+ * @next: an internal type * variable pointing to the next entry
> >>+ * @pchead: an internal struct list * of percpu list head
> >>+ * @pclock: an internal variable for the current per-cpu spinlock
> >>+ * @head: the head of the per-cpu list
> >>+ * @member: the name of the per-cpu list within the struct
> >>+ */
> >>+#define for_all_percpu_list_entries(pos, next, pchead, pclock, head, member)\
> >>+ { \
> >>+ int cpu; \
> >>+ for_each_possible_cpu (cpu) { \
> >>+ typeof(*pos) *next; \
> >>+ spinlock_t *pclock = per_cpu_ptr(&(head)->lock, cpu); \
> >>+ struct list_head *pchead = &per_cpu_ptr(head, cpu)->list;\
> >>+ spin_lock(pclock); \
> >>+ list_for_each_entry_safe(pos, next, pchead, member.list)
> >>+
> >>+#define end_all_percpu_list_entries(pclock) spin_unlock(pclock); } }

> I will try to shorten the name and better document the macro. This is
> probably the trickiest part of the whole patch.

Note that your use of _safe() here actually makes the usage in
iterate_bdevs() unsafe!

Because that function relies on __iget() pinning the current position,
which means that once you re-acquire the list lock, pos->next is valid.

However, _safe() takes pos->next before dropping the lock, and that
object is not actually pinned and can go away.