Re: [PATCH v5] random: defer fast pool mixing to worker

From: Sebastian Andrzej Siewior
Date: Fri Feb 11 2022 - 10:00:56 EST


On 2022-02-11 14:08:07 [+0100], Jason A. Donenfeld wrote:

> +static void mix_interrupt_randomness(struct work_struct *work)
> +{
> +	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
> +	unsigned long pool[ARRAY_SIZE(fast_pool->pool)];
> +	unsigned int count_snapshot;
> +	size_t i;
> +
> +	/* Check to see if we're running on the wrong CPU due to hotplug. */
> +	migrate_disable();
> +	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
> +		migrate_enable();
> +		/*
> +		 * If we are unlucky enough to have been moved to another CPU,
> +		 * then we set our count to zero atomically so that when the
> +		 * CPU comes back online, it can enqueue work again. The
> +		 * _release here pairs with the atomic_inc_return_acquire in
> +		 * add_interrupt_randomness().
> +		 */
> +		atomic_set_release(&fast_pool->count, 0);
> +		return;
> +	}
> +
> +	/*
> +	 * Copy the pool to the stack so that the mixer always has a
> +	 * consistent view. It's extremely unlikely but possible that
> +	 * this 2 or 4 word read is interrupted by an irq, but in case
> +	 * it is, we double check that count stays the same.
> +	 */
> +	do {
> +		count_snapshot = (unsigned int)atomic_read(&fast_pool->count);
> +		for (i = 0; i < ARRAY_SIZE(pool); ++i)
> +			pool[i] = READ_ONCE(fast_pool->pool[i]);
> +	} while (count_snapshot != (unsigned int)atomic_read(&fast_pool->count));

As I wrote in the last mail, can't we just have a cmpxchg loop
here?
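
To make it concrete, something along these lines is what I have in
mind. This is only an untested sketch reusing the names from the quoted
hunk; the idea is that a single cmpxchg both detects a concurrent
update from the irq path and resets the counter:

	int count;

	do {
		count = atomic_read(&fast_pool->count);
		for (i = 0; i < ARRAY_SIZE(pool); ++i)
			pool[i] = READ_ONCE(fast_pool->pool[i]);
		/* Retry the copy if an irq bumped count in the meantime. */
	} while (atomic_cmpxchg(&fast_pool->count, count, 0) != count);

Since atomic_cmpxchg() is fully ordered on success, it should also
cover the _release pairing with add_interrupt_randomness() that the
comment above mentions.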

Sebastian