[PATCH v3] ring-buffer: Prevent inconsistent operation on cpu_buffer->resize_disabled

From: Tze-nan Wu
Date: Mon Apr 10 2023 - 03:35:48 EST


Writes to buffer_size_kb can permanently fail, because cpu_online_mask may
change between the two for_each_online_buffer_cpu loops. The number of
increments and decrements of cpu_buffer->resize_disabled can then become
unbalanced, leaving resize_disabled non-zero on some CPUs after
ring_buffer_reset_online_cpus returns.
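
For illustration, a simplified sketch of the flow before this patch
(CPU numbers are only examples and the loop bodies are trimmed):

	for_each_online_buffer_cpu(buffer, cpu)		/* sees CPUs 0-3 */
		atomic_inc(&buffer->buffers[cpu]->resize_disabled);

	/* Make sure all commits have finished */
	synchronize_rcu();				/* CPU 3 goes offline here */

	for_each_online_buffer_cpu(buffer, cpu)		/* now sees CPUs 0-2 only */
		atomic_dec(&buffer->buffers[cpu]->resize_disabled);

	/*
	 * CPU 3 is left with a non-zero resize_disabled, so later resizes
	 * of its buffer fail.
	 */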

This issue can be reproduced by running "echo 0 > trace" while hotplugging
CPUs. Once it is triggered, buffer_size_kb is no longer functional.

Prevent the two loops in this function from iterating over different sets of
online CPUs by copying cpu_online_mask at the entry of the function.

Changes from v1 to v3:
Declare the cpumask variable statically rather than dynamically.

Changes from v2 to v3:
Since the synchronize_rcu() could cause cpu_hotplug_lock to be held for too
long, prevent the issue by copying cpu_online_mask at the entry of the
function, as v1 does, instead of using cpus_read_lock().

Link: https://lore.kernel.org/lkml/20230408052226.25268-1-Tze-nan.Wu@xxxxxxxxxxxx/
Link: https://lore.kernel.org/oe-kbuild-all/202304082051.Dp50upfS-lkp@xxxxxxxxx/
Link: https://lore.kernel.org/oe-kbuild-all/202304081615.eiaqpbV8-lkp@xxxxxxxxx/

Cc: stable@xxxxxxxxxxxxxxx
Cc: npiggin@xxxxxxxxx
Fixes: b23d7a5f4a07 ("ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU")
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Reviewed-by: Cheng-Jui Wang <cheng-jui.wang@xxxxxxxxxxxx>
Signed-off-by: Tze-nan Wu <Tze-nan.Wu@xxxxxxxxxxxx>
---
kernel/trace/ring_buffer.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 76a2d91eecad..dc758930dacb 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -288,9 +288,6 @@ EXPORT_SYMBOL_GPL(ring_buffer_event_data);
#define for_each_buffer_cpu(buffer, cpu) \
for_each_cpu(cpu, buffer->cpumask)

-#define for_each_online_buffer_cpu(buffer, cpu) \
- for_each_cpu_and(cpu, buffer->cpumask, cpu_online_mask)
-
#define TS_SHIFT 27
#define TS_MASK ((1ULL << TS_SHIFT) - 1)
#define TS_DELTA_TEST (~TS_MASK)
@@ -5353,12 +5350,19 @@ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
{
struct ring_buffer_per_cpu *cpu_buffer;
+ cpumask_t reset_online_cpumask;
int cpu;

+ /*
+ * Record cpu_online_mask here to make sure we iterate through the same
+ * online CPUs in the following two loops.
+ */
+ cpumask_copy(&reset_online_cpumask, cpu_online_mask);
+
/* prevent another thread from changing buffer sizes */
mutex_lock(&buffer->mutex);

- for_each_online_buffer_cpu(buffer, cpu) {
+ for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {
cpu_buffer = buffer->buffers[cpu];

atomic_inc(&cpu_buffer->resize_disabled);
@@ -5368,7 +5372,7 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
/* Make sure all commits have finished */
synchronize_rcu();

- for_each_online_buffer_cpu(buffer, cpu) {
+ for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {
cpu_buffer = buffer->buffers[cpu];

reset_disabled_cpu_buffer(cpu_buffer);
--
2.18.0