[PATCH v3 2/2] make buffer_locked provide an acquire semantics

From: Mikulas Patocka
Date: Sun Jul 31 2022 - 16:43:24 EST


Let's have a look at this piece of code in __bread_slow:

	get_bh(bh);
	bh->b_end_io = end_buffer_read_sync;
	submit_bh(REQ_OP_READ, 0, bh);
	wait_on_buffer(bh);
	if (buffer_uptodate(bh))
		return bh;
Neither wait_on_buffer nor buffer_uptodate contains a memory barrier.
Consequently, if someone calls sb_bread and then reads the buffer data,
the read of the buffer data may be reordered before wait_on_buffer(bh)
on architectures with weak memory ordering, and it may return invalid
data.
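
To illustrate (a minimal C11 sketch of the same hazard; the names
completer, waiter, payload and done are made up for the example, and
C11 atomics stand in for the kernel's primitives):

	#include <stdatomic.h>

	static int payload;
	static atomic_int done;

	/* I/O-completion side: fill the data, then publish it */
	static void completer(void)
	{
		payload = 42;
		atomic_store_explicit(&done, 1, memory_order_release);
	}

	/* __bread_slow side: wait for completion, then read the data */
	static int waiter(void)
	{
		/* With a relaxed load the CPU may read payload before it
		 * observes done == 1 and return stale data; an acquire
		 * load (memory_order_acquire) forbids that reordering. */
		while (!atomic_load_explicit(&done, memory_order_relaxed))
			;
		return payload;
	}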

Fix this bug by changing the function buffer_locked to have acquire
semantics, so that code that follows a buffer_locked call cannot be
reordered before the load of the lock bit. wait_on_buffer tests the
lock bit via buffer_locked, so the __bread_slow sequence above picks up
the ordering through it.
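
The acquire pairs with the release already present on the unlock side:
unlock_buffer() clears the lock bit with clear_bit_unlock(), which has
release semantics. For reference, quoted from memory rather than from
this patch, unlock_buffer() in fs/buffer.c looks like this:

	void unlock_buffer(struct buffer_head *bh)
	{
		clear_bit_unlock(BH_Lock, &bh->b_state);
		smp_mb__after_atomic();
		wake_up_bit(&bh->b_state, BH_Lock);
	}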

Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx

Index: linux-2.6/include/linux/buffer_head.h
===================================================================
--- linux-2.6.orig/include/linux/buffer_head.h
+++ linux-2.6/include/linux/buffer_head.h
@@ -120,7 +120,6 @@ static __always_inline int test_clear_bu
BUFFER_FNS(Uptodate, uptodate)
BUFFER_FNS(Dirty, dirty)
TAS_BUFFER_FNS(Dirty, dirty)
-BUFFER_FNS(Lock, locked)
BUFFER_FNS(Req, req)
TAS_BUFFER_FNS(Req, req)
BUFFER_FNS(Mapped, mapped)
@@ -135,6 +134,17 @@ BUFFER_FNS(Meta, meta)
BUFFER_FNS(Prio, prio)
BUFFER_FNS(Defer_Completion, defer_completion)

+static __always_inline void set_buffer_locked(struct buffer_head *bh)
+{
+	set_bit(BH_Lock, &bh->b_state);
+}
+
+static __always_inline int buffer_locked(const struct buffer_head *bh)
+{
+	unsigned long state = smp_load_acquire(&bh->b_state);
+	return test_bit(BH_Lock, &state);
+}
+
#define bh_offset(bh) ((unsigned long)(bh)->b_data & ~PAGE_MASK)

/* If we *know* page->private refers to buffer_heads */