Re: [PATCH v3 03/11] documentation: Block Devices Snapshots Module

From: Sergei Shtepa
Date: Thu Apr 20 2023 - 15:18:26 EST




On 4/20/23 16:44, Donald Buczek wrote:
>
> On 4/19/23 21:42, Donald Buczek wrote:
>> Dear Sergei,
>>
>> On 4/19/23 15:05, Sergei Shtepa wrote:
>>> [...]
>>>
>>> Patches in attach and https://github.com/SergeiShtepa/linux/tree/blksnap-master
>>
>> Thanks. I can confirm that this fixes the reported problem and I can no longer trigger the UAF. 😄
>>
>> Tested-by: Donald Buczek <buczek@xxxxxxxxxxxxx>
>>
>> Maybe you can add me to the cc list for v4 as I'm not subscribed to the lists.
>
>
> Sorry, found another one. Reproducer:
>
> =====
> #! /bin/bash
> set -xe
> modprobe blksnap
> test -e /scratch/local/test.dat || fallocate -l 1G /scratch/local/test.dat
> s=$(blksnap snapshot_create -d /dev/vdb)
> blksnap snapshot_appendstorage -i $s -f /scratch/local/test.dat
> blksnap snapshot_take -i $s
> s2=$(blksnap snapshot_create -d /dev/vdb)
> blksnap snapshot_destroy -i $s2
> blksnap snapshot_destroy -i $s
> =====
>
>
> [20382.402921] blksnap-snapshot: Snapshot ff1c54f1-3e8c-4c99-bb26-35e82dc1c9fa was created
> [20382.535933] blksnap-image: Create snapshot image device for original device [253:16]
> [20382.542405] blksnap-snapshot: Snapshot ff1c54f1-3e8c-4c99-bb26-35e82dc1c9fa was taken successfully
> [20382.572564] blksnap-snapshot: Snapshot 4b2d571d-9a24-419d-96c2-8d64a07c4966 was created
> [20382.600521] blksnap-snapshot: Destroy snapshot 4b2d571d-9a24-419d-96c2-8d64a07c4966
> [20382.602373] blksnap-snapshot: Release snapshot 4b2d571d-9a24-419d-96c2-8d64a07c4966
> [20382.722137] blksnap-snapshot: Destroy snapshot ff1c54f1-3e8c-4c99-bb26-35e82dc1c9fa
> [20382.724033] blksnap-snapshot: Release snapshot ff1c54f1-3e8c-4c99-bb26-35e82dc1c9fa
> [20382.725850] ==================================================================
> [20382.727641] BUG: KASAN: wild-memory-access in snapshot_free+0x73/0x170 [blksnap]
> [20382.729326] Write of size 8 at addr dead000000000108 by task blksnap/8297
> ...

Great! Thanks.

There is no protection against re-adding a block device to the snapshot.
I'll take care of it.

And a small update: I have made a correction to the bio allocation algorithm
for saving and loading chunks. Please see the attachment and the commit.
Link: https://github.com/SergeiShtepa/linux/commit/2628dd193fd3d563d26d5ccc82d35b2e11bbda38
However, cases with large chunks or very large disks have not been covered
by tests yet.

I also had concerns that the snapshot writing algorithm was not working
correctly. But the concerns were unfounded. The new test passes.
Link: https://github.com/veeam/blksnap/blob/stable-v2.0/tests/6000-snapimage_write.sh

diff --git a/drivers/block/blksnap/chunk.c b/drivers/block/blksnap/chunk.c
index 73113c714ac1..06fdd6c90e0a 100644
--- a/drivers/block/blksnap/chunk.c
+++ b/drivers/block/blksnap/chunk.c
@@ -283,25 +283,26 @@ void chunk_store(struct chunk *chunk)
bio_set_flag(bio, BIO_FILTERED);

while (count) {
+ struct bio *next;
sector_t portion = min_t(sector_t, count, PAGE_SECTORS);
unsigned int bytes = portion << SECTOR_SHIFT;

if (bio_add_page(bio, chunk->diff_buffer->pages[page_idx],
- bytes, 0) != bytes) {
- struct bio *next;
+ bytes, 0) == bytes) {
+ page_idx++;
+ count -= portion;
+ continue;
+ }

- next = bio_alloc_bioset(bdev,
- calc_max_vecs(count),
+ /* Create next bio */
+ next = bio_alloc_bioset(bdev, calc_max_vecs(count),
REQ_OP_WRITE | REQ_SYNC | REQ_FUA,
GFP_NOIO, &chunk_io_bioset);
- next->bi_iter.bi_sector = bio_end_sector(bio);
- bio_set_flag(next, BIO_FILTERED);
- bio_chain(bio, next);
- submit_bio_noacct(bio);
- bio = next;
- }
- page_idx++;
- count -= portion;
+ next->bi_iter.bi_sector = bio_end_sector(bio);
+ bio_set_flag(next, BIO_FILTERED);
+ bio_chain(bio, next);
+ submit_bio_noacct(bio);
+ bio = next;
}

cbio = container_of(bio, struct chunk_bio, bio);
@@ -342,24 +343,26 @@ static struct bio *__chunk_load(struct chunk *chunk)
bio_set_flag(bio, BIO_FILTERED);

while (count) {
+ struct bio *next;
sector_t portion = min_t(sector_t, count, PAGE_SECTORS);
unsigned int bytes = portion << SECTOR_SHIFT;

if (bio_add_page(bio, chunk->diff_buffer->pages[page_idx],
- bytes, 0) != bytes) {
- struct bio *next;
-
- next = bio_alloc_bioset(bdev, calc_max_vecs(count),
- REQ_OP_READ, GFP_NOIO,
- &chunk_io_bioset);
- next->bi_iter.bi_sector = bio_end_sector(bio);
- bio_set_flag(next, BIO_FILTERED);
- bio_chain(bio, next);
- submit_bio_noacct(bio);
- bio = next;
+ bytes, 0) == bytes) {
+ page_idx++;
+ count -= portion;
+ continue;
}
- page_idx++;
- count -= portion;
+
+ /* Create next bio */
+ next = bio_alloc_bioset(bdev, calc_max_vecs(count),
+ REQ_OP_READ, GFP_NOIO,
+ &chunk_io_bioset);
+ next->bi_iter.bi_sector = bio_end_sector(bio);
+ bio_set_flag(next, BIO_FILTERED);
+ bio_chain(bio, next);
+ submit_bio_noacct(bio);
+ bio = next;
}
return bio;
}