Re: block: DMA alignment of IO buffer allocated from slab

From: Bart Van Assche
Date: Thu Sep 20 2018 - 10:07:36 EST


On 9/19/18 2:15 AM, Ming Lei wrote:
Hi Guys,

Some storage controllers have a DMA alignment limit, which is often set via
blk_queue_dma_alignment(), e.g. 512-byte alignment for IO buffers.
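
For illustration, a driver typically sets this limit while setting up its
request queue, roughly like this (the queue variable is just an example):

	/* the argument is a mask, i.e. alignment - 1, so 511 requests
	 * 512-byte DMA alignment for buffers submitted to this queue */
	blk_queue_dma_alignment(q, 511);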

The block layer currently checks that this limit is respected only for the
buffers of pass-through requests; see blk_rq_map_user_iov() and
bio_map_user_iov().
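
Roughly, that check amounts to the following sketch (simplified, not the
exact kernel code; q, ubuf and len come from the pass-through request being
mapped):

	unsigned long mask = queue_dma_alignment(q);
	bool copy = false;

	/* if the user buffer or its length violates the alignment mask,
	 * fall back to copying through a bounce buffer instead of
	 * mapping the user pages directly */
	if (((unsigned long)ubuf | len) & mask)
		copy = true;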

The userspace buffer for direct IO is checked in the dio path, see
do_blockdev_direct_IO(). IO buffers coming from the page cache should be
fine with respect to this limit too.

However, some file systems, such as XFS, may allocate a single-sector IO
buffer via slab. Usually kmalloc-512 should return a 512-byte aligned
buffer. But once KASAN or other slab debug options are enabled, this no
longer seems to hold: kmalloc-512 may return a buffer that is not 512-byte
aligned. Data corruption can then be observed because the IO buffer from the
fs layer no longer respects the DMA alignment limit.

Several related questions follow:

1) Does a kmalloc-N slab guarantee to return an N-byte aligned buffer? If
so, is that a stable rule?

2) If it is a rule for a kmalloc-N slab to return an N-byte aligned buffer,
does KASAN violate that rule?

3) If slab can't guarantee a 512-byte aligned buffer, how can this data
corruption issue be fixed?

I don't think that (1) is correct, especially if N is not a power of two. In the skd driver I addressed this problem by using kmem_cache_create() and kmem_cache_alloc() instead of kmalloc(), since kmem_cache_create() allows the alignment to be specified explicitly.
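
For example (a sketch only, not the actual skd code; the cache name is made
up):

	struct kmem_cache *cache;
	void *buf;

	/* the third argument requests 512-byte alignment for every
	 * object allocated from this cache */
	cache = kmem_cache_create("sector_buf", 512, 512, 0, NULL);
	if (!cache)
		return -ENOMEM;

	buf = kmem_cache_alloc(cache, GFP_KERNEL);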

Bart.