[PATCH] cifs: Fix encryption of cleared, but unset rq_iter data buffers

From: David Howells
Date: Mon Nov 06 2023 - 09:41:08 EST


Hi Damian,

Does the attached fix it for you?

David
---
cifs: Fix encryption of cleared, but unset rq_iter data buffers

Each smb_rqst struct contains two things: an array of kvecs (rq_iov) that
contains the protocol data for an RPC op and an iterator (rq_iter) that
contains the data payload of an RPC op. When an smb_rqst is allocated,
rq_iter is always cleared, but we don't set it up unless we're going to
use it.
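
For context, the fields involved look roughly like this (a trimmed
sketch only; see fs/smb/client/cifsglob.h for the real definition):

	struct smb_rqst {
		struct kvec	*rq_iov;  /* array of kvecs (protocol data) */
		unsigned int	rq_nvec;  /* number of kvecs in the array */
		struct iov_iter	rq_iter;  /* iterator over the data payload */
		/* ... */
	};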

The function that determines the size of the ciphertext buffer needed to
encrypt a request, cifs_get_num_sgs(), assumes that rq_iter is always
initialised and employs user_backed_iter() to check that the iterator
isn't user-backed. This used to work incidentally: ->user_backed was
false because the iterator had never been initialised. However, with
commit f1b4cb650b9a0eeba206d8f069fcdc532bfbcd74[1], which changes
user_backed_iter() to determine this based on the iterator type instead,
a warning is now emitted:

WARNING: CPU: 7 PID: 4584 at fs/smb/client/cifsglob.h:2165 smb2_get_aead_req+0x3fc/0x420 [cifs]
...
RIP: 0010:smb2_get_aead_req+0x3fc/0x420 [cifs]
...
crypt_message+0x33e/0x550 [cifs]
smb3_init_transform_rq+0x27d/0x3f0 [cifs]
smb_send_rqst+0xc7/0x160 [cifs]
compound_send_recv+0x3ca/0x9f0 [cifs]
cifs_send_recv+0x25/0x30 [cifs]
SMB2_tcon+0x38a/0x820 [cifs]
cifs_get_smb_ses+0x69c/0xee0 [cifs]
cifs_mount_get_session+0x76/0x1d0 [cifs]
dfs_mount_share+0x74/0x9d0 [cifs]
cifs_mount+0x6e/0x2e0 [cifs]
cifs_smb3_do_mount+0x143/0x300 [cifs]
smb3_get_tree+0x15e/0x290 [cifs]
vfs_get_tree+0x2d/0xe0
do_new_mount+0x124/0x340
__se_sys_mount+0x143/0x1a0

The problem is that rq_iter was never set, so its type is 0 (ie.
ITER_UBUF), which causes user_backed_iter() to return true. The code
doesn't otherwise malfunction because it also checks the size of the
iterator - which is 0.
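
To illustrate, after [1] user_backed_iter() keys off the iterator type,
roughly along the lines of the sketch below, so a zeroed-out iterator -
whose type field happens to read as ITER_UBUF (0) - looks user-backed
even though it carries no data:

	/* Approximate shape of the check after [1]; see include/linux/uio.h */
	static inline bool user_backed_iter(const struct iov_iter *i)
	{
		/* ITER_UBUF is 0, so a cleared-but-unset iterator matches */
		return iter_is_ubuf(i) || iter_is_iovec(i);
	}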

Fix cifs_get_num_sgs() to ignore rq_iter if its count is 0, thereby
bypassing the warnings.

It might be better to explicitly initialise rq_iter to a zero-length
ITER_BVEC, say, as it can always be reinitialised later.
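
That alternative isn't part of this patch, but it would amount to
something like the following at the point where the smb_rqst is set up
(assuming a NULL bvec array is acceptable for a zero-length iterator,
since it is never dereferenced when the count is 0):

	/* Mark rq_iter as an empty BVEC iterator so that user_backed_iter()
	 * cannot mistake it for a user-backed one; the direction here is a
	 * placeholder and would be overwritten when rq_iter is set up for
	 * real.
	 */
	iov_iter_bvec(&rqst->rq_iter, ITER_SOURCE, NULL, 0, 0);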

Fixes: d08089f649a0 ("cifs: Change the I/O paths to use an iterator rather than a page list")
Reported-by: Damian Tometzki <damian@xxxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/ZUfQo47uo0p2ZsYg@xxxxxxxxxxxxxxxx/
Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
cc: Steve French <smfrench@xxxxxxxxx>
cc: Shyam Prasad N <sprasad@xxxxxxxxxxxxx>
cc: Rohith Surabattula <rohiths.msft@xxxxxxxxx>
cc: Paulo Alcantara <pc@xxxxxxxxxxxxx>
cc: Namjae Jeon <linkinjeon@xxxxxxxxxx>
cc: Tom Talpey <tom@xxxxxxxxxx>
cc: Jeff Layton <jlayton@xxxxxxxxxx>
cc: Eric Biggers <ebiggers@xxxxxxxxxx>
cc: linux-cifs@xxxxxxxxxxxxxxx
cc: linux-fsdevel@xxxxxxxxxxxxxxx
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f1b4cb650b9a0eeba206d8f069fcdc532bfbcd74 [1]
---
fs/smb/client/cifsglob.h | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 02082621d8e0..c70760871606 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -2143,6 +2143,7 @@ static inline int cifs_get_num_sgs(const struct smb_rqst *rqst,
unsigned int len, skip;
unsigned int nents = 0;
unsigned long addr;
+ size_t data_size;
int i, j;

/*
@@ -2158,17 +2159,21 @@ static inline int cifs_get_num_sgs(const struct smb_rqst *rqst,
* rqst[1+].rq_iov[0+] data to be encrypted/decrypted
*/
for (i = 0; i < num_rqst; i++) {
+ data_size = iov_iter_count(&rqst[i].rq_iter);
+
/* We really don't want a mixture of pinned and unpinned pages
* in the sglist. It's hard to keep track of which is what.
* Instead, we convert to a BVEC-type iterator higher up.
*/
- if (WARN_ON_ONCE(user_backed_iter(&rqst[i].rq_iter)))
+ if (data_size &&
+ WARN_ON_ONCE(user_backed_iter(&rqst[i].rq_iter)))
return -EIO;

/* We also don't want to have any extra refs or pins to clean
* up in the sglist.
*/
- if (WARN_ON_ONCE(iov_iter_extract_will_pin(&rqst[i].rq_iter)))
+ if (data_size &&
+ WARN_ON_ONCE(iov_iter_extract_will_pin(&rqst[i].rq_iter)))
return -EIO;

for (j = 0; j < rqst[i].rq_nvec; j++) {
@@ -2184,7 +2189,8 @@ static inline int cifs_get_num_sgs(const struct smb_rqst *rqst,
}
skip = 0;
}
- nents += iov_iter_npages(&rqst[i].rq_iter, INT_MAX);
+ if (data_size)
+ nents += iov_iter_npages(&rqst[i].rq_iter, INT_MAX);
}
nents += DIV_ROUND_UP(offset_in_page(sig) + SMB2_SIGNATURE_SIZE, PAGE_SIZE);
return nents;