[PATCH 1/7] crypto: Fix uninitialized skcipher_walk use in sm4_aesni_avx_glue

From: Yuran Pereira
Date: Thu Nov 02 2023 - 00:10:27 EST


In the following functions:
- `sm4_avx_ctr_crypt`
- `sm4_avx_cfb_decrypt`
- `sm4_cfb_encrypt`
- `sm4_cbc_encrypt`
- `sm4_avx_cbc_decrypt`
- `ecb_do_crypt`

the on-stack `struct skcipher_walk walk` is not fully initialized before use.

Although `skcipher_walk_virt()` and the functions it in turn calls
initialize some fields of this struct, there is a chance that
`skcipher_walk_virt()` returns without fully clearing or properly
initializing the `->flags` field. In that case, the flags
`SKCIPHER_WALK_DIFF`, `SKCIPHER_WALK_COPY`, and `SKCIPHER_WALK_SLOW`
could be holding junk values by the time `skcipher_walk_done()` is
called.

This could lead to buggy or undefined behaviour since these flags
are checked in `skcipher_walk_done()`:

```C
int skcipher_walk_done(struct skcipher_walk *walk, int err)
{
	...
	if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
				    SKCIPHER_WALK_SLOW |
				    SKCIPHER_WALK_COPY |
				    SKCIPHER_WALK_DIFF)))) {
	...
}
```

To prevent this, ensure that each instance of `struct skcipher_walk`
is zero-initialized before it is used.
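
For clarity, this is the pattern the diff below applies in each of the
functions listed above (abridged sketch, shown for `ecb_do_crypt()`;
unrelated lines are elided):

```C
static int ecb_do_crypt(struct skcipher_request *req, const u32 *rkey)
{
	struct skcipher_walk walk;
	unsigned int nbytes;
	int err;

	/* Zero the whole on-stack walk, including ->flags, before use. */
	memset(&walk, 0, sizeof(walk));
	err = skcipher_walk_virt(&walk, req, false);

	while ((nbytes = walk.nbytes) > 0) {
		...
	}
	...
}
```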

Addresses-Coverity-IDs: 1491520, 1491533, 1491610, 1491651, 1491715,
1491774 ("Uninitialized scalar variable")
Signed-off-by: Yuran Pereira <yuran.pereira@xxxxxxxxxxx>
---
arch/x86/crypto/sm4_aesni_avx_glue.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/x86/crypto/sm4_aesni_avx_glue.c b/arch/x86/crypto/sm4_aesni_avx_glue.c
index 7800f77d68ad..4117c6f787e2 100644
--- a/arch/x86/crypto/sm4_aesni_avx_glue.c
+++ b/arch/x86/crypto/sm4_aesni_avx_glue.c
@@ -11,6 +11,7 @@
 #include <linux/module.h>
 #include <linux/crypto.h>
 #include <linux/kernel.h>
+#include <linux/string.h>
 #include <asm/simd.h>
 #include <crypto/internal/simd.h>
 #include <crypto/internal/skcipher.h>
@@ -44,6 +45,7 @@ static int ecb_do_crypt(struct skcipher_request *req, const u32 *rkey)
 	unsigned int nbytes;
 	int err;
 
+	memset(&walk, 0, sizeof(walk));
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes) > 0) {
@@ -98,6 +100,7 @@ int sm4_cbc_encrypt(struct skcipher_request *req)
 	unsigned int nbytes;
 	int err;
 
+	memset(&walk, 0, sizeof(walk));
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes) > 0) {
@@ -132,6 +135,7 @@ int sm4_avx_cbc_decrypt(struct skcipher_request *req,
 	unsigned int nbytes;
 	int err;
 
+	memset(&walk, 0, sizeof(walk));
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes) > 0) {
@@ -196,6 +200,7 @@ int sm4_cfb_encrypt(struct skcipher_request *req)
 	unsigned int nbytes;
 	int err;
 
+	memset(&walk, 0, sizeof(walk));
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes) > 0) {
@@ -238,6 +243,7 @@ int sm4_avx_cfb_decrypt(struct skcipher_request *req,
 	unsigned int nbytes;
 	int err;
 
+	memset(&walk, 0, sizeof(walk));
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes) > 0) {
@@ -307,6 +313,7 @@ int sm4_avx_ctr_crypt(struct skcipher_request *req,
 	unsigned int nbytes;
 	int err;
 
+	memset(&walk, 0, sizeof(walk));
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes) > 0) {
--
2.25.1