Re: [PATCH 1/3] crypto: authenc - add TLS type encryption

From: Cristian Stoica
Date: Tue Mar 08 2016 - 03:35:49 EST


Hi Tadeusz,

There is also a follow-up in the next paragraph:

"That pretty much sums up the new attack: the side-channel defenses that were hoped to be sufficient were found not to be (again). So the answer, this time I believe, is to make the processing rigorously constant-time."

The author then makes further changes and continues instrumenting the code, and still finds a 20 CPU cycle difference (out of 18000) between the medians for different paddings. Even that small difference was still detectable through a timing side channel, which is the point I'm making.

SSL/TLS is prone to this implementation issue and many user-space libraries got this wrong. It would be good to see some numbers backing up the claim that the timing differences are not an issue in this case.
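
To make concrete what "rigorously constant-time" means for the check in
the patch, here is a rough, untested sketch of a padding verification
that always scans the same number of trailing bytes for a given record
length and accumulates failures into a mask instead of branching per
byte. The helper names (ct_lt, ct_check_padding) are made up for
illustration and follow the same padding convention as the patch:

/* 0xff if a < b, 0x00 otherwise; assumes a, b < 2^31 */
static u8 ct_lt(unsigned int a, unsigned int b)
{
	return (u8)(0U - ((a - b) >> 31));
}

/*
 * Constant-time padding check: rec_len is public (it is visible on
 * the wire), so branching on it is fine; only the padding length
 * byte is secret and must never drive a branch or the number of
 * bytes touched.
 */
static int ct_check_padding(const u8 *rec, unsigned int rec_len)
{
	u8 padlen = rec[rec_len - 1];
	unsigned int scan = rec_len < 255 ? rec_len : 255;
	unsigned int i;
	u8 bad;

	/* invalid if padlen > rec_len, computed without a branch */
	bad = ct_lt(rec_len, padlen);

	for (i = 0; i < scan; i++) {
		u8 in_pad = ct_lt(i, padlen);	/* 0xff inside the padding */

		bad |= in_pad & (rec[rec_len - 1 - i] ^ padlen);
	}

	/*
	 * The caller must do the same amount of work for both return
	 * values (in particular, still run the digest), otherwise
	 * this buys nothing.
	 */
	return bad ? -EBADMSG : 0;
}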

Cristian S.


________________________________________
From: Tadeusz Struk <tadeusz.struk@xxxxxxxxx>
Sent: Monday, March 7, 2016 4:31 PM
To: Cristian Stoica; herbert@xxxxxxxxxxxxxxxxxxx
Cc: linux-crypto@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; davem@xxxxxxxxxxxxx
Subject: Re: [PATCH 1/3] crypto: authenc - add TLS type encryption

Hi Cristian,
On 03/07/2016 01:05 AM, Cristian Stoica wrote:
> Hi Tadeusz,
>
>
> +static int crypto_encauth_dgst_verify(struct aead_request *req,
> +				       unsigned int flags)
> +{
> +	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> +	unsigned int authsize = crypto_aead_authsize(tfm);
> +	struct aead_instance *inst = aead_alg_instance(tfm);
> +	struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
> +	struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
> +	struct crypto_ahash *auth = ctx->auth;
> +	struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
> +	struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
> +	u8 *hash = areq_ctx->tail;
> +	int i, err = 0, padd_err = 0;
> +	u8 paddlen, *ihash;
> +	u8 padd[255];
> +
> +	scatterwalk_map_and_copy(&paddlen, req->dst, req->assoclen +
> +				 req->cryptlen - 1, 1, 0);
> +
> +	if (paddlen > 255 || paddlen > req->cryptlen) {
> +		paddlen = 1;
> +		padd_err = -EBADMSG;
> +	}
> +
> +	scatterwalk_map_and_copy(padd, req->dst, req->assoclen +
> +				 req->cryptlen - paddlen, paddlen, 0);
> +
> +	for (i = 0; i < paddlen; i++) {
> +		if (padd[i] != paddlen)
> +			padd_err = -EBADMSG;
> +	}
>
>
> This part seems to have the same issue my TLS patch has.
> See for reference what Andy Lutomirski had to say about it:
>
> http://www.mail-archive.com/linux-crypto%40vger.kernel.org/msg11719.html

Thanks for reviewing and for pointing this out. I was aware of the timing
side-channel issues and have done everything I could to avoid them. The main
issue that allowed the Lucky Thirteen attack was that the digest wasn't
performed at all if the padding verification failed. This is not an issue here.
The other issue, caused by the length of the data to digest depending on the
padding length, is inevitable, and there is nothing we can do about it.
As the note in the paper says:
"However, our behavior matches OpenSSL, so we leak only as much as they do."

Thanks,
--
TS