Date: Thu, 15 Mar 2018 12:18:58 +0100
From: Salvatore Mesoraca <s.mesoraca16@...il.com>
To: linux-kernel@...r.kernel.org
Cc: kernel-hardening@...ts.openwall.com,
	linux-crypto@...r.kernel.org,
	"David S. Miller" <davem@...emloft.net>,
	Herbert Xu <herbert@...dor.apana.org.au>,
	Kees Cook <keescook@...omium.org>,
	Salvatore Mesoraca <s.mesoraca16@...il.com>,
	Eric Biggers <ebiggers3@...il.com>
Subject: [PATCH v2] crypto: ctr - avoid VLA use

All ciphers implemented in Linux have a block size of at most 16
bytes, and the most demanding hardware requires 16-byte alignment
for the block buffer.
We avoid 2 VLAs[1] by always allocating 16 bytes with 16-byte
alignment, unless the architecture supports efficient unaligned
accesses, in which case no extra alignment space is reserved.
We also check the selected cipher at instance creation time; if it
doesn't comply with these limits, instance creation fails.

[1] https://lkml.org/lkml/2018/3/7/621
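
To illustrate the approach outside the kernel, here is a minimal
user-space sketch of the same pattern (not part of the patch;
PTR_ALIGN() is re-implemented locally, MAX_ALIGNMASK uses the
worst-case value, and the bsize/alignmask values are made-up
examples):

#include <stdint.h>
#include <stdio.h>

#define MAX_BLOCKSIZE 16
#define MAX_ALIGNMASK 15

/* User-space stand-in for the kernel's PTR_ALIGN() macro. */
#define PTR_ALIGN(p, a) \
	((uint8_t *)(((uintptr_t)(p) + ((uintptr_t)(a) - 1)) & \
		     ~((uintptr_t)(a) - 1)))

int main(void)
{
	unsigned int bsize = 16;	/* example: AES block size */
	unsigned long alignmask = 3;	/* example cipher alignmask */

	/* Creation-time check: reject anything the fixed buffer
	 * below could not hold. */
	if (bsize > MAX_BLOCKSIZE || alignmask > MAX_ALIGNMASK)
		return 1;

	/* Instead of the VLA "u8 tmp[bsize + alignmask];", use a
	 * fixed worst-case buffer and align within it: for any
	 * alignmask <= MAX_ALIGNMASK there are always at least
	 * MAX_BLOCKSIZE usable bytes after PTR_ALIGN(). */
	uint8_t tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
	uint8_t *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);

	printf("tmp=%p keystream=%p\n", (void *)tmp, (void *)keystream);
	return 0;
}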

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@...il.com>
---
 crypto/ctr.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/crypto/ctr.c b/crypto/ctr.c
index 854d924..2c9f80f 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -21,6 +21,14 @@
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
 
+#define MAX_BLOCKSIZE 16
+
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+#define MAX_ALIGNMASK 0
+#else
+#define MAX_ALIGNMASK 15
+#endif
+
 struct crypto_ctr_ctx {
 	struct crypto_cipher *child;
 };
@@ -58,7 +66,7 @@ static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
 	unsigned int bsize = crypto_cipher_blocksize(tfm);
 	unsigned long alignmask = crypto_cipher_alignmask(tfm);
 	u8 *ctrblk = walk->iv;
-	u8 tmp[bsize + alignmask];
+	u8 tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
 	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
 	u8 *src = walk->src.virt.addr;
 	u8 *dst = walk->dst.virt.addr;
@@ -106,7 +114,7 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
 	unsigned int nbytes = walk->nbytes;
 	u8 *ctrblk = walk->iv;
 	u8 *src = walk->src.virt.addr;
-	u8 tmp[bsize + alignmask];
+	u8 tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
 	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
 
 	do {
@@ -206,6 +214,14 @@ static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
 	if (alg->cra_blocksize < 4)
 		goto out_put_alg;
 
+	/* The fixed-size keystream buffers assume this limit. */
+	if (alg->cra_blocksize > MAX_BLOCKSIZE)
+		goto out_put_alg;
+
+	/* The alignment padding in those buffers assumes this limit. */
+	if (alg->cra_alignmask > MAX_ALIGNMASK)
+		goto out_put_alg;
+
 	/* If this is false we'd fail the alignment of crypto_inc. */
 	if (alg->cra_blocksize % 4)
 		goto out_put_alg;
-- 
1.9.1
