Re: [PATCH RFC] 1/2 central workspace for zlib

From: James Morris (jmorris@intercode.com.au)
Date: Fri May 30 2003 - 10:33:28 EST


On Fri, 30 May 2003, Jörn Engel wrote:

> The following creates a central per-cpu workspace for zlib. The
> original idea was to save memory on embedded systems, but this should
> also improve performance on SMP.
>
> Currently, each user of zlib has to provide a workspace for zlib to
> draw memory from. Each workspace amounts to 400k for deflate or 50k
> for inflate. As of 2.4.20, four users exist:
> jffs2: inflate & deflate, initialized once, 450k
> cramfs: inflate, initialized once, 50k
> zisofs: inflate, initialized once, 50k
> ppp: inflate & deflate, per channel, 450k * n
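
For reference, the per-user pattern today looks roughly like this
(a minimal sketch along the lines of the cramfs code, as I recall the
2.4-era API; the function name is made up and error handling is
trimmed):

#include <linux/errno.h>
#include <linux/vmalloc.h>
#include <linux/zlib.h>

static z_stream stream;

/* Sketch: each zlib user allocates its own workspace and attaches
 * it to the stream before calling init. */
static int example_inflate_init(void)
{
        stream.workspace = vmalloc(zlib_inflate_workspacesize());
        if (!stream.workspace)
                return -ENOMEM;
        stream.next_in = NULL;
        stream.avail_in = 0;
        return zlib_inflateInit(&stream) == Z_OK ? 0 : -EINVAL;
}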

In 2.5 (and soon 2.4), the crypto/deflate.c module makes use of zlib as
well. This is typically used in ipsec for ipcomp. Each ipcomp security
association has a zlib context, and access to the workspace is serialized
by the security association's bh lock.
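
Schematically, each SA does something like the following (an
illustrative sketch with made-up names, not the actual crypto or
ipsec code):

#include <linux/spinlock.h>
#include <linux/zlib.h>

struct ipcomp_ctx {             /* hypothetical per-SA state */
        z_stream stream;        /* owns its own workspace */
        spinlock_t lock;        /* the SA's bh lock */
};

static int ipcomp_compress_one(struct ipcomp_ctx *ctx)
{
        int err;

        /* Compression runs from the network bottom half, so the
         * workspace is serialized with a bh lock; we may never
         * sleep here, which rules out taking a semaphore. */
        spin_lock_bh(&ctx->lock);
        err = zlib_deflate(&ctx->stream, Z_FINISH);
        spin_unlock_bh(&ctx->lock);
        return err;
}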

(Something needs to be done for this particular case in general: a box
with 1000 tunnels could use 450MB of atomic kernel memory just for
zlib workspaces alone. A solution I've been looking at is to allow
workspaces to be sized dynamically from the compression parameters
instead of using worst-case figures. The memory savings may be up to
90% in this case.)
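
To put numbers on that: zlib's own documentation (zconf.h) gives the
deflate memory requirement as (1 << (windowBits+2)) + (1 << (memLevel+9))
bytes plus a fixed overhead, so a parameter-aware allocator could size
the workspace like this (hypothetical helper, not an existing API):

/* Variable part of a deflate workspace, per the formula documented
 * in zlib's zconf.h. */
static int deflate_mem_bytes(int windowBits, int memLevel)
{
        return (1 << (windowBits + 2)) + (1 << (memLevel + 9));
}

/*
 * Worst case, windowBits = 15, memLevel = 9:
 *      (1 << 17) + (1 << 18) = 384k
 * A modest tunnel, windowBits = 12, memLevel = 5:
 *      (1 << 14) + (1 << 14) = 32k, i.e. better than 90% saved
 */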

> This patch creates an extra workspace of 400k per cpu, which is used
> for both inflate and deflate. One of the central workspaces is used
> for users that don't provide their own. Semaphore protection is done
> in zlib_(in|de)flateInit() and zlib_(in|de)flateEnd(), so users have
> to call those functions more often, releasing the semaphore before
> returning to userspace.
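
(If I'm reading it right, the proposal amounts to something like the
sketch below; the names are mine, not the patch's, and semaphore
initialization is elided.)

#include <linux/smp.h>
#include <linux/zlib.h>
#include <asm/semaphore.h>

static struct semaphore ws_sem[NR_CPUS];  /* initialized to 1 */
static void *ws[NR_CPUS];                 /* ~400k each */

/* Hypothetical stand-in for the patched zlib_inflateInit() */
int zlib_inflateInit_central(z_streamp strm)
{
        if (strm->workspace == NULL) {
                int cpu = smp_processor_id();
                down(&ws_sem[cpu]);       /* may sleep! */
                strm->workspace = ws[cpu];
        }
        return zlib_inflateInit(strm);
}

/* ...with the matching up() in zlib_inflateEnd(), which is why
 * callers must reach End() before returning to userspace. */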

This won't work for the bh-lock-protected case outlined above: down()
may sleep, and sleeping is not allowed in softirq context. It will
also cause contention between otherwise unrelated users of zlib.


- James
--
James Morris
<jmorris@intercode.com.au>
