RE: Can Kernel Compression Code make Sense?

Daniel Egger (Daniel.Egger@t-online.de)
Mon, 23 Feb 1998 00:16:55 +0000


On Sun, 22 Feb 1998, "Michael Herf" wrote:

>Any fast RLE-like compression algorithm will run at memory-read bandwidth on
>modern machines. (About 80 MB/sec.)

Hm, imagine a P5-200 system doing about 80 MIPS... at 80 MB/sec that
leaves roughly one instruction per byte, so I wonder how it could
possibly compress at that rate... I doubt even decompression is
possible at that speed... and that's only part of the work; the
compressed pages still have to be written to disk...
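
To put numbers on that, here is a rough sketch of a byte-wise RLE
encoder (an illustration only, not anything from the kernel; rle_encode
is a made-up name). Even this trivial scheme spends several
instructions per input byte:

  #include <stddef.h>

  /* Sketch of a byte-wise RLE encoder: emits (count, byte) pairs.
   * Every input byte costs at least a load, a compare, and a branch,
   * plus stores at the end of each run. */
  size_t rle_encode(const unsigned char *src, size_t n,
                    unsigned char *dst)
  {
      size_t i = 0, out = 0;

      while (i < n) {
          unsigned char c = src[i];
          size_t run = 1;

          /* count the run, capped at 255 so it fits in one byte */
          while (i + run < n && src[i + run] == c && run < 255)
              run++;

          dst[out++] = (unsigned char)run;  /* run length */
          dst[out++] = c;                   /* the repeated byte */
          i += run;
      }
      return out;  /* compressed size; worst case 2*n */
  }

Even if the inner loop cost only five instructions per byte, an 80 MIPS
machine would top out well under 20 MB/sec with code like this.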

> Even LZO (a PD fast compression technique) claims a significant fraction of
> memcpy() -- like 1/2, with much better compression than RLE.

Huh? Half of memcpy's throughput? Have you got an algorithm that
actually does that?
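
For what it's worth, here is roughly what calling LZO from userland
looks like with miniLZO (a sketch based on the LZO docs; the
work-memory and worst-case output sizing are taken from there, the rest
is illustration):

  #include <stdio.h>
  #include <string.h>
  #include "minilzo.h"

  /* work memory required by the LZO1X-1 compressor, suitably aligned */
  static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS +
                             sizeof(lzo_align_t) - 1) /
                            sizeof(lzo_align_t)];

  int main(void)
  {
      unsigned char in[4096];
      /* worst case: LZO may expand incompressible data slightly */
      unsigned char out[4096 + 4096 / 16 + 64 + 3];
      lzo_uint out_len;

      memset(in, 'x', sizeof in);  /* a trivially compressible "page" */

      if (lzo_init() != LZO_E_OK)  /* required once before any LZO call */
          return 1;

      if (lzo1x_1_compress(in, sizeof in, out, &out_len,
                           wrkmem) != LZO_E_OK)
          return 1;

      printf("compressed %u -> %lu bytes\n",
             (unsigned)sizeof in, (unsigned long)out_len);
      return 0;
  }

I'd want to see actual throughput numbers for that on a P5-200 before
believing the 1/2-of-memcpy figure.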

>On another level, I wonder how important CPU usage is in this case. i.e. If
>the system is getting no work done because it's waiting for pages to be
>swapped, saving CPU cycles may not accomplish much, but that's probably a
>faulty line of reasoning.

In a multitasking system there is usually more than one runnable
process... the others still need CPU cycles even while one program is
waiting for a page to come in from swap...

--

Servus, Daniel
