Re: [RFC 0/4] ZRAM: make it just store the high compression rate page

From: Sergey Senozhatsky
Date: Sun Sep 04 2016 - 23:59:07 EST


Hello,

On (09/05/16 11:18), Minchan Kim wrote:
[..]
> If I understand Sergey's point right, he means there is no gain
> to save memory between before and after.
>
> With your approach, you can prevent unnecessary pageout(i.e.,
> uncompressible page swap out) but it doesn't mean you save the
> memory compared to old so why does your patch decrease the number of
> lowmemory killing?

you are right Minchan, that was exactly my point. every compressed page
that does not end up in a huge_object zspage should result in some memory
saving (anywhere from bytes to kilobytes per page).
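to put a rough number on it (an illustrative sketch only, assuming 4K
PAGE_SIZE and ignoring zsmalloc size class rounding and per-object
metadata, so this is not zram code):

	/* bytes reclaimed by storing one compressed page in zsmalloc
	 * instead of keeping it resident; illustrative only */
	static inline unsigned long page_saving(unsigned long compressed_len)
	{
		return PAGE_SIZE - compressed_len;	/* e.g. 4096 - 2350 = 1746 */
	}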

> A thing I can imagine is without this feature, zram could be full of
> uncompressible pages so good-compressible page cannot be swapped out.

a good theory.

in general, selective compression of the first N pages that fall under the
given size limit is not the same as selective compression of the N
best-compressible pages. so I'm a bit uncertain about the guarantees that
the patch can provide.

let's assume the following case:
- the zram compression size limit is set to 2400 bytes (only pages that
  compress to less than that will be stored in zsmalloc)
- the first K pages to be swapped out compress to roughly 2350 bytes,
  i.e. under the limit
- the next L pages compress to roughly 2500 bytes, i.e. over the limit
- the last M pages are incompressible (PAGE_SIZE)
- zram disksize can fit N pages
- N > K + L

so instead of compressing and swapping out K + L pages, you would compress
only K pages, leaving (L + M) * PAGE_SIZE of memory unreclaimed. thus I'd
say that in some cases we might have bigger chances of LMK/OOM/etc.
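to make that concrete, here is a quick userspace model of the example
above. all numbers are made up (K = L = M = 1000, the compressed sizes
just follow the case I described), and zsmalloc size classes and metadata
are again ignored, so treat it as a toy calculation only:

#include <stdio.h>

#define PAGE_SIZE	4096UL

int main(void)
{
	/* hypothetical page counts and compressed sizes from the example */
	unsigned long K = 1000, L = 1000, M = 1000;
	unsigned long k_len = 2350, l_len = 2500;

	/* old behaviour: K, L and M pages all go to zsmalloc;
	 * incompressible M pages are stored as PAGE_SIZE-sized objects,
	 * so they reclaim nothing */
	unsigned long old_saving = K * (PAGE_SIZE - k_len)
				 + L * (PAGE_SIZE - l_len)
				 + M * 0;

	/* with a 2400 byte limit: only the K pages are accepted,
	 * L and M stay resident at full PAGE_SIZE */
	unsigned long new_saving = K * (PAGE_SIZE - k_len);

	printf("reclaimed without the limit: %lu bytes\n", old_saving);
	printf("reclaimed with the limit:    %lu bytes\n", new_saving);
	printf("reclaim given up:            %lu bytes\n",
	       old_saving - new_saving);
	return 0;
}

in this toy case the limit gives up L * ~1.6KB (~1.5MB here) of reclaim,
while skipping the M incompressible pages saves no memory either way -- it
only avoids the pointless I/O and CPU work.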

-ss