Re: [PATCH 1/5] mm, shmem: Add shmem resident memory accounting

From: Hugh Dickins
Date: Fri Aug 01 2014 - 01:03:13 EST


On Tue, 22 Jul 2014, Jerome Marchand wrote:

> Currently looking at /proc/<pid>/status or statm, there is no way to
> distinguish shmem pages from pages mapped to a regular file (shmem
> pages are mapped to /dev/zero), even though their implication in
> actual memory use is quite different.
> This patch adds MM_SHMEMPAGES counter to mm_rss_stat. It keeps track of
> resident shmem memory size. Its value is exposed in the new VmShm line
> of /proc/<pid>/status.

I like adding this info to /proc/<pid>/status - thank you -
but I think you can make the patch much better in a couple of ways.

>
> Signed-off-by: Jerome Marchand <jmarchan@xxxxxxxxxx>
> ---
> Documentation/filesystems/proc.txt | 2 ++
> arch/s390/mm/pgtable.c | 2 +-
> fs/proc/task_mmu.c | 9 ++++++---
> include/linux/mm.h | 7 +++++++
> include/linux/mm_types.h | 7 ++++---
> kernel/events/uprobes.c | 2 +-
> mm/filemap_xip.c | 2 +-
> mm/memory.c | 37 +++++++++++++++++++++++++++++++------
> mm/rmap.c | 8 ++++----
> 9 files changed, 57 insertions(+), 19 deletions(-)
>
> diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
> index ddc531a..1c49957 100644
> --- a/Documentation/filesystems/proc.txt
> +++ b/Documentation/filesystems/proc.txt
> @@ -171,6 +171,7 @@ read the file /proc/PID/status:
> VmLib: 1412 kB
> VmPTE: 20 kb
> VmSwap: 0 kB
> + VmShm: 0 kB
> Threads: 1
> SigQ: 0/28578
> SigPnd: 0000000000000000
> @@ -228,6 +229,7 @@ Table 1-2: Contents of the status files (as of 2.6.30-rc7)
> VmLib size of shared library code
> VmPTE size of page table entries
> VmSwap size of swap usage (the number of referred swapents)
> + VmShm size of resident shmem memory

Needs to say that it includes mappings of tmpfs files, and needs to say
that it's a subset of VmRSS. Better placed immediately after VmRSS...

...but now that I look through what's in /proc/<pid>/status, it appears
that we have to defer to /proc/<pid>/statm to see MM_FILEPAGES (third
field) and MM_ANONPAGES (subtract third field from second field).

That's not a very friendly interface. If you're going to help by
exposing MM_SHMEMPAGES separately, please help even more by exposing
VmFile and VmAnon here in /proc/<pid>/status too.

VmRSS, VmAnon, VmShm, VmFile? I'm not sure what's the best order:
here I'm thinking that anon comes before file in /proc/meminfo, and
shm should be halfway between anon and file. You may have another idea.

And of course the VmFile count here should exclude VmShm: I think it
will work out least confusingly if you account MM_FILEPAGES separately
from MM_SHMEMPAGES, but add them together where needed, e.g. for statm.
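
Just to make that concrete, here's the kind of thing I have in mind for
fs/proc/task_mmu.c:task_mem() - a rough sketch only, assuming shmem pages
are kept out of MM_FILEPAGES and the field names and ordering end up as
suggested above:

/*
 * Sketch only: report anon, shmem and file resident pages as separate
 * Vm* lines, with VmRSS as their sum.  The real task_mem() prints many
 * more fields; those are elided here.
 */
void task_mem(struct seq_file *m, struct mm_struct *mm)
{
	unsigned long anon  = get_mm_counter(mm, MM_ANONPAGES);
	unsigned long shmem = get_mm_counter(mm, MM_SHMEMPAGES);
	unsigned long file  = get_mm_counter(mm, MM_FILEPAGES);

	seq_printf(m,
		"VmRSS:\t%8lu kB\n"
		"VmAnon:\t%8lu kB\n"
		"VmShm:\t%8lu kB\n"
		"VmFile:\t%8lu kB\n",
		(anon + shmem + file) << (PAGE_SHIFT - 10),
		anon  << (PAGE_SHIFT - 10),
		shmem << (PAGE_SHIFT - 10),
		file  << (PAGE_SHIFT - 10));
	/* ... the remaining Vm* lines stay as they are ... */
}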

> Threads number of threads
> SigQ number of signals queued/max. number for queue
> SigPnd bitmap of pending signals for the thread
> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
> index 37b8241..9fe31b0 100644
> --- a/arch/s390/mm/pgtable.c
> +++ b/arch/s390/mm/pgtable.c
> @@ -612,7 +612,7 @@ static void gmap_zap_swap_entry(swp_entry_t entry, struct mm_struct *mm)
> if (PageAnon(page))
> dec_mm_counter(mm, MM_ANONPAGES);
> else
> - dec_mm_counter(mm, MM_FILEPAGES);
> + dec_mm_file_counters(mm, page);
> }

That is a recurring pattern: please try putting

static inline int mm_counter(struct page *page)
{
	if (PageAnon(page))
		return MM_ANONPAGES;
	if (PageSwapBacked(page))
		return MM_SHMEMPAGES;
	return MM_FILEPAGES;
}

in include/linux/mm.h.

Then dec_mm_counter(mm, mm_counter(page)) here, and wherever you can,
use mm_counter(page) to simplify the code throughout.
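
For example (not a hunk from your patch, just the recurring pattern),
every open-coded

	if (PageAnon(page))
		dec_mm_counter(mm, MM_ANONPAGES);
	else
		dec_mm_counter(mm, MM_FILEPAGES);

then collapses to

	dec_mm_counter(mm, mm_counter(page));

with the shmem case handled for free.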

I say "try" because I think factoring out mm_counter() will simplify
the most code, given the profusion of different accessors, particularly
in mm/memory.c. But I'm not sure how much bloat having it as an inline
function will add, versus how much overhead it would add if not inline.

Hugh