Re: Debugging COW (copy on write) memory after fork: Is it possible to dump only the private anonymous memory of a process?

From: Bruno Prémont
Date: Sat Apr 06 2013 - 14:22:07 EST


On Fri, 05 April 2013 Vassilis Virvilis <v.virvilis@xxxxxxxxxxxx> wrote:
> Hello, sorry if this is off topic. Just point me to the right direction.
> Please cc me also in the reply.
>
> Question
> --------
>
> Is it possible to dump only the private anonymous memory of a process?

I don't know if that's possible, but given your background you could
probably work around it by mmap()ing the memory you need and, once it
is initialized, marking all of that memory read-only (if you mmap very
large chunks you can even benefit from huge pages).

Any of the forked processes that then tried to write to the data (and
thus unshare it) would get a signal.
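
A minimal sketch of what I have in mind (untested; init_data() is just
a placeholder for your own initialization, and the size is made up):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define DATA_SIZE (2400UL << 20)    /* ~2.3GB, adjust to your data */

/* hypothetical placeholder for reading/building your read-only tables */
static void init_data(char *buf, size_t len)
{
    memset(buf, 0, len);
}

int main(void)
{
    /* one large private anonymous mapping holding all the shared data */
    char *data = mmap(NULL, DATA_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    init_data(data, DATA_SIZE);

    /* once initialized, drop write permission: a child that later
     * writes here gets SIGSEGV instead of silently unsharing pages */
    if (mprotect(data, DATA_SIZE, PROT_READ) != 0) {
        perror("mprotect");
        return 1;
    }

    /* fork() the workers here; they all keep sharing these pages */
    return 0;
}

That way an accidental write shows up immediately as a crash with a
backtrace instead of as slowly growing private memory.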


If you allocate and initialize all of your memory in small malloc()'ed
chunks, it's possibly glibc's memory housekeeping that unshares all
those pages over time.
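
If converting everything to one big mmap() is too invasive, another
option along the same lines is to carve the read-only objects out of a
single private anonymous mapping with a trivial bump allocator, so that
malloc's per-chunk bookkeeping never sits on the same pages as the
shared data (rough sketch, untested; arena_init()/arena_alloc() are
made-up helper names):

#include <stddef.h>
#include <sys/mman.h>

static char *arena_base, *arena_cur, *arena_end;

/* reserve one private anonymous region up front */
int arena_init(size_t size)
{
    arena_base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (arena_base == MAP_FAILED)
        return -1;
    arena_cur = arena_base;
    arena_end = arena_base + size;
    return 0;
}

/* hand out 16-byte aligned pieces of the region, never freed */
void *arena_alloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;
    if (size > (size_t)(arena_end - arena_cur))
        return NULL;
    void *p = arena_cur;
    arena_cur += size;
    return p;
}

/* when initialization is done:
 *   mprotect(arena_base, arena_end - arena_base, PROT_READ);
 */

Combined with the mprotect() above, the whole arena then stays shared
between the forked workers.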

Bruno

> Background
> ----------
>
> I have a process that reads and initializes a large portion of
> memory (around 2.3GB). This memory is effectively read-only from
> that point on. After the initialization I fork the process into
> several children in order to take advantage of the multicore
> architecture of modern CPUs. The problem is that the program ends up
> requiring number_of_processes * 2.3GB of memory, effectively entering
> swap thrashing and destroying the performance.
>
> Steps so far
> ------------
>
> The first thing I did was to monitor the memory. I found out about
> /proc/$pid/smaps and http://wingolog.org/pub/mem_usage.py.
>
> What happens is the following:
>
> The program starts, reads from disk, and has 2.3GB of private mappings.
> The program forks. Immediately the 2.3GB becomes a shared mapping
> between the parent and the children. Excellent so far.
> As time goes on and the children start performing their tasks, the
> shared memory slowly migrates into the private mappings of each
> process, effectively blowing up the memory requirements.
>
> I thought that if I could see (dump) the private mappings of each
> process, I could tell from the data why the shared mappings are being
> touched, so I tried to dump the core with gcore while playing with
> /proc/$pid/coredump_filter like this:
>
> echo 0x1 > /proc/$pid/coredump_filter
> gcore $pid
>
> Unfortunately it always dumps 2.3GB despite the setting in
> /proc/$pid/coredump_filter, which is supposed to select only private
> anonymous mappings.
>
> I have researched the question on Google.
>
> I even posted it on Stack Overflow.
>
> Any other ideas?
>
> Thanks in advance
>
> Vassilis Virvilis