Re: DAMON VA regions don't split on a large Android APP

From: sj
Date: Tue Apr 26 2022 - 20:21:37 EST


Hello Barry,


Thank you so much for sharing your great findings! :)

On Wed, 27 Apr 2022 11:19:23 +1200 Barry Song <21cnbao@xxxxxxxxx> wrote:

> Hi SeongJae & Andrew,
> (also Cc-ed main damon developers)
> On an Android phone, I tried to use the DAMON vaddr monitor and found
> that vaddr regions don't split well on large Android Apps though
> everything works well on native Apps.
>
> I have tried the below two cases on an Android phone with 12GB memory
> and snapdragon 888 CPU.
> 1. a native program with small memory working set as below,
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
>
> #define size (1024*1024*100)
>
> int main(void)
> {
> 	volatile int *p = malloc(size);
>
> 	memset((void *)p, 0x55, size);
>
> 	while (1) {
> 		int i;
>
> 		for (i = 0; i < size / 4; i++)
> 			(void)*(p + i);
> 		usleep(1000);
>
> 		for (i = 0; i < size / 16; i++)
> 			(void)*(p + i);
> 		usleep(1000);
> 	}
> }
> For this application, the Damon vaddr monitor works very well.
> I have modified monitor.py in the damo userspace tool a little bit to
> show the raw data coming from the kernel.
> Regions split decently for this kind of application; typical raw data
> is shown below:
>
> monitoring_start: 2.224 s
> monitoring_end: 2.329 s
> monitoring_duration: 104.336 ms
> target_id: 0
> nr_regions: 24
> 005fb37b2000-005fb734a000( 59.594 MiB): 0
> 005fb734a000-005fbaf95000( 60.293 MiB): 0
> 005fbaf95000-005fbec0b000( 60.461 MiB): 0
> 005fbec0b000-005fc2910000( 61.020 MiB): 0
> 005fc2910000-005fc6769000( 62.348 MiB): 0
> 005fc6769000-005fca33f000( 59.836 MiB): 0
> 005fca33f000-005fcdc8b000( 57.297 MiB): 0
> 005fcdc8b000-005fd115a000( 52.809 MiB): 0
> 005fd115a000-005fd45bd000( 52.387 MiB): 0
> 007661c59000-007661ee4000( 2.543 MiB): 2
> 007661ee4000-0076623e4000( 5.000 MiB): 3
> 0076623e4000-007662837000( 4.324 MiB): 2
> 007662837000-0076630f1000( 8.727 MiB): 3
> 0076630f1000-007663494000( 3.637 MiB): 2
> 007663494000-007663753000( 2.746 MiB): 1
> 007663753000-007664251000( 10.992 MiB): 3
> 007664251000-0076666fd000( 36.672 MiB): 2
> 0076666fd000-007666e73000( 7.461 MiB): 1
> 007666e73000-007667c89000( 14.086 MiB): 2
> 007667c89000-007667f97000( 3.055 MiB): 0
> 007667f97000-007668112000( 1.480 MiB): 1
> 007668112000-00766820f000(1012.000 KiB): 0
> 007ff27b7000-007ff27d6000( 124.000 KiB): 0
> 007ff27d6000-007ff27d8000( 8.000 KiB): 8
>
> 2. a large Android app like Asphalt 9
> For this case, regions basically can't split very well, but the monitor
> still works on small VMAs:
>
> monitoring_start: 2.220 s
> monitoring_end: 2.318 s
> monitoring_duration: 98.576 ms
> target_id: 0
> nr_regions: 15
> 000012c00000-0001c301e000( 6.754 GiB): 0
> 0001c301e000-000371b6c000( 6.730 GiB): 0
> 000371b6c000-000400000000( 2.223 GiB): 0
> 005c6759d000-005c675a2000( 20.000 KiB): 0
> 005c675a2000-005c675a3000( 4.000 KiB): 3
> 005c675a3000-005c675a7000( 16.000 KiB): 0
> 0072f1e14000-0074928d4000( 6.510 GiB): 0
> 0074928d4000-00763c71f000( 6.655 GiB): 0
> 00763c71f000-0077e863e000( 6.687 GiB): 0
> 0077e863e000-00798e214000( 6.590 GiB): 0
> 00798e214000-007b0e48a000( 6.002 GiB): 0
> 007b0e48a000-007c62f00000( 5.323 GiB): 0
> 007c62f00000-007defb19000( 6.199 GiB): 0
> 007defb19000-007f794ef000( 6.150 GiB): 0
> 007f794ef000-007fe8f53000( 1.745 GiB): 0
>
> As you can see, we have some regions which are very, very big, and they
> are losing the chance to be split.  But
> DAMON can still monitor memory access for those small VMA areas very well, like:
> 005c675a2000-005c675a3000( 4.000 KiB): 3

In short, DAMON sets its monitoring regions based not on VMAs but on the
access pattern, so this does not look like a problem to me.

DAMON lets users set lower and upper limits on the monitoring overhead and
provides the best accuracy it can under that condition.  In detail, users can
set the minimum and maximum numbers of monitoring regions, because DAMON's
monitoring overhead is proportional to the number of regions.  Within those
bounds, DAMON provides best-effort accuracy by splitting and merging regions
so that the pages in each region have similar access frequencies.

The default minimum number of regions is 10, and the target's address space
is roughly 35 GiB in total (see the pmap output below), so each initial
region covers several GiB.  I believe that's why there are so many ~6 GiB
regions: DAMON splits such a region further only when it observes an access
frequency difference inside it.
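
To make that mechanism a bit more concrete, below is a rough user-space
sketch of the idea (only an illustration with made-up names and numbers,
not the actual mm/damon code; the real implementation, for instance, picks
the split points differently): adjacent regions whose measured access
frequencies are similar get merged, and the surviving regions are split
again so that a frequency difference inside them can show up during the
next aggregation interval, while the total number of regions stays within
the given bounds.

#include <stdio.h>

struct region {
	unsigned long start, end;	/* [start, end) address range */
	unsigned int nr_accesses;	/* measured access frequency */
};

/* Merge adjacent regions whose access frequencies differ by at most 'thres'. */
static size_t merge_similar(struct region *r, size_t nr, unsigned int thres)
{
	size_t i, out = 0;

	if (!nr)
		return 0;
	for (i = 1; i < nr; i++) {
		unsigned int a = r[out].nr_accesses, b = r[i].nr_accesses;

		if ((a > b ? a - b : b - a) <= thres) {
			r[out].end = r[i].end;		/* absorb r[i] */
			r[out].nr_accesses = (a + b) / 2;
		} else {
			r[++out] = r[i];
		}
	}
	return out + 1;
}

/*
 * Split each region in two (as long as the total does not exceed 'max_nr'),
 * so that the two halves can expose different access frequencies later.
 */
static size_t split_in_two(struct region *r, size_t nr, size_t max_nr)
{
	struct region tmp[64];	/* toy fixed capacity, enough for the demo */
	size_t i, out = 0;

	for (i = 0; i < nr; i++) {
		unsigned long mid = r[i].start + (r[i].end - r[i].start) / 2;

		if (out + 2 <= max_nr && mid > r[i].start && mid < r[i].end) {
			tmp[out++] = (struct region){ r[i].start, mid,
						      r[i].nr_accesses };
			tmp[out++] = (struct region){ mid, r[i].end,
						      r[i].nr_accesses };
		} else {
			tmp[out++] = r[i];
		}
	}
	for (i = 0; i < out; i++)
		r[i] = tmp[i];
	return out;
}

int main(void)
{
	struct region r[64] = {
		{ 0x1000, 0x5000, 3 },	/* similar frequency to the next */
		{ 0x5000, 0x9000, 4 },
		{ 0x9000, 0xd000, 0 },	/* clearly colder */
	};
	size_t nr = 3, i;

	nr = merge_similar(r, nr, 1);	/* the first two merge, the third stays */
	nr = split_in_two(r, nr, 10);	/* every survivor is split in two */
	for (i = 0; i < nr; i++)
		printf("%#lx-%#lx: %u\n", r[i].start, r[i].end,
		       r[i].nr_accesses);
	return 0;
}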

If we saw no small regions with non-zero access frequency at all, that would
be a reason to worry.  Here, however, DAMON is successfully finding the small
4 KiB region with a higher access frequency.  That region is 4 KiB not
because it maps a 4 KiB VMA, but because that address range shows a high
access frequency.

>
> A typical characteristic of a large Android app is that it has
> thousands of VMAs and a very large virtual address space:
> ~/damo # pmap 2550 | wc -l
> 8522
>
> ~/damo # pmap 2550
> ...
> 0000007992bbe000 4K r---- [ anon ]
> 0000007992bbf000 24K rw--- [ anon ]
> 0000007fe8753000 4K ----- [ anon ]
> 0000007fe8754000 8188K rw--- [ stack ]
> total 36742112K
>
> Because the whole vma list is too long, I have put the list here for
> you to download:
> wget http://www.linuxep.com/patches/android-app-vmas
>
> I can reproduce this problem on other apps like YouTube as well.
> I suppose we need to improve the region-splitting algorithm for this
> kind of application.
> Any thoughts?

As mentioned above, this does not look like a problem, because DAMON's
monitoring regions are constructed based not on VMAs but on access patterns.

Nevertheless, I believe there is plenty of room for improving DAMON's
monitoring accuracy.  I want to implement a fixed-granularity monitoring
feature first, and then develop accuracy optimizations using the
fixed-granularity results as a comparison target.
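
Just to illustrate what I mean by fixed-granularity monitoring (a toy
user-space example with made-up addresses, not an existing DAMON feature or
interface): instead of letting the regions adaptively grow and shrink, the
monitored range would be covered by equally sized chunks and the access
frequency tracked per chunk, which gives an easy-to-interpret baseline to
compare the adaptive scheme against.

#include <stdio.h>

#define CHUNK_SZ	(4UL << 10)	/* e.g. 4 KiB granularity */
#define NR_CHUNKS	8

int main(void)
{
	/*
	 * Made-up monitored range and sampled-as-accessed addresses,
	 * loosely based on the hot 4 KiB region reported above.
	 */
	unsigned long start = 0x5c675a0000UL;
	unsigned long sampled[] = { 0x5c675a2008UL, 0x5c675a2ff0UL,
				    0x5c675a6010UL };
	unsigned int nr_accesses[NR_CHUNKS] = { 0 };
	unsigned int i;

	for (i = 0; i < sizeof(sampled) / sizeof(sampled[0]); i++)
		nr_accesses[(sampled[i] - start) / CHUNK_SZ]++;

	for (i = 0; i < NR_CHUNKS; i++)
		printf("%#lx-%#lx: %u\n", start + i * CHUNK_SZ,
		       start + (i + 1) * CHUNK_SZ, nr_accesses[i]);
	return 0;
}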

If I'm missing something or the explanation was not enough, please feel free to
let me know.


Thanks,
SJ

>
> Thanks
> Barry
>