Re: [PATCH -mm -v3] mm, swap: Sort swap entries before free

From: Huang, Ying
Date: Fri Apr 21 2017 - 08:29:45 EST


"Huang, Ying" <ying.huang@xxxxxxxxx> writes:

> Minchan Kim <minchan@xxxxxxxxxx> writes:
>
>> On Wed, Apr 19, 2017 at 04:14:43PM +0800, Huang, Ying wrote:
>>> Minchan Kim <minchan@xxxxxxxxxx> writes:
>>>
>>> > Hi Huang,
>>> >
>>> > On Fri, Apr 07, 2017 at 02:49:01PM +0800, Huang, Ying wrote:
>>> >> From: Huang Ying <ying.huang@xxxxxxxxx>
>>> >>
>>> >> void swapcache_free_entries(swp_entry_t *entries, int n)
>>> >> {
>>> >> 	struct swap_info_struct *p, *prev;
>>> >> @@ -1075,6 +1083,10 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
>>> >>
>>> >> 	prev = NULL;
>>> >> 	p = NULL;
>>> >> +
>>> >> +	/* Sort swap entries by swap device, so each lock is only taken once. */
>>> >> +	if (nr_swapfiles > 1)
>>> >> +		sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
>>> >
>>> > Let's think on other cases.
>>> >
>>> > Suppose there are two swaps configured with different priorities, so
>>> > one swap's usage would be zero unless the other swap is used up. In
>>> > that case, this sorting is pointless.
>>> >
>>> > As well, nr_swapfiles is never decreased, so if we enable multiple
>>> > swaps and then disable them until only one swap remains, this
>>> > sorting is pointless, too.
>>> >
>>> > How about a lazy sorting approach? IOW, if we find prev != p,
>>> > then we can sort it.
>>>
>>> Yes. That should be better. I just don't know whether the added
>>> complexity is necessary, given that the array is short and sorting is fast.
>>
>> Huh?
>>
>> 1. swapon /dev/XXX1
>> 2. swapon /dev/XXX2
>> 3. swapoff /dev/XXX2
>> 4. use only one swap
>> 5. then, always pointless sort.
>
> Yes. In this situation we will do unnecessary sorting. What I don't
> know is whether the unnecessary sorting will hurt performance in real
> life. I can do some measurement.

I tested the patch with 1 swap device and 1 process to eat memory
(with the "if (nr_swapfiles > 1)" check removed for the test).  I think
this is the worst case because there is no lock contention.  The memory
freeing time increased from 1.94s to 2.12s (a ~9.2% increase), so there
is some overhead in some cases.  I changed the algorithm to something
like the following,

void swapcache_free_entries(swp_entry_t *entries, int n)
{
	struct swap_info_struct *p, *prev;
	int i;
+	swp_entry_t entry;
+	unsigned int prev_swp_type;

	if (n <= 0)
		return;

+	prev_swp_type = swp_type(entries[0]);
+	for (i = n - 1; i > 0; i--) {
+		if (swp_type(entries[i]) != prev_swp_type)
+			break;
+	}
+
+	/* Sort swap entries by swap device, so each lock is only taken once. */
+	if (i)
+		sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
	prev = NULL;
	p = NULL;
	for (i = 0; i < n; ++i) {
-		p = swap_info_get_cont(entries[i], prev);
+		entry = entries[i];
+		p = swap_info_get_cont(entry, prev);
		if (p)
-			swap_entry_free(p, entries[i]);
+			swap_entry_free(p, entry);
		prev = p;
	}
	if (p)
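
(For reference, swp_entry_cmp() is not shown in the snippet above.  A
minimal comparator that just groups entries by swap device, i.e. by
swp_type(), could look like the sketch below; this is only an
illustration and may differ from the exact code in the patch.)

static int swp_entry_cmp(const void *ent1, const void *ent2)
{
	const swp_entry_t *e1 = ent1, *e2 = ent2;

	/* Group entries from the same swap device (type) together. */
	return (int)swp_type(*e1) - (int)swp_type(*e2);
}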

With this patch, the memory freeing time increased from 1.94s to 1.97s.
I think this is good enough.  What do you think?

I will send out the formal patch soon.

Best Regards,
Huang, Ying