Re: [PATCH 1/5] vmscan: remove all_unreclaimable check from direct reclaim path completely

From: Minchan Kim
Date: Thu Mar 24 2011 - 03:25:51 EST


On Thu, Mar 24, 2011 at 4:03 PM, KOSAKI Motohiro
<kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
>> On Thu, Mar 24, 2011 at 3:16 PM, KOSAKI Motohiro
>> <kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
>> > Hi
>> >
>> >> Thanks for your effort, Kosaki.
>> >> But I still doubt this patch is good.
>> >>
>> >> This patch causes early OOM killing during hibernation, as it skips
>> >> the all_unreclaimable check.
>> >> Normally, hibernation needs a lot of memory, so page reclaim pressure
>> >> can be high on a small-memory system. So I don't like giving up early.
>> >
>> > Wait. When does big pressure occur? Hibernation reclaim pressure
>> > (sc->nr_to_reclaim) depends on physical memory size, therefore
>> > the pressure doesn't seem to depend on the system size.
>>
>> It depends on physical memory size and /sys/power/image_size.
>> If you tune the image size to be bigger, the reclaim pressure becomes bigger.
>
> Ok, _If_ I want.
> However, I haven't seen desktop people customize it.
>
>
>> >> Do you think my patch has a problem? Personally, I think it's very
>> >> simple and clear. :)
>> >
>> > To be honest, I dislike the following part. It's madness on madness.
>> >
>> >         static bool zone_reclaimable(struct zone *zone)
>> >         {
>> >                 if (zone->all_unreclaimable)
>> >                         return false;
>> >
>> >                 return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
>> >         }
>> >
>> >
>> > The function requires a reviewer to know
>> >
>> >  o pages_scanned and all_unreclaimable are racy
>>
>> Yes. That part should be written down in a comment.
>>
>> >  o at hibernation, zone->all_unreclaimable can be a false negative,
>> >    but can't be a false positive.
>>
>> The comment on all_unreclaimable() already explains it well, I think.
>
> Where is it?
>
>
>> > And the function comment on all_unreclaimable() says
>> >
>> >         /*
>> >          * As hibernation is going on, kswapd is freezed so that it can't mark
>> >          * the zone into all_unreclaimable. It can't handle OOM during hibernation.
>> >          * So let's check zone's unreclaimable in direct reclaim as well as kswapd.
>> >          */
>> >
>> > But now it is no longer a copy of the kswapd algorithm.
>>
>> The comment doesn't say it should be a copy of kswapd.
>
> I meant the comment says
>
>          * So let's check zone's unreclaimable in direct reclaim as well as kswapd.
>
> but now it doesn't behave the same as kswapd.
>
> I think it's critically important. If people can't understand why the
> algorithm was chosen, someone will break the code again sooner or later.
>
>
>> > If you still strongly prefer this idea after hearing the above explanation,
>> > please consider adding many more comments. I can't say
>> > your current patch is readable/reviewable enough.
>>
>> My patch isn't a formal patch for merging but just a concept to show.
>> If you agree with the idea, of course, I will add more concrete comments
>> when I send the formal patch.
>>
>> Before that, I would like to get your agreement. :)
>> If you solve my concern (early give-up in hibernation) in your patch, I
>> won't insist on my patch, either.
>
> Ok. Let's try.
>
> Please consider why priority=0 is not enough. zone_reclaimable_pages(zone) * 6
> is a conservative value to account for multi-thread races. While one task
> is reclaiming, others can allocate/free memory concurrently. Therefore,
> even after priority=0, we have a chance of getting reclaimable pages on the LRU.
> But in the hibernation case, almost all tasks were frozen before hibernation
> calls shrink_all_memory(). Therefore, there is no race, and priority=0 reclaim
> can cover all LRU pages.
>
> Is this enough explanation for you?

For example, in a 4G desktop system,
fully scanning a 32M zone and failing to reclaim a page
is under 1% coverage.

Is it enough to give up?
I am not sure.

--
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/