Re: [PATCH 4/4] memcg: prevent from reclaiming if there are per-cpu cached charges

From: Michal Hocko
Date: Fri Jul 22 2011 - 05:55:14 EST


On Fri 22-07-11 08:56:52, KAMEZAWA Hiroyuki wrote:
> On Thu, 21 Jul 2011 14:30:12 +0200
> Michal Hocko <mhocko@xxxxxxx> wrote:
>
> > On Thu 21-07-11 19:54:11, KAMEZAWA Hiroyuki wrote:
> > > On Thu, 21 Jul 2011 10:28:10 +0200
[...]
> > > Assume 2 cpu SMP, (a special case), and 2 applications running under
> > > a memcg.
> > >
> > > - one is running in SCHED_FIFO.
> > > - another is running into mem_cgroup_do_charge() and call drain_all_stock_sync().
> > >
> > > Then, the application stops until SCHED_FIFO application release the cpu.
> >
> > It would have to back off during reclaim anyway (because we check
> > cond_resched during reclaim), right?
> >
>
> cond_resched() only happens on the cpu which runs the reclaim stuff. It will not help here.

I do not understand what you are saying here. What I meant to say is
that the above example is not a big issue because the SCHED_FIFO task
would throw us off the CPU during reclaim anyway, so waiting for other
CPUs during draining will not add too much overhead, although it
definitely adds some.
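
To illustrate what the synchronous variant actually waits for, here is a
minimal sketch (not the real mm/memcontrol.c code; the stock structure is
trimmed down and the work item is assumed to have been set up with
INIT_WORK(&stock->work, drain_local_stock) at init time):

#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/cpu.h>

struct memcg_stock_pcp {
	struct mem_cgroup *cached;	/* memcg the pages are pre-charged to */
	unsigned int nr_pages;		/* cached, pre-charged pages */
	struct work_struct work;	/* runs drain_local_stock on that cpu */
};
static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);

/* Simplified: schedule a drain on every online cpu, then wait for all. */
static void drain_all_stock_sync(void)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu)
		schedule_work_on(cpu, &per_cpu(memcg_stock, cpu).work);
	/*
	 * This is the problematic part: if any cpu is monopolized by a
	 * SCHED_FIFO task, its work item cannot run and we block here
	 * until that task yields the cpu.
	 */
	for_each_online_cpu(cpu)
		flush_work(&per_cpu(memcg_stock, cpu).work);
	put_online_cpus();
}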

> > > In general, I don't think waiting for schedule_work() against multiple cpus
> > > is quicker than a short memory reclaim.
> >
> > You are right, but if you consider small groups then the reclaim can
> > make the situation much worse.
> >
>
> If the system has a lot of memory and the container has many cgroups, memory
> is not small, because ... to use the cpu properly, you need memory. Otherwise
> it's a misconfiguration.

I don't think so. You might have small groups that are well suited for
specific workloads.

> > > Adding flush_work() here means that a context switch is required before
> > > calling direct reclaim.
> >
> > Is that really a problem? We would context switch during reclaim if
> > there is something else that wants CPU anyway.
> > Maybe we could drain only if we get a reasonable number of pages back?
> > This would require two passes over per-cpu caches to find the number -
> > not nice. Or we could drain only those caches that have at least some
> > threshold of pages.
> >
> > > That's bad. (At least, please check __GFP_NOWAIT.)
> >
> > Definitely a good idea. Fixed.
> >
> > > Please find another way; I think calling a synchronous drain here is overkill.
> > > In most cases there are no important file caches and reclaim is quick.
> >
> > This is, however, really hard to know in advance. If there are used-once,
> > unmapped file pages then they are certainly much easier to reclaim.
> > Maybe I could check the statistics and decide whether to drain according
> > to the pages we have in the group. Let me think about that.
> >
> > > (And async draining runs.)
> > >
> > > How about automatically adjusting CHARGE_BATCH and making it smaller when
> > > the system is near the limit?
> >
> > Hmm, we are already bypassing batching if we are close to the limit,
> > aren't we? If we get to reclaim we fall back to an nr_pages allocation
> > and so we do not refill the stock.
> > Maybe we could check how much we have reclaimed and update the batch
> > size accordingly.
> >
>
> Please wait for the "background reclaim" stuff. I am not stopping that work,
> and it will make this cpu-caching stuff better because we can drain before
> hitting the limit.

As I said, I haven't seen this hurting us, so it can definitely wait.
I will drop the patch for now and keep just the cleanup stuff. I will
repost it when I have some numbers in hand, or if I am able to work
around the current problem of waiting too long.

>
> If you cannot wait....
>
> One idea is to have a threshold at which to call the async "drain". For example,
>
> threshold = limit_of_memory - nr_online_cpu() * (BATCH_SIZE + 1)
>
> if (usage > threshold)
>         drain_all_stock_async();
>
> Then the situation will be much better.

I will think about it. I am not sure whether this heuristic is too rough.
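
To make the idea concrete, something along these lines (a rough sketch only;
it assumes the res_counter API and the existing CHARGE_BATCH constant, and
maybe_drain_early is a made-up name):

/*
 * Keep headroom for one full per-cpu stock (plus one page) per online
 * cpu and kick an asynchronous drain once usage crosses that line,
 * well before we hit the hard limit.
 */
static void maybe_drain_early(struct mem_cgroup *memcg)
{
	u64 limit = res_counter_read_u64(&memcg->res, RES_LIMIT);
	u64 usage = res_counter_read_u64(&memcg->res, RES_USAGE);
	u64 headroom = (u64)num_online_cpus() *
		       (CHARGE_BATCH + 1) * PAGE_SIZE;

	/* Guard against an effectively unlimited group underflowing. */
	if (limit > headroom && usage > limit - headroom)
		drain_all_stock_async();
}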

> Thanks,
> -Kame

--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic