Re: [patch 0/7] cpuset writeback throttling

From: Christoph Lameter
Date: Tue Nov 04 2008 - 21:46:54 EST


On Tue, 4 Nov 2008, Andrew Morton wrote:

> In a memcg implementation what we would implement is "throttle
> page-dirtying tasks in this memcg when the memcg's dirty memory reaches
> 40% of its total".

Right, that is similar to what this patch does for cpusets. A memcg implementation would need to figure out whether we are currently part of a memcg and then determine what percentage of that memcg's memory is dirty.
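Roughly, that check would look something like the sketch below. This is illustrative only: the struct and field names are made up, not the real mem_cgroup layout, and the counters would have to be maintained elsewhere in the charge/dirty paths.

/* Hypothetical per-memcg counters -- illustrative names, not the
 * actual mem_cgroup fields. */
struct memcg_stats {
        unsigned long nr_pages;         /* pages charged to this memcg */
        unsigned long nr_dirty;         /* of those, pages currently dirty */
};

/*
 * Throttle page-dirtying tasks in this memcg once its dirty pages
 * exceed dirty_ratio percent of its total -- a sketch of the check
 * described above.
 */
static int memcg_over_dirty_limit(const struct memcg_stats *s,
                                  unsigned int dirty_ratio)
{
        if (!s->nr_pages)
                return 0;
        return s->nr_dirty * 100 > s->nr_pages * dirty_ratio;
}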

That is one aspect. When performing writeback we then need to figure out which inodes have dirty pages in the memcg and start writeout on those inodes, not on others whose dirty pages are elsewhere. Both of these components are in this patch for cpusets and would also have to be implemented for a memcg.
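For the cpuset case the selection boils down to an intersection test between the nodes holding an inode's dirty pages and the cpuset's allowed nodes. Again a sketch only; the per-inode dirty-node tracking and the names below are illustrative, not the patch's actual data structures.

/* Illustrative: one bit per node, standing in for a real nodemask_t. */
typedef unsigned long nodemask_t;

struct inode_dirty_info {
        nodemask_t dirty_nodes;         /* nodes holding dirty pages of this inode */
};

/*
 * Only inodes whose dirty pages intersect the cpuset's allowed nodes
 * are worth writing back when that cpuset hits its dirty limit.
 */
static int inode_dirty_in_cpuset(const struct inode_dirty_info *info,
                                 nodemask_t cpuset_nodes)
{
        return (info->dirty_nodes & cpuset_nodes) != 0;
}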

> But that doesn't solve the problem which this patchset is trying to
> solve, which is "don't let all the memory in all this group of nodes
> get dirty".

This patch would solve the problem if the dirty page calculation considered the active memcg and could determine the number of dirty pages (through some sort of additional memcg counters). That is just the first part though. The second part, finding the inodes that have dirty pages for writeback, would require an association between memcgs and inodes.
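One way to picture that association (purely hypothetical, not a proposal from this patchset; the fixed-size memcg bitmap and the names are assumptions for illustration, and it reuses the memcg_stats sketch above) is to tag each inode with the memcgs whose tasks dirtied its pages, at the same point where the memcg's dirty counter is bumped:

#define MAX_MEMCGS 64   /* illustrative bound; real memcgs are not fixed-size */

struct inode_memcg_info {
        unsigned long memcg_mask;       /* bit i set if memcg i dirtied this inode */
};

/*
 * Called when a page of this inode is dirtied by a task in memcg 'id':
 * account the dirty page to the memcg and remember the association so
 * that writeback can later pick the right inodes.
 */
static void account_inode_dirty(struct memcg_stats *memcg, int id,
                                struct inode_memcg_info *inode)
{
        memcg->nr_dirty++;
        if (id >= 0 && id < MAX_MEMCGS)
                inode->memcg_mask |= 1UL << id;
}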

> What happens if cpuset A uses nodes 0,1,2,3,4,5,6,7,8,9 and cpuset B
> uses nodes 0,1? Can activity in cpuset A cause ooms in cpuset B?

Yes, if the activity of cpuset A causes all pages in cpuset B to be dirtied and cpuset B then attempts to do writeback. The writeback will fail to acquire enough memory and reclaim becomes impossible.

Typically cpusets are not overlapped like that but used to segment the system.

The system would work correctly if the dirty ratio calculation were done on all overlapping cpusets/memcg groups that contain nodes from which allocations are permitted.
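A sketch of that rule (hypothetical structures again, reusing the illustrative nodemask_t from above; a real implementation would walk the kernel's cpuset hierarchy rather than a flat array): throttle the allocating task if any group whose nodes intersect its allowed nodes is over the dirty limit.

struct cpuset_info {
        nodemask_t nodes;               /* nodes owned by this cpuset */
        unsigned long nr_pages;         /* pages on those nodes */
        unsigned long nr_dirty;         /* dirty pages on those nodes */
};

/*
 * Throttle the allocating task if any cpuset that overlaps its allowed
 * nodes exceeds the dirty ratio -- the "all overlapping groups" rule
 * described above.
 */
static int over_any_overlapping_limit(const struct cpuset_info *sets,
                                      int nr_sets, nodemask_t allowed,
                                      unsigned int dirty_ratio)
{
        int i;

        for (i = 0; i < nr_sets; i++) {
                if (!(sets[i].nodes & allowed))
                        continue;
                if (sets[i].nr_pages &&
                    sets[i].nr_dirty * 100 > sets[i].nr_pages * dirty_ratio)
                        return 1;
        }
        return 0;
}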