Re: [PATCH 2/2] sched: Rewrite per entity runnable load average tracking

From: Yuyang Du
Date: Tue Jul 08 2014 - 04:12:27 EST


Thanks, Ben,

On Mon, Jul 07, 2014 at 03:25:07PM -0700, bsegall@xxxxxxxxxx wrote:
>
> Yeah, while this is technically limited to 1/us (per cpu), it is still
> much higher - the replaced code would do updates generally only on
> period overflow (1ms) and even then only with nontrivial delta.
>

I will update it in "batch" mode, as I replied to Peter. Whether or not to
set up a threshold so that trivial deltas are not applied remains to be seen.
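
Roughly what I have in mind (a sketch only; the threshold value, the
last_load field, the time units, and the __update_load_avg() signature
here are illustrative, not what the patch currently has):

/* Fold cfs_rq->load.weight changes into the avg in batch: only at a
 * 1024us period boundary, or when the pending delta is non-trivial.
 * 'now' is assumed to be in microseconds for this sketch.
 */
#define LOAD_DELTA_THRESHOLD	16	/* made-up threshold */

static void update_cfs_rq_load_avg_batched(u64 now, struct cfs_rq *cfs_rq)
{
	long delta = (long)cfs_rq->load.weight - (long)cfs_rq->avg.last_load;

	if (now - cfs_rq->avg.last_update_time < 1024 &&
	    abs(delta) < LOAD_DELTA_THRESHOLD)
		return;		/* batch it up for later */

	__update_load_avg(now, &cfs_rq->avg, cfs_rq->load.weight);
	cfs_rq->avg.last_load = cfs_rq->load.weight;
}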

> Also something to note is that cfs_rq->load_avg just takes samples of
> load.weight every 1us, which seems unfortunate. We thought this was ok
> for p->se.load.weight, because it isn't really likely for userspace to
> be calling nice(2) all the time, but wake/sleeps are more frequent,
> particularly on newer cpus. Still, it might not be /that/ bad.

The sampling of cfs_rq->load.weight should be equivalent to the current
code: at the end of the day, cfs_rq->load.weight's worth of runnable time
contributes to runnable_load_avg/blocked_load_avg in both the current code
and the rewrite.
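
To spell out the arithmetic: each 1024us period contributes one sample of
cfs_rq->load.weight to a geometric series with y^32 = 1/2, so both versions
converge to the same sum. A minimal sketch (the real code uses precomputed
inverse-y tables rather than this shift approximation, and the load_sum/
load_avg field names are approximate):

static u64 decay_load_sum(u64 sum, unsigned int periods)
{
	/*
	 * y^32 = 1/2: the sum halves every 32 periods of 1024us.
	 * The kernel refines periods % 32 with a lookup table;
	 * this sketch just drops the remainder.
	 */
	return sum >> (periods / 32);
}

static void sample_load(struct sched_avg *sa, unsigned long weight,
			unsigned int periods)
{
	sa->load_sum = decay_load_sum(sa->load_sum, periods);
	sa->load_sum += (u64)weight * 1024;	/* this period's sample */
	sa->load_avg = div_u64(sa->load_sum, LOAD_AVG_MAX);
}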

> Also, as a nitpick/annoyance this does a lot of
> if (entity_is_task(se)) __update_load_avg(... se ...)
> __update_load_avg(... cfs_rq_of(se) ...)
> which is just a waste of the avg struct on every group se, and all it
> buys you is the ability to not have a separate rq->avg struct (removed
> by patch 1) instead of rq->cfs.avg.

I actually struggled with this issue. Since we only need a sched_avg for
each task (not for every entity) and a sched_avg for each cfs_rq, I plan to
move the entity avg into task_struct. Does that sound good?
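
Concretely, something like this placement (a sketch; surrounding fields
elided):

/* No sched_avg in sched_entity at all: one per task, one per cfs_rq,
 * so group entities stop carrying a useless avg of their own.
 */
struct task_struct {
	/* ... existing fields ... */
	struct sched_entity	se;
	struct sched_avg	avg;	/* the task's own load average */
};

struct cfs_rq {
	/* ... existing fields ... */
	struct sched_avg	avg;	/* aggregate of runnable + blocked */
};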

That leaves the migrate_task_rq_fair() not-holding-the-lock issue and the
cfs_rq->avg.load_avg overflow issue. I need some time to study them.
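
For the migration side, one direction I am considering (not in the posted
patch; the removed_load field and its folding point are hypothetical) is to
publish the departing load atomically so the old rq's lock never needs to
be taken remotely:

static void migrate_task_rq_fair(struct task_struct *p, int next_cpu)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(&p->se);

	/*
	 * We cannot take the old rq's lock here, so only publish the
	 * removed load; the rq owner subtracts it from avg.load_avg
	 * the next time it updates the avg under its own lock.
	 */
	atomic_long_add(p->avg.load_avg, &cfs_rq->removed_load);
}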

Overall, I think none of these issues is originally caused by the
combination/split of runnable and blocked load. It is just a matter of how
synchronized we want to be (this rewrite is the most synchronized) and of
which workarounds I need to borrow from the current code.

Yuyang