Re: cgroup, RT reservation per core(s)?

From: Rolando Martins
Date: Wed Feb 11 2009 - 07:10:26 EST


On 2/11/09, Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:
> * Peter Zijlstra <peterz@xxxxxxxxxxxxx> [2009-02-11 12:42:14]:
>
>
> > On Wed, 2009-02-11 at 11:33 +0000, Rolando Martins wrote:
> >
> > > Hi again,
> > >
> > > is there any way to have multiple "distinct" sched domains, i.e.:
> > > mount -t cgroup -o cpu none /dev/sched_domain_0
> > > .... setup sched_domain_0 (ex: 90% RT, 10% Others)
> > > mount -t cgroup -o cpu none /dev/sched_domain_1
> > > .... setup sched_domain_1 (ex: 20% RT, 80% Others)
> > > Then give sched_domain_0 to cpuset A and sched_domain_1 to B?
> >
> > Nope.
> >
> > We currently only support a single instance of a cgroup controller.
> >
> > I see the use for what you propose; however, implementing that will be
> > 'interesting'.
>
>
> I am confused; if you use cpusets, you get your own sched_domain. If you
> mount cpusets and the cpu controller together, you'll get what you want.
> Is this a figment of my imagination? You might need to use exclusive
> cpusets, though.
>
> --
>
> Balbir
>
I don't know if you meant the following situation (mounting cpuset and
cpu together):

                     R
           /------------------\
          A                    B
(80% RT, 20% Others)    (100% RT, 0% Others)
(CPUs 0-2)              (CPU 3)
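
Concretely, the co-mounted setup would be something like this (a rough,
untested sketch; cpuset file names as in recent kernels, and assuming
CONFIG_RT_GROUP_SCHED):

mount -t cgroup -o cpuset,cpu none /dev/cgroup
mkdir /dev/cgroup/A /dev/cgroup/B

# stop balancing across the whole machine so the exclusive sets below
# get their own sched domains
echo 0 > /dev/cgroup/cpuset.sched_load_balance

# partition the CPUs between A and B
echo 0-2 > /dev/cgroup/A/cpuset.cpus
echo 0   > /dev/cgroup/A/cpuset.mems
echo 1   > /dev/cgroup/A/cpuset.cpu_exclusive
echo 3   > /dev/cgroup/B/cpuset.cpus
echo 0   > /dev/cgroup/B/cpuset.mems
echo 1   > /dev/cgroup/B/cpuset.cpu_exclusive

# per-group RT bandwidth over a 1s period: 80% for A, 100% for B
echo 1000000 > /dev/cgroup/A/cpu.rt_period_us
echo 800000  > /dev/cgroup/A/cpu.rt_runtime_us
echo 1000000 > /dev/cgroup/B/cpu.rt_period_us
echo 1000000 > /dev/cgroup/B/cpu.rt_runtime_us   # this write fails

The last write is rejected because the admission test sums group RT
bandwidth against the single global sched_rt_runtime_us, not per CPU.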

If so, we can't do this because of the restriction imposed by the global
rt_runtime_ns.
Perhaps a feasible solution could be implemented by having a distinct
rt_runtime_ns per CPU (i.e. rt_runtime_ns_0; ...; rt_runtime_ns_n):

                     R
           /------------------\
          A                    B
(80% RT, 20% Others)    (100% RT, 0% Others)
(CPUs 0-2)              (CPU 3)

capacity_used_cpu_0_rt = 0.8    capacity_used_cpu_3_rt = 1
capacity_used_cpu_1_rt = 0.8
capacity_used_cpu_2_rt = 0.8
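
For illustration only, such per-CPU knobs might look like this (these
files do not exist today; the names are invented):

# hypothetical per-CPU analogues of /proc/sys/kernel/sched_rt_runtime_us
echo 800000  > /proc/sys/kernel/sched_rt_runtime_us_0   # CPU 0: 80% RT
echo 800000  > /proc/sys/kernel/sched_rt_runtime_us_1   # CPU 1: 80% RT
echo 800000  > /proc/sys/kernel/sched_rt_runtime_us_2   # CPU 2: 80% RT
echo 1000000 > /proc/sys/kernel/sched_rt_runtime_us_3   # CPU 3: 100% RT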

Given a processor i, the restriction would then be enforced per CPU:
SUM(capacity_used_cpu_i_rt) <= rt_runtime_i
(the sum taken over all groups whose RT tasks may run on CPU i)
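
For example, with rt_runtime_3 = 1, B's capacity_used_cpu_3_rt = 1
saturates CPU 3 but still passes the per-CPU test, while A's 0.8 on CPUs
0-2 leaves 20% headroom there; the current global test would reject this
combination outright.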

Rol