Re: [PATCH 5/5] workqueue: Allow modifying low level unbound workqueue cpumask

From: Frederic Weisbecker
Date: Tue May 20 2014 - 16:08:37 EST


On Tue, May 20, 2014 at 03:56:56PM -0400, Tejun Heo wrote:
> > > Hmmm... but there's nothing which makes rolling back more likely to
> > > succeed compared to the original application. It's gonna allocate
> > > more pwqs. Triggering WARN_ON_ONCE() seems weird.
> >
> > Yeah, but that's the least we can do. If we even fail to recover the old cpumask,
> > the user should know about the half-applied state.
>
> I'm failing to see how it'd be better than just going through with applying
> the new mask if we're likely to end up with half-updated states
> anyway. What's the point of another layer of best-effort logic which
> is more likely to fail?

If the error is -ENOMEM then yeah, but any other error wants a rollback.

> > But it's going to imply fun with a doubly-linked list of struct pwq_allocation_object
> > and stuff. Or maybe an array. This reminds me a bit of generate_sched_domains(). It's
> > not going to be _that_ simple nor pretty :)
>
> Is it tho? Don't we just need to keep a separate staging copy of the
> prepared pwq_tbl? The commit stage can be the pwq_tbl installation.
> Looks like it shouldn't be too much of a problem. Am I missing
> something?

Sure, but that still needs an array/list to iterate over the pre-allocated objects.
Expect at least another hundred lines.
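
Very roughly something like this (just a sketch: wq_apply_ctx,
prepare_unbound_pwqs(), commit_unbound_pwqs() and free_unbound_ctx() are
made-up names, and locking is left out entirely):

/*
 * Rough sketch only, assuming it lives next to the pwq machinery in
 * kernel/workqueue.c. None of the helper names below exist.
 */
struct wq_apply_ctx {
	struct list_head	list;		/* one entry per unbound workqueue */
	struct workqueue_struct	*wq;
	struct pool_workqueue	**pwq_tbl;	/* fully allocated, nothing installed yet */
};

static int wq_apply_unbound_cpumask(cpumask_var_t cpumask)
{
	struct wq_apply_ctx *ctx, *tmp;
	struct workqueue_struct *wq;
	LIST_HEAD(ctxs);
	int ret = 0;

	/* prepare: allocate every pwq table up front, install nothing */
	list_for_each_entry(wq, &workqueues, list) {
		if (!(wq->flags & WQ_UNBOUND))
			continue;
		ctx = prepare_unbound_pwqs(wq, cpumask);
		if (IS_ERR(ctx)) {
			ret = PTR_ERR(ctx);
			goto out_free;
		}
		list_add_tail(&ctx->list, &ctxs);
	}

	/* commit: only reached once every allocation succeeded, can't fail */
	list_for_each_entry(ctx, &ctxs, list)
		commit_unbound_pwqs(ctx);
out_free:
	list_for_each_entry_safe(ctx, tmp, &ctxs, list) {
		list_del(&ctx->list);
		/* on error this also drops the never-installed pwqs */
		free_unbound_ctx(ctx);
	}
	return ret;
}

The point being that the commit loop only runs once every allocation has
succeeded, so nothing can fail halfway through.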

>
> > > 2. Proper error handling is hard. Just pr_warn() on each failure,
> > > keep trying to apply the rest, and always return 0.
> > >
> > > If #1 isn't too complicated (would it be?), it'd be the better option;
> > > otherwise, well, #2 should work most of the time, eh?
> >
> > Yeah, I think #2 should be more than enough 99% of the time :)
>
> Yeah, if #1 gets too hairy, #2 can be a reluctant option but if #1 is
> doable without too much complication, I'd much prefer proper error
> handling.

I can try, yeah.
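
And if #1 does turn out too hairy, the #2 fallback would be roughly the
following (copy_unbound_attrs() is made up here; apply_workqueue_attrs()
and free_workqueue_attrs() are the existing interfaces):

/* Option #2 as a rough sketch: best effort, warn and keep going. */
static void wq_apply_unbound_cpumask_best_effort(cpumask_var_t cpumask)
{
	struct workqueue_struct *wq;

	list_for_each_entry(wq, &workqueues, list) {
		struct workqueue_attrs *attrs;

		if (!(wq->flags & WQ_UNBOUND))
			continue;
		/* copy wq->unbound_attrs with ->cpumask restricted to @cpumask */
		attrs = copy_unbound_attrs(wq, cpumask);
		if (!attrs || apply_workqueue_attrs(wq, attrs))
			pr_warn("workqueue: failed to apply cpumask to %s\n",
				wq->name);
		free_workqueue_attrs(attrs);	/* NULL-safe */
	}
}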