Re: [PATCH RFC 1/2] MAINTAINERS: Add "R:" designated-reviewers tag

From: Theodore Ts'o
Date: Tue Jun 03 2014 - 16:53:58 EST


On Tue, Jun 03, 2014 at 01:43:47PM -0400, Steven Rostedt wrote:
> > I know from experience that a "quick" 15-minute run of xfstests on
> > a ramdisk will catch 99% of the typical problems a filesystem patch
> > might introduce. Code coverage reporting (done recently by RH QE
> > engineers) tells me that this covers between 50-70% of the
> > filesystem, VFS and MM subsystems (numbers vary with the fs being
> > tested), so that's a pretty good, fast smoke test that I can run on
> > any patch or patchset that is sent for review.
>
> Testing is fine, but I think you are undervaluing true review. It
> seems to me that all you ask others to do is run their test suite on
> the code to give a Reviewed-by. I ask people to look at the logic and
> spend a lot more time seeing if something strange is there. I've
> found that people catch things that tests usually don't. That's
> because before I post code for review, I usually run it through a lot
> of tests myself, which weeds out most of those bugs.
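
For anyone who hasn't tried the ramdisk run described above, it looks
roughly like the sketch below. The mount points and sizes are
illustrative, and the details of the original setup will differ:

    # create two ramdisks for the test and scratch devices
    modprobe brd rd_nr=2 rd_size=1048576   # /dev/ram0, /dev/ram1 (1GB each)
    mkfs.ext4 /dev/ram0
    mkdir -p /mnt/test /mnt/scratch

    # point xfstests at them via local.config:
    #   export FSTYP=ext4
    #   export TEST_DEV=/dev/ram0
    #   export TEST_DIR=/mnt/test
    #   export SCRATCH_DEV=/dev/ram1
    #   export SCRATCH_MNT=/mnt/scratch

    # run just the "quick" group; this is the ~15 minute smoke test
    ./check -g quick

Because everything lives in RAM, there is essentially no I/O wait,
which is what makes the run fast enough to use on every patch.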

I'm not sure why you guys are arguing so much about this distinction
between testing versus review as if it were an either/or sort of
situation. Both are important: there are problems that will be caught
by review that won't get caught by smoke tests, and often smoke tests
(assuming you have a good collection of tests for a particular
subsystem, which is not a given for every subsystem) can find problems
that a desk check of the code will miss.

As far as who should run the tests, the way we do things in the ext4
world is that ideally both the developer submitting the patch and the
maintainer should be running the tests; I don't worry so much about
whether the reviewer runs them. If I find problems in my testing, I'll
often point this out to the developer and try to gently push them to
test before sending code out for review. It helps that I can point
them at kvm-xfstests, a turnkey smoke-test system that requires
nothing more than running a script and waiting 30 minutes (or 16
hours or so for a full test run with the full set of configurations,
which I will ask developers making substantive changes to run instead
of just the quick smoke test).
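
The whole thing boils down to something like this (from memory; the
exact spelling depends on how your xfstests-bld checkout is set up):

    kvm-xfstests smoke     # quick sanity run, ~30 minutes
    kvm-xfstests full      # every configuration, ~16 hours

so there really is no excuse for skipping at least the smoke test
before sending a patch out.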

The way I see it, if the developer and the maintainer are running the
tests, it's not clear to me that making the reviewer run them as well
adds enough value to be worth requiring.

The important thing to note here is that we do not have consensus
across all subsystems on what Reviewed-by: means, and I think that's
OK. The Reviewed-by: tag is mostly of interest to the maintainer
before the patch is submitted to mainline. The value of keeping it in
the git commit logs after the fact seems to be a bit variable; if
there are companies blindly using it as a performance metric and this
is driving down the quality of reviews, perhaps one could even argue
that including these tags in the git commit logs is actually adding
negative value. But I don't really care about that issue, because
like most maintainers, I know the reviewers by reputation, and whether
someone explicitly says "you can add my Reviewed-by" is not as
important as their comments on the patch, one way or another.
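
For concreteness, the tag we keep talking about is nothing more than a
trailer line in the commit message, e.g. (subject and names invented
for the example):

    ext4: fix off-by-one in extent handling

    Signed-off-by: Some Developer <developer@example.com>
    Reviewed-by: Some Reviewer <reviewer@example.com>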

Regards,

- Ted