Re: Potentially unbounded allocations in seq_read?

From: Al Viro
Date: Wed Dec 11 2013 - 13:00:18 EST


On Wed, Dec 11, 2013 at 05:48:32PM +0000, Tvrtko Ursulin wrote:
> On Wed, 2013-12-11 at 17:04 +0000, Tvrtko Ursulin wrote:
> > Hi all,
> >
> > It seems that the buffer allocation in seq_read can double in size
> > indefinitely, at least I've seen that in practice with /proc/<pid>/smaps
> > (attempting to double m->size to 4M on a read of 1000 bytes). This
> > produces an ugly WARN_ON_ONCE, which should perhaps be avoided? (given
> > that it can be triggered by userspace at will)
> >
> > From the top comment in seq_file.c one would think that it is a
> > fundamental limitation of the current code that everything which will be
> > read (even if in chunks) needs to be in the kernel side buffer at the
> > same time?
>
> Oh-oh, it seems that m->size is doubled on every read. So if an app is reading
> with a buffer smaller than the data available, it can do nine reads before
> it hits a >MAX_ORDER allocation. Not good. :)

Huh? Is that from observation or from reading the code? If it's the former,
I would really like to see details; if it's the latter... you are misreading
it. m->size is doubled until it's large enough to hold the ->show() output; the
size argument of seq_read() has nothing to do with that. Once the damn
thing is large enough, read() is served from it. So are subsequent reads,
until you manage to eat all that had been generated. Then the same buffer
is used for the next entry; again, no doubling unless that next entry is
even bigger and won't fit. Doubling on each read(2) takes a really strange
iterator to trigger, and you'd need ->show() to spew bigger and bigger
entries. Again - details, please...
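
For illustration, here is a stand-alone sketch of that growth rule. It is only
a simulation of the pattern described above, not the real fs/seq_file.c code:
emit() is a made-up stand-in for ->show(), and the entry size is picked
arbitrarily to force a couple of doublings.

/* Illustrative only: a stand-alone simulation of the growth rule
 * described above, not the real fs/seq_file.c.  emit() is a made-up
 * stand-in for ->show(): it either fits its entry into buf or reports
 * overflow, the way an overflowing seq_printf() would. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static size_t bufsize = 4096;	/* starts at one page, like m->size */
static char *buf;

static int emit(const char *entry)
{
	if (strlen(entry) + 1 > bufsize)
		return -1;	/* overflow: caller must grow and retry */
	strcpy(buf, entry);
	return 0;
}

int main(void)
{
	size_t entry_len = 10000;	/* pretend one entry is ~10000 bytes */
	char *entry = malloc(entry_len + 1);

	memset(entry, 'x', entry_len);
	entry[entry_len] = '\0';

	buf = malloc(bufsize);
	/* Grow only while a single entry overflows the buffer: throw the
	 * buffer away, double the size, try again.  Nothing here depends
	 * on how many bytes the reader asked for in read(2). */
	while (emit(entry) < 0) {
		free(buf);
		bufsize <<= 1;
		buf = malloc(bufsize);
		printf("grew buffer to %zu\n", bufsize);
	}
	printf("entry fits in a %zu-byte buffer; subsequent reads are served from it\n",
	       bufsize);
	free(buf);
	free(entry);
	return 0;
}

The point of the sketch is that growth is driven by the size of a single
entry, not by the count argument of read(2).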