Re: Potentially unbounded allocations in seq_read?

From: Tvrtko Ursulin
Date: Thu Dec 12 2013 - 09:52:39 EST


On Thu, 2013-12-12 at 14:21 +0000, Al Viro wrote:
> On Thu, Dec 12, 2013 at 01:59:31PM +0000, Tvrtko Ursulin wrote:
>
> > > a) *what* errors other than -ENAMETOOLONG?
> >
> > Is this your way of saying there can't be any other errors from d_path?
>
> Check yourself... It is the only error that makes sense there and yes,
> it is the only one being returned.

Cool then, that's why I e-mailed you in the first place — you know
this area very well.

> > > b) d_path() not fitting into 2Mb is definitely a bug. If you really have
> > > managed to get a dentry tree 1 million levels deep, you have much worse
> > > problems.
> > > c) which kernel version it is?
> >
> > 3.10. I can't imagine this is an actual dentry tree somewhere, probably
> > just a bug of some sort. I'll probably hunt it down completely some time
> > next week, time permitting.
>
> Sounds like missing backport of 118b23 ("cope with potentially long ->d_dname()
> output for shmem/hugetlb"). It *is* in -stable (linux-3.10.y has it since
> 3.10.17 as commit ad4c3c), but if your tree doesn't have it, that's the one
> to try first...

Yes that fixed it, thanks!

Regards,

Tvrtko

