Re: [RFC] [PATCH] cache pollution aware __copy_from_user_ll()

From: Hiro Yoshioka
Date: Mon Aug 15 2005 - 03:44:29 EST


Hi,

I appreciate your suggestion.

On 8/15/05, Arjan van de Ven <arjan@xxxxxxxxxxxxx> wrote:
>
> > Anyway, we have not yet found any big regression with the cache-aware
> > version of __copy_from_user_ll.
>
>
> That is because you spread the cache misses out from one place to
> everywhere, so no single point sticks out anymore.
>
> Do you agree that your copy is less optimal for the case where the
> kernel will (almost) immediately use the data?

Yes, I do.

My server has 8KB of L1 cache (512KB of L2, 2MB of L3).

If you move more than 4KB of data with __copy_from_user_ll(), the data
spills out of the L1 cache into L2 (or L3). When you move a huge amount
of data (> 1MB), even the L3 cache will not help.
(This is known as cache pollution.)
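
To illustrate what I mean by avoiding the pollution (this is only a
userspace sketch I wrote for this mail, not the patch itself, and
copy_nocache() is a name I made up), non-temporal stores write around
the caches, so a large copy does not evict the working set:

#include <emmintrin.h>  /* SSE2: _mm_stream_si32, _mm_sfence */
#include <stddef.h>
#include <string.h>

/*
 * Copy len bytes with non-temporal (movnti) stores so the destination
 * lines are not pulled into the cache hierarchy.
 */
static void copy_nocache(void *dst, const void *src, size_t len)
{
	int *d = dst;
	const int *s = src;
	size_t words = len / sizeof(int);
	size_t i;

	for (i = 0; i < words; i++)
		_mm_stream_si32(&d[i], s[i]);  /* store bypasses the cache */
	_mm_sfence();                          /* order the streaming stores */

	/* copy any sub-word tail the normal (cached) way */
	memcpy((char *)dst + words * sizeof(int),
	       (const char *)src + words * sizeof(int),
	       len % sizeof(int));
}

The patch does the equivalent inside __copy_from_user_ll(); the sketch
only shows the mechanism.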

> I agree that your copy is really nice for places where the kernel will
> NOT use the data on the cpu, say for big write() system calls.
>
> My suggestion is to recognize that there are basically two different
> use cases, and that in the code the first one is very common, while in
> your profiles the second one is very common. Based on that, I suggest
> making a special copy_from_user_nocache() API for the cases where the
> kernel will not use the data (ignoring software raid5 here) and using
> your excellent version for that API, while leaving the code alone for
> the cases where the kernel WILL use the data. Code-wise the "will use"
> case is the vast majority, so changing only the few places that know
> they don't use the data will be very efficient, and it will give an
> immediate big improvement in your profile data, since those few places
> tend to be used a lot in the cases you benchmark.

copy_from_user_nocache() is fine.

But I don't know where I can use it yet. (I'm not very familiar with
the Linux kernel file system code.)
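
To check my understanding of where such an API would fit, would a call
site look something like this? (copy_from_user_nocache() does not exist
yet, and the helper and flag names below are only my illustration.)

/*
 * Hypothetical call site: a path such as a big write() that knows the
 * kernel will not read the copied data soon opts into the non-caching
 * copy; everything else keeps the normal cached copy.
 */
static unsigned long
copy_user_maybe_nocache(void *dst, const void __user *src,
			unsigned long len, int data_is_cold)
{
	if (data_is_cold)
		return copy_from_user_nocache(dst, src, len); /* proposed */
	return copy_from_user(dst, src, len);                 /* existing */
}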

Regards,
Hiro