Re: [2.4 patch] fix CONFIG_X86_L1_CACHE_SHIFT

From: Jeff Garzik
Date: Mon Sep 08 2003 - 12:26:16 EST


On Mon, Sep 08, 2003 at 06:07:51PM +0100, Jamie Lokier wrote:
> Adrian Bunk wrote:
> > > Why requires? On x86, the cpu caches are fully coherent. A too small L1
> > > cache shift results in false sharing on SMP, but it shouldn't cause the
> > > described problems.
> > >...
> >
> > Thanks for the correction, I falsely thought CONFIG_X86_L1_CACHE_SHIFT
> > does something different than it does.
>
> Were there any changes in the kernel to do with PCI MWI settings?

Yes; I've lost the specific context of the thread, but I have been
working on MWI/cacheline size issues along with IvanK for a while.

It's apparently the responsibility of the OS to fill in correct
PCI_CACHE_LINE_SIZE values; on generic kernels that has to happen at
runtime (pci_cache_line_size), not at compile time (SMP_CACHE_BYTES,
etc.).
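
Roughly, the runtime fixup amounts to something like this (a sketch
only, not the exact 2.4 code; pci_cache_line_size here stands for
whatever line size the architecture detected on the boot CPU, in
32-bit units):

	#include <linux/pci.h>
	#include <linux/errno.h>

	/*
	 * Sketch: program the device's PCI_CACHE_LINE_SIZE register from
	 * a value detected at runtime, never from a compile-time constant
	 * like SMP_CACHE_BYTES.
	 */
	static int set_cacheline_size(struct pci_dev *dev)
	{
		u8 size;

		pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &size);
		if (size != pci_cache_line_size)
			pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE,
					      pci_cache_line_size);

		/* re-read; some devices hard-wire this register */
		pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &size);
		return (size == pci_cache_line_size) ? 0 : -EINVAL;
	}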

If you don't call pci_set_mwi() for a PCI device, which triggers the
cacheline size fixups and other checks, then using
Memory-Write-Invalidate (MWI) is definitely wrong. And on an older
kernel, without the latest MWI changes, you could wind up programming
a cacheline size smaller than the one your current processor actually
uses (again, because of generic kernels).
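
In a driver that wants MWI, that boils down to something like this
(sketch; use_mwi is just a hypothetical driver flag):

	/*
	 * Sketch: enable MWI only through pci_set_mwi(), so the cacheline
	 * size checks/fixups actually run.  If it fails, fall back to
	 * plain memory writes instead of setting PCI_COMMAND_INVALIDATE
	 * by hand.
	 */
	if (pci_set_mwi(pdev) == 0)
		use_mwi = 1;
	else
		use_mwi = 0;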

If a feature/device/whatever can be programmed with the cacheline size
at runtime, that will always be the preference. With a compile-time
constant for cacheline size, you are _guaranteed_ it will be wrong in
some case (e.g. a generic kernel built assuming 32-byte cachelines
running on a Pentium 4, whose CONFIG_X86_L1_CACHE_SHIFT would be 7,
i.e. 128 bytes).

Jeff


