Re: [PATCH 1/7] mips: dmi: Fix early remap on MIPS32

From: Jiaxun Yang
Date: Fri Nov 24 2023 - 17:34:54 EST




On Fri, Nov 24, 2023, at 6:52 PM, Serge Semin wrote:
> On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
>>
>>
>> On Thu, Nov 23, 2023, at 4:07 PM, Thomas Bogendoerfer wrote:
>> > On Thu, Nov 23, 2023 at 03:07:09PM +0000, Jiaxun Yang wrote:
>> >>
>> [...]
>> >
>> > the problem with all 32bit unmapped segments is their limitations in
>> > size. But there is always room to try to use unmapped and fall back
>> > to mapped, if it doesn't work. But I doubt anybody is going to
>> > implement that.
>>
>> Yep, I guess fallback should be implemented for ioremap_cache as well.
>>
>> >
>> >> >> AFAIK on Loongson the DMI tables are located in cached memory, so
>> >> >> using ioremap_uc blindly will cause inconsistency.
>> >> >
>> >> > why ?
>> >>
>> >> Firmware sometimes does not flush those tables from cache back to memory.
>> >> For Loongson systems (as well as most MTI systems) cache is enabled by
>> >> firmware.
>> >
>> > kernel flushes all caches on startup, so there shouldn't be a problem.
>>
>> Actually dmi_setup() is called before cpu_cache_init().
>
> To preliminary sum the discussion, indeed there can be issues on the
> platforms which have DMI initialized on the cached region. Here are
> several solutions and additional difficulties I think may be caused by
> implementing them:
>
> 1. Use the unmapped cached region (KSEG0) in the MIPS32 ioremap_prot()
> method.
> This solution is a bit clumsier than it looks at first glance.
> ioremap_prot() can be used for various types of cacheability
> mapping: currently the default-cacheable CA preserved in the
> _page_cachable_default variable and the write-combined CA saved in
> boot_cpu_data.writecombine. Based on that, we would need to use the
> unmapped cached region only for the IO-remaps called with the
> "_page_cachable_default" mapping flags. The rest of the IO
> range mappings, including the write-combined ones, would still be
> handled by VM means. This would make ioremap_prot() a bit
> less maintainable, but it still wouldn't be that hard to implement
> (unless I miss something):
> --- a/arch/mips/mm/ioremap.c
> +++ b/arch/mips/mm/ioremap.c
> /*
> - * Map uncached objects in the low 512mb of address space using KSEG1,
> - * otherwise map using page tables.
> + * Map uncached/default-cached objects in the low 512mb of address
> + * space using KSEG1/KSEG0, otherwise map using page tables.
> */
> - if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
> - flags == _CACHE_UNCACHED)
> - return (void __iomem *) CKSEG1ADDR(phys_addr);
> + if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
> + if (flags == _CACHE_UNCACHED)
> + return (void __iomem *) CKSEG1ADDR(phys_addr);
> + else if (flags == _page_cachable_default)
> + return (void __iomem *) CKSEG0ADDR(phys_addr);
> + }
>
A nit: _page_cachable_default is set in cpu_cache_init() as well. We'd
better move that assignment to cpu-probe.c, or give the variable a
reasonable default value.

Thanks
--
- Jiaxun