Re: [PATCH] Hisilicon LPC driver

From: Arnd Bergmann
Date: Wed Dec 02 2015 - 08:37:20 EST


On Wednesday 02 December 2015 18:11:14 Rongrong Zou wrote:
> > On 2015/12/1 18:00, Arnd Bergmann wrote:
> > On Tuesday 01 December 2015 15:58:36 Rongrong Zou wrote:
> >> On 2015/11/30 21:19, Arnd Bergmann wrote:
> >>> On Monday 30 November 2015 21:07:17 Rongrong Zou wrote:
> >>>> + LPC_REG_WRITE(lpc_dev->regs + HS_LPC_REG_IRQ_ST, HS_LPC_IRQ_CLEAR);
> >>>> + retry = 0;
> >>>> + while (0 == (LPC_REG_READ(lpc_dev->regs + HS_LPC_REG_OP_STATUS,
> >>>> + lpc_op_state_value) & HS_LPC_STATUS_DILE)) {
> >>>> + udelay(1);
> >>>> + retry++;
> >>>> + if (retry >= 10000) {
> >>>> + dev_err(lpc_dev->dev, "lpc W, wait idle time out\n");
> >>>> + return -ETIME;
> >>>> + }
> >>>> + }
> >>>
> >>> Better release the spinlock here and call a sleeping function for the wait.
> >>> If the timeout is 10ms, you definitely don't want to keep interrupts disabled
> >>> the whole time.
> >>>
> >>> If you can't find a good way to retry after getting the lock back, maybe
> >>> use a mutex here that you can keep locked the whole time.
> >>>
> >>
> >> The interface "lpc_io_read_byte" may be called in IRQ context by UART driver,
> >> and in process context by ipmi driver.
> >
> > inb/outb cannot return an error though, so the timeout handling will
> > have to change.
> >
> > How did you determine the 10ms timeout? What is the scenario in which
> > the bus takes an extended time, or times out?
>
> I checked the SoC design; the bus hardware waits 65536 cycles (33 MHz clock) before
> timing out. That is about 2 ms, so 10 ms is too long. The timeout occurs when the
> addressed device is absent.

So you mean any access to an I/O port that isn't there can take several
milliseconds? That does seem really long still (though slightly better than
10ms or more that you are waiting above).

> >> It is not a PCI I/O space, although we want to access it like I/O space.
> >> Could you explain how to hook it up to the generic inb/outb functions?
> >
> > It's the same thing really, and we really want all I/O space to show
> > up in /proc/ioports and be accessible through a common interface.
> > As this is supposed to be ISA compatible, I think you may want to
> > enforce this one to come first, so all ISA drivers see the respective
> > devices at low port numbers that may be hardwired. This is also required
> > for ISAPNP operation.
>
> This is what I want, but I still have some problems with the implementation.
> Do you mean I should redefine inb/outb in arch/arm64/kernel?
>
> My solution: redefine inb(addr)
>
> inb(addr)
> {
>     if (addr is legacy_io_addr)
>         call lpc_io_inb;
>     else
>         call readb(PCI_IOBASE + addr);
> }

Yes, that would work, but also needs some if(IS_ENABLED(CONFIG_BROKEN_IOPORT))
etc. We probably don't want to enable this by default, only when building
a kernel for this SoC.

I would also make it a callback pointer, so your driver can be a
loadable module, and we can have different implementations in case
someone else also has their own incompatible LPC or PCI host bridge.

> > I would expect that the I/O space on your LPC bus is muxed with the
> > PCI I/O space as it is typically done for x86 machines as well, can
> > you check if that is the case?
>
> The legacy I/O space can be reserved when we request PCI I/O resources on
> our platform, but I'm not sure that can be done on other ARM SoCs. Legacy
> ISA I/O is specified in the PC99 specification, not in ARM.

LPC is a standard bus and is implemented in all kinds of hardware; it has
nothing to do with the CPU architecture. Usually it is part of the PCI
host bridge, but it can also be a separate PCIe chip.

Reserving the whole legacy range is problematic if you also want to have
a VGA card, as the VGA BIOS (e.g. for nvidia) typically needs to access
some of the legacy VGA I/O ports for its POST. If the hardware doesn't
mux between the LPC and PCI I/O ports, you may have to do this in your
driver, so it tries to access both.

Arnd
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/