Re: Regression: ftdi_sio is slow (since Wed Oct 10 15:05:06 2012)

From: Peter Hurley
Date: Sat May 04 2013 - 07:40:13 EST


On 05/04/2013 07:15 AM, Johan Hovold wrote:
> On Sat, May 04, 2013 at 01:50:42AM +0400, Stas Sergeev wrote:
>> 04.05.2013 00:34, Greg KH wrote:
>>> On Fri, May 03, 2013 at 10:27:18PM +0400, Stas Sergeev wrote:
>>>> 03.05.2013 21:16, Greg KH wrote:

[...]

>>>>> There's no guarantee as to how long select or an ioctl will take, and
>>>>> now that we have fixed another bug, this device is slower.
>>>>>
>>>>> If you change hardware types to use a different usb to serial chip, that
>>>>> select call might take 4 times as long. Are we somehow supposed to
>>>>> change the kernel to "fix" that?
>>>> Previously, the kernel did not query the device at all, so select()
>>>> was independent of the chip, and it was fast. I was not aware you had
>>>> changed that deliberately.
>>> I don't understand, what do you mean by this? Some drivers just return
>>> the value of an internally held number, and don't query the device.
>>>
>>> The only way the FTDI driver can determine if the hardware buffer on the
>>> chip way out on the end of the USB cable is empty or not, is to query
>>> it. So the driver now does so.
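
Concretely, the difference being described looks roughly like this. A
sketch only, with a made-up my_hw_fifo_nonempty() helper standing in for
the actual FTDI status query:

static int cheap_chars_in_buffer(struct tty_struct *tty)
{
	/*
	 * Cheap variant: count only the bytes still queued in the
	 * driver's software buffers.  No USB traffic at all.
	 */
	return usb_serial_generic_chars_in_buffer(tty);
}

static int accurate_chars_in_buffer(struct tty_struct *tty)
{
	struct usb_serial_port *port = tty->driver_data;
	int chars = usb_serial_generic_chars_in_buffer(tty);

	if (chars)
		return chars;

	/*
	 * Software buffers are empty, but the chip's FIFO may not be.
	 * Asking it costs a control transfer on the bus, which is where
	 * the extra few hundred microseconds per call come from.
	 */
	return my_hw_fifo_nonempty(port) ? 1 : 0;	/* made-up helper */
}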
>> It does so only for one char, and the query takes longer than just
>> transmitting that char. So why do you think this even works as
>> expected?

> The query takes longer than the transmit at decent baudrates (>=38k)
> and under the assumption that flow control isn't causing any delays.
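
Rough arithmetic to back that up, assuming 8N1 framing (10 bits per
character on the wire): at 38400 baud one character takes 10 / 38400 s,
i.e. roughly 260 us, already less than the 300-400 us Johan measures for
the query below. At 9600 baud a character takes about 1 ms, so there the
transmit still dominates.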

> But you do have a point, and I have been meaning to look into whether
> the added overhead of checking the hardware buffers could be mitigated
> by adding wait_until_sent support to usb-serial. This way we would
> only query the hardware buffers on tty_wait_until_sent (e.g. at close)
> and select and TIOCOUTQ would not suffer. This is also the way things
> are handled in serial_core.

Agreed. This is the correct solution.
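
Roughly the shape I'd expect it to take; a sketch only, with a made-up
my_tx_empty() helper for the hardware query (timeout in jiffies, as
tty_wait_until_sent() passes it):

static void my_wait_until_sent(struct tty_struct *tty, long timeout)
{
	struct usb_serial_port *port = tty->driver_data;
	unsigned long expire = jiffies + timeout;

	/*
	 * Poll the hardware FIFO only here, on tty_wait_until_sent()
	 * (close, tcdrain), so chars_in_buffer() can go back to
	 * reporting just the software count.
	 */
	while (!my_tx_empty(port)) {		/* made-up hw query */
		if (timeout && time_after(jiffies, expire))
			break;
		if (msleep_interruptible(10))
			break;			/* caught a signal */
	}
}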

> I'll prepare a series which adds wait_until_sent to usb-serial, but I
> doubt it would be stable material (even if it could get into 3.10).

> What do you think, Greg: is this overhead to chars_in_buffer reason
> enough to disable it in the stable trees, or should we simply fix it in
> 3.11 (or 3.10)? (The overhead is about 300-400 us per call when the port
> fifo is empty, which makes chars_in_buffer about 100 times slower on my
> test system.)

A better solution for stable would be to set port->drain_delay. It
won't help tcdrain(), but at least the port won't shut down with live
outbound data.
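
I.e., something along these lines in the driver's port_probe(); a
sketch, with the value sized to the device's hardware FIFO
(tty_port_close_start() then sleeps long enough at close for that many
characters to drain at the current baud rate):

static int my_port_probe(struct usb_serial_port *port)
{
	/*
	 * A non-zero drain_delay makes tty_port_close_start() wait,
	 * scaled by the baud rate, for this many bytes of hardware
	 * FIFO to drain before the port is shut down.
	 */
	port->port.drain_delay = 256;	/* FIFO size; device-specific */
	return 0;
}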

Regards,
Peter Hurley
