Re: [PATCH v3 00/36] thunderbolt: Software connection manager improvements

From: Mika Westerberg
Date: Thu Mar 28 2019 - 12:56:29 EST


On Thu, Mar 28, 2019 at 03:17:57PM +0000, Mario.Limonciello@xxxxxxxx wrote:
> > -----Original Message-----
> > From: Mika Westerberg <mika.westerberg@xxxxxxxxxxxxxxx>
> > Sent: Thursday, March 28, 2019 7:36 AM
> > To: linux-kernel@xxxxxxxxxxxxxxx
> > Cc: Michael Jamet; Yehezkel Bernat; Andreas Noever; Lukas Wunner; David S .
> > Miller; Andy Shevchenko; Christian Kellner; Limonciello, Mario; Mika Westerberg;
> > netdev@xxxxxxxxxxxxxxx
> > Subject: [PATCH v3 00/36] thunderbolt: Software connection manager
> > improvements
> >
> > Hi,
> >
> > This is the third iteration of the patch series that intends to bring the
> > same kind of functionality to older Apple systems that we already have on
> > PCs. The software connection manager is used on Apple hardware with Light
> > Ridge, Cactus Ridge or Falcon Ridge controllers to create PCIe tunnels when
> > a Thunderbolt device is connected. Currently only one PCIe tunnel is
> > supported. On newer Alpine Ridge based Apple systems the driver starts the
> > firmware, which then takes care of creating the tunnels.
> >
> > This series improves the software connection manager so that it will
> > support:
> >
> > - Full PCIe daisy chains (up to 6 devices)
> > - DisplayPort tunneling
> > - P2P networking
> >
> > We also add support for Titan Ridge based Apple systems where we can use
> > the same flows as with Alpine Ridge to start the firmware.
>
> It seems to me that the expectation would be that PC system firmware and TBT
> controller firmware are configured to behave like Apple systems in order to
> use this SW connection manager instead of the ICM in AR/TR FW.
>
> Is there an intent to eventually offer a way to "side-step" the TBT ICM and try to use this instead
> without firmware support?

Yes, that's the intention.

> >
> > This applies on top of thunderbolt.git/next.
> >
> > Christian, Mario, do you see any issues with patch [05/36] regarding bolt
> > and fwupd? The kernel is supposed to restart the syscall automatically, so
> > userspace should not be affected, but I wanted to check with you.
>
> I don't see a problem for fwupd in this area.

OK, thanks for checking.
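
For what it's worth, the syscalls in question here are plain reads and
writes on sysfs attributes, and the kernel restarts them transparently,
so neither daemon should ever see -EINTR. Even a client that wanted to
handle it explicitly would only need the usual retry loop. A rough
sketch in C (the nvm_authenticate path below is just an example; the
real device name depends on the topology):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Write a value to a sysfs attribute, retrying on EINTR */
static int sysfs_write(const char *path, const char *val, size_t len)
{
	ssize_t n;
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;

	do {
		n = write(fd, val, len);
	} while (n < 0 && errno == EINTR);

	close(fd);
	return n < 0 ? -1 : 0;
}

int main(void)
{
	/* Example only: fwupd writes 1 here to trigger NVM upgrade */
	if (sysfs_write("/sys/bus/thunderbolt/devices/0-1/nvm_authenticate",
			"1", 1)) {
		perror("sysfs_write");
		return 1;
	}
	return 0;
}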

> > Previous version of the patch series can be viewed here:
> >
> > v2: https://lkml.org/lkml/2019/2/6/347
> > v1: https://lkml.org/lkml/2019/1/29/924
> >
> > Making v3 took longer than I anticipated, mostly due to some issues I ran
> > into while testing the new changes. There are quite a few changes, so I
> > dropped the reviewed-by tags I got for v2. Below is the list of major
> > changes from the previous version:
> >
> > * Always set port->remote even in the case of a dual link connection.
> >
> > * Leave (DP, PCIe) tunnels up when the driver is unloaded. When loaded
> > back, it discovers the existing tunnels and updates the data structures
> > accordingly. I noticed that the code in v2 did not properly handle cases
> > where you unplug something before the driver gets loaded back. This
> > version tears down partial paths during discovery.
> >
> > * Do not automatically create PCIe tunnels. Instead we implement the
> > "user" security level in the software connection manager as well, taking
> > advantage of the existing sysfs interfaces. This allows the user to
> > disable PCIe tunneling completely or to implement different whitelisting
> > policies. Major distros include the bolt system daemon that takes care of
> > this.
>
> This is a bit unfortunate. Is this because of IOMMU limitations in working
> with devices down the chain?

No, it just makes it possible to do things such as "disable all PCIe
tunneling", like the master switch we have in the GNOME UI. Even with
full IOMMU support, it still does not protect against misbehaving
devices.

This also allows other kinds of whitelisting, such as supporting
devices only from certain "known" vendors.

IOMMU is still the primary protection against DMA attacks.
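
With the "user" security level the flow is the same as with the
firmware connection manager: bolt (or the user directly) checks the
domain security level and then writes to the device's "authorized"
attribute. A rough sketch in C (the "0-1" device name is just an
example; real names depend on the topology):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char level[16] = { 0 };
	int fd;

	/* Check which security level the domain uses */
	fd = open("/sys/bus/thunderbolt/devices/domain0/security", O_RDONLY);
	if (fd < 0) {
		perror("open security");
		return 1;
	}
	if (read(fd, level, sizeof(level) - 1) < 0) {
		perror("read security");
		close(fd);
		return 1;
	}
	close(fd);

	/* This sketch only handles the "user" level */
	if (strncmp(level, "user", 4))
		return 0;

	/* Authorize the first device behind the host router */
	fd = open("/sys/bus/thunderbolt/devices/0-1/authorized", O_WRONLY);
	if (fd < 0) {
		perror("open authorized");
		return 1;
	}
	if (write(fd, "1", 1) < 0)
		perror("write authorized");
	close(fd);
	return 0;
}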