Re: [PATCH v3 2/2] firmware: arm_scmi: Augment SMC/HVC to allow optional parameter

From: Nikunj Kela
Date: Tue Apr 18 2023 - 10:21:14 EST



On 4/18/2023 2:58 AM, Sudeep Holla wrote:
> On Mon, Apr 17, 2023 at 11:01:13AM -0700, Florian Fainelli wrote:
>> On 4/17/23 10:44, Nikunj Kela wrote:
>>> This patch add support for passing shmem channel address as parameter
>>> in smc/hvc call. This patch is useful when multiple scmi instances are
>>> using same smc-id and firmware needs to distiguish among the instances.
>>
>> Typo: distinguish.

>> It really would have been a lot clearer and made a whole lot more sense
>> to encode a VM ID/channel number within some of the SMCCC parameters,
>> possibly as part of the function ID itself.

> Yes, I was about to suggest this, but then remembered one main reason I
> have been given in the past against it: if the system launches a high
> number of VMs, then loads of FIDs need to be reserved for SCMI and the
> hypervisor needs to support them all. Basically it is not scalable, which
> I agree with, though I am not sure such large numbers of VMs are used
> with SCMI in reality. But I do agree with the technical reasoning.

> The other reason why I preferred the shmem address itself over some
> custom VM ID/channel number is that the latter can easily become vendor
> specific for no good reason, and then we would need to add support for
> each such different parameter. Nikunj suggested getting them from DT,
> which I really don't like if the sole reason is just to identify the
> channel. We don't have a standard for SCMI over SMC/HVC, and allowing
> such deviations with parameters from DT will just explode into various
> combinations for silly/no reason.

Would you be OK with passing the smc/hvc parameters on the kernel command line? If so, I can implement that. I just wanted to keep everything in one place, hence I suggested using the DTB node.
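For reference, encoding a per-VM channel into the function ID, as suggested above, would look roughly like the sketch below. The SiP owner, the 0x0200 base and the vm_id range are invented purely for illustration, but it shows why every additional VM consumes another FID:

#include <linux/arm-smccc.h>

/*
 * Hypothetical sketch only: one SiP fast-call function ID per
 * VM/channel. The 0x0200 base is made up; the point is that the
 * reserved FID range has to grow with the number of VMs.
 */
#define SCMI_VM_FUNC_ID(vm_id)					\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,		\
			   ARM_SMCCC_SMC_32,			\
			   ARM_SMCCC_OWNER_SIP,		\
			   0x0200 + (vm_id))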

[...]

>>> @@ -137,6 +144,8 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
>>>  	if (ret < 0)
>>>  		return ret;
>>> +	if (of_device_is_compatible(dev->of_node, "arm,scmi-smc-param"))
>>> +		scmi_info->param = res.start;
>> There is not even a check that this is going to be part of the kernel's
>> view of memory; that seems a bit brittle, and possibly a security hole,
>> too. Your hypervisor presumably needs to have carved out some amount of
>> memory for the messages to be written to/read from, and so would the VM
>> kernel, so eventually we should have a 'reserved-memory' entry of some
>> sort, no?

> Not disagreeing here, just checking whether my understanding is correct.
> IIUC, we need reserved-memory only if the region is part of the memory
> presented to the OS, right? You don't need it if it is dedicated memory,
> like part of an SRAM or something similar.
We are not using a reserved-memory node. Instead, we use mmio-sram nodes to carve out the shmem regions for the SCMI instances.
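For context, the res.start stored by the patch above is resolved from the channel's "shmem" phandle; a simplified sketch of what smc_chan_setup() already does today (error reporting omitted):

#include <linux/of.h>
#include <linux/of_address.h>

/* Simplified: resolve the per-channel shared memory region */
static int scmi_smc_get_shmem(struct device_node *chan_np,
			      struct resource *res)
{
	struct device_node *np;
	int ret;

	np = of_parse_phandle(chan_np, "shmem", 0);
	if (!np)
		return -ENODEV;

	ret = of_address_to_resource(np, 0, res);
	of_node_put(np);
	/* On success, res->start is what the patch reuses as the param */
	return ret;
}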
>>>  	/*
>>>  	 * If there is an interrupt named "a2p", then the service and
>>>  	 * completion of a message is signaled by an interrupt rather than by
>>> @@ -156,6 +165,7 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
>>>  	}
>>>  	scmi_info->func_id = func_id;
>>> +	scmi_info->is_smc64 = ARM_SMCCC_IS_64(func_id);
>>>  	scmi_info->cinfo = cinfo;
>>>  	smc_channel_lock_init(scmi_info);
>>>  	cinfo->transport_info = scmi_info;
>>> @@ -188,7 +198,20 @@ static int smc_send_message(struct scmi_chan_info *cinfo,
>>>  	shmem_tx_prepare(scmi_info->shmem, xfer, cinfo);
>>> -	arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res);
>>> +#ifdef CONFIG_ARM64
>>> +	/*
>>> +	 * If the SMC32 convention is used, pass the 64-bit address in
>>> +	 * two parameters.
>>> +	 */
>>> +	if (!scmi_info->is_smc64)
>> There is no need for scmi_info to store is_smc64; just check func_id
>> here and declare is_smc64 as a local variable in the function.

> +1
ACK!
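Something along these lines in smc_send_message(), then (untested sketch; whether the upper or lower half goes first is my assumption, the firmware side just has to agree):

	/* Check the calling convention locally instead of caching it */
	bool is_smc64 = ARM_SMCCC_IS_64(scmi_info->func_id);
	unsigned long param0 = 0, param1 = 0;

	if (is_smc64) {
		/* SMC64: the whole shmem address fits in one register */
		param0 = scmi_info->param;
	} else {
		/* SMC32: split the 64-bit address across two registers */
		param0 = upper_32_bits(scmi_info->param);
		param1 = lower_32_bits(scmi_info->param);
	}

	arm_smccc_1_1_invoke(scmi_info->func_id, param0, param1,
			     0, 0, 0, 0, 0, &res);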
>> Also, another way to approach this would be to encode the parameter
>> region in 4KB units, such that even on a 32-bit system with LPAE you
>> are guaranteed to fit the region into a 32-bit unsigned long. AFAIR
>> virtualization and LPAE are indistinguishable on real CPUs?

> Agree with the idea. But can a single 4kB page be used for multiple
> shmems across VMs? IIUC the hypervisor can deal with that, so I would
> prefer it if it is practically possible.
We have multiple VMs and each VM has multiple SCMI instances. We will have quite a few domains, and I am not sure a single page will suffice.
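For reference, the 4KB-unit encoding described above would be roughly the following on the kernel side (illustrative only; it assumes a fixed 4KB granule agreed with the firmware, independent of the kernel's PAGE_SIZE):

	/*
	 * Pass the shmem address in 4KB units so that, e.g., a 40-bit
	 * LPAE address still fits in a 32-bit unsigned long. The shift
	 * by 12 (4KB granule) must match the firmware/hypervisor side.
	 */
	unsigned long param = scmi_info->param >> 12;

	arm_smccc_1_1_invoke(scmi_info->func_id, param,
			     0, 0, 0, 0, 0, 0, &res);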
> --
> Regards,
> Sudeep