Re: [PATCH v1] misc: fastrpc: Collect driver-remote processor transaction logs

From: Ekansh Gupta
Date: Wed Jun 07 2023 - 02:51:51 EST

On 6/6/2023 11:12 PM, Greg KH wrote:
On Tue, Jun 06, 2023 at 10:25:55PM +0530, Ekansh Gupta wrote:
Add changes to collect driver-remote processor rpmsg transaction
logs. These logs carry payload information for each rpmsg message
instance. The logs are channel specific and are collected in the
channel context structure.

These rpmsg transaction logs can help improve debuggability, as all
requests from processes are captured in the channel context
structure.

Signed-off-by: Ekansh Gupta <quic_ekangupt@xxxxxxxxxxx>
---
drivers/misc/fastrpc.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 88 insertions(+)

diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
index 30d4d04..6447cee 100644
--- a/drivers/misc/fastrpc.c
+++ b/drivers/misc/fastrpc.c
@@ -104,6 +104,9 @@
#define miscdev_to_fdevice(d) container_of(d, struct fastrpc_device, miscdev)
+/* Length of glink transaction history to store */
+#define GLINK_MSG_HISTORY_LEN (128)
+
static const char *domains[FASTRPC_DEV_MAX] = { "adsp", "mdsp",
"sdsp", "cdsp"};
struct fastrpc_phy_page {
@@ -181,6 +184,28 @@ struct fastrpc_invoke_rsp {
int retval; /* invoke return value */
};
+struct fastrpc_tx_msg {
+ struct fastrpc_msg msg; /* Msg sent to remote subsystem */
+ int rpmsg_send_err; /* rpmsg error */
+ s64 ns; /* Timestamp (in ns) of msg */
+};
+
+struct fastrpc_rx_msg {
+ struct fastrpc_invoke_rsp rsp; /* Response from remote subsystem */
+ s64 ns; /* Timestamp (in ns) of response */
+};
+
+struct fastrpc_rpmsg_log {
+ u32 tx_index; /* Current index of 'tx_msgs' array */
+ u32 rx_index; /* Current index of 'rx_msgs' array */
+ /* Rolling history of messages sent to remote subsystem */
+ struct fastrpc_tx_msg tx_msgs[GLINK_MSG_HISTORY_LEN];
+ /* Rolling history of responses from remote subsystem */
+ struct fastrpc_rx_msg rx_msgs[GLINK_MSG_HISTORY_LEN];
+ spinlock_t tx_lock; /* Protects tx_msgs and tx_index */
+ spinlock_t rx_lock; /* Protects rx_msgs and rx_index */

Why roll your own ring-buffer logic instead of using one of the many
offerings that the kernel already provides for you?

Thanks for your suggestion, Greg. I'm looking into using the ring-buffer
support that the kernel already offers and I'll update this in v2.
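
As a rough idea of that direction (an untested sketch only; the choice of
kfifo and the helper name fastrpc_log_tx are just illustrative), the same
rolling history could be kept in per-channel kfifos so the index and
wrap-around handling come from the core API:

#include <linux/kfifo.h>
#include <linux/ktime.h>
#include <linux/spinlock.h>

/* GLINK_MSG_HISTORY_LEN (128) is a power of two, as kfifo requires */
struct fastrpc_rpmsg_log {
	DECLARE_KFIFO(tx_msgs, struct fastrpc_tx_msg, GLINK_MSG_HISTORY_LEN);
	DECLARE_KFIFO(rx_msgs, struct fastrpc_rx_msg, GLINK_MSG_HISTORY_LEN);
	spinlock_t tx_lock;	/* protects tx_msgs */
	spinlock_t rx_lock;	/* protects rx_msgs */
};

/*
 * Illustrative helper: record one sent message, dropping the oldest
 * entry once the history is full.
 */
static void fastrpc_log_tx(struct fastrpc_rpmsg_log *log,
			   const struct fastrpc_msg *msg, int rpmsg_send_err)
{
	struct fastrpc_tx_msg tmsg = {
		.msg = *msg,
		.rpmsg_send_err = rpmsg_send_err,
		.ns = ktime_get_ns(),
	};
	unsigned long flags;

	spin_lock_irqsave(&log->tx_lock, flags);
	if (kfifo_is_full(&log->tx_msgs))
		kfifo_skip(&log->tx_msgs);
	kfifo_in(&log->tx_msgs, &tmsg, 1);
	spin_unlock_irqrestore(&log->tx_lock, flags);
}

The fifos would be set up with INIT_KFIFO() when the channel context is
initialised, and dumping the history would walk the fifo under the same
lock.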
thanks,

greg k-h