Re: [net-next Patch V4 4/4] octeontx2-pf: Add support for HTB offload

From: Hariprasad Kelam
Date: Mon Feb 13 2023 - 01:17:14 EST


Please see inline,

> On Fri, Feb 10, 2023 at 04:40:51PM +0530, Hariprasad Kelam wrote:
> > From: Naveen Mamindlapalli <naveenm@xxxxxxxxxxx>
> >
> > This patch registers callbacks to support HTB offload.
> >
> > Below are features supported,
> >
> > - supports traffic shaping on the given class by honoring rate and
> > ceil configuration.
> >
> > - supports traffic scheduling, which prioritizes different types of
> > traffic based on strict priority values.
> >
> > - supports the creation of leaf to inner classes such that parent node
> > rate limits apply to all child nodes.
> >
> > Signed-off-by: Naveen Mamindlapalli <naveenm@xxxxxxxxxxx>
> > Signed-off-by: Hariprasad Kelam <hkelam@xxxxxxxxxxx>
> > Signed-off-by: Sunil Kovvuri Goutham <sgoutham@xxxxxxxxxxx>
> > ---
> > .../ethernet/marvell/octeontx2/af/common.h | 2 +-
> > .../ethernet/marvell/octeontx2/nic/Makefile | 2 +-
> > .../marvell/octeontx2/nic/otx2_common.c | 37 +-
> > .../marvell/octeontx2/nic/otx2_common.h | 7 +
> > .../marvell/octeontx2/nic/otx2_ethtool.c | 31 +-
> > .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 47 +-
> > .../ethernet/marvell/octeontx2/nic/otx2_reg.h | 13 +
> > .../ethernet/marvell/octeontx2/nic/otx2_tc.c | 7 +-
> > .../net/ethernet/marvell/octeontx2/nic/qos.c | 1541 +++++++++++++++++
> > .../net/ethernet/marvell/octeontx2/nic/qos.h | 56 +-
> > .../ethernet/marvell/octeontx2/nic/qos_sq.c | 20 +-
> > include/net/sch_generic.h | 2 +
> > net/sched/sch_generic.c | 5 +-
> > 13 files changed, 1741 insertions(+), 29 deletions(-)
> > create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos.c
>
> nit: this patch is rather long
>
OK, will try to break the patch into two.
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > index 4cb3fab8baae..5653b06d9dd8 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
>
> ...
>
> > +int otx2_txschq_stop(struct otx2_nic *pfvf)
> > +{
> > + int lvl, schq;
> > +
> > + /* free non QOS TLx nodes */
> > + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> > + otx2_txschq_free_one(pfvf, lvl,
> > + pfvf->hw.txschq_list[lvl][0]);
> >
> > /* Clear the txschq list */
> > for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> > for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
> > pfvf->hw.txschq_list[lvl][schq] = 0;
> > }
> > - return err;
> > +
> > + return 0;
>
> nit: This function always returns 0. Perhaps its return type could be void
> and the error handling code at the call sites removed.

ACK, the caller checks for the error; will change the return type in the next version.
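
Roughly, the converted function would look like the below (only the return
type changes, and the callers' error checks go away); just a sketch for the
next version:

void otx2_txschq_stop(struct otx2_nic *pfvf)
{
        int lvl, schq;

        /* free non QOS TLx nodes */
        for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
                otx2_txschq_free_one(pfvf, lvl,
                                     pfvf->hw.txschq_list[lvl][0]);

        /* Clear the txschq list */
        for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
                for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
                        pfvf->hw.txschq_list[lvl][schq] = 0;
        }
}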
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> > b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> > new file mode 100644
> > index 000000000000..2d8189ece31d
> > --- /dev/null
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
>
> ...
>
> > +int otx2_clean_qos_queues(struct otx2_nic *pfvf)
> > +{
> > + struct otx2_qos_node *root;
> > +
> > + root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> > + if (!root)
> > + return 0;
> > +
> > + return otx2_qos_update_smq(pfvf, root, QOS_SMQ_FLUSH);
> > +}
>
> nit: It seems that the return code of this function is always ignored by
> callers. Perhaps either the call sites should detect errors, or the
> return type of this function should be changed to void.
>
ACK, will modify the return value.
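
i.e. something like this sketch, with the return value dropped:

void otx2_clean_qos_queues(struct otx2_nic *pfvf)
{
        struct otx2_qos_node *root;

        root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
        if (!root)
                return;

        /* best-effort SMQ flush; callers ignore the result today anyway */
        otx2_qos_update_smq(pfvf, root, QOS_SMQ_FLUSH);
}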
> > +
> > +void otx2_qos_config_txschq(struct otx2_nic *pfvf)
> > +{
> > + struct otx2_qos_node *root;
> > + int err;
> > +
> > + root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> > + if (!root)
> > + return;
> > +
> > + err = otx2_qos_txschq_config(pfvf, root);
> > + if (err) {
> > + netdev_err(pfvf->netdev, "Error update txschq configuration\n");
> > + goto root_destroy;
> > + }
> > +
> > + err = otx2_qos_txschq_push_cfg_tl(pfvf, root, NULL);
> > + if (err) {
> > + netdev_err(pfvf->netdev, "Error update txschq configuration\n");
> > + goto root_destroy;
> > + }
> > +
> > + err = otx2_qos_update_smq(pfvf, root, QOS_CFG_SQ);
> > + if (err) {
> > + netdev_err(pfvf->netdev, "Error update smq configuration\n");
> > + goto root_destroy;
> > + }
> > +
> > + return;
> > +
> > +root_destroy:
> > + otx2_qos_root_destroy(pfvf);
> > +}
>
> I am curious as to why the root is destroyed here.
> But such cleanup doesn't apply to other places where
> otx2_qos_txschq_config() is called.
>

There are two use cases:

1. otx2_qos_txschq_config() is called while creating a new class; if it fails there, we simply return an error.
2. If QOS commands are installed while the interface is down, otx2_qos_config_txschq() is called during interface UP to configure the hardware CSRs for all classes.

In the second case we destroy the root if the driver fails to configure the HW. We missed adding an error message there; will add one in the next version, roughly as below.
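
A sketch of the tail of otx2_qos_config_txschq() with the message added
(exact wording may differ):

root_destroy:
        netdev_err(pfvf->netdev,
                   "Failed to update txschq configuration, destroying QoS hierarchy\n");
        otx2_qos_root_destroy(pfvf);
}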

>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> > b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> > index ef8c99a6b2d0..d8e32a6e541d 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
>
> ...
>
> > struct otx2_qos {
> > + DECLARE_HASHTABLE(qos_hlist, order_base_2(OTX2_QOS_MAX_LEAF_NODES));
> > + DECLARE_BITMAP(qos_sq_bmap, OTX2_QOS_MAX_LEAF_NODES);
> > u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
> > + u16 maj_id;
> > + u16 defcls;
>
> On x86_64 there is a 4 byte hole here...
>
> > + struct list_head qos_tree;
> > + struct mutex qos_lock; /* child list lock */
> > + u8 link_cfg_lvl; /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
>
> And link_cfg_lvl is on a cacheline all by itself.
>
> I'm not sure if it makes any difference, but perhaps it makes more sense to
> place link_cfg_lvl in the hole above.

ACK, will address the suggested changes below in the next version.
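
i.e. a sketch with link_cfg_lvl moved up into the 4-byte hole after defcls
(we will re-check the resulting layout with pahole):

struct otx2_qos {
        DECLARE_HASHTABLE(qos_hlist, order_base_2(OTX2_QOS_MAX_LEAF_NODES));
        DECLARE_BITMAP(qos_sq_bmap, OTX2_QOS_MAX_LEAF_NODES);
        u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
        u16 maj_id;
        u16 defcls;
        u8 link_cfg_lvl; /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
        struct list_head qos_tree;
        struct mutex qos_lock; /* child list lock */
};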

Thanks,
Hariprasad k
> $ pahole drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.o
> ...
> struct otx2_qos {
> struct hlist_head qos_hlist[16]; /* 0 128 */
> /* --- cacheline 2 boundary (128 bytes) --- */
> long unsigned int qos_sq_bmap[1]; /* 128 8 */
> u16 qid_to_sqmap[16]; /* 136 32 */
> u16 maj_id; /* 168 2 */
> u16 defcls; /* 170 2 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct list_head qos_tree; /* 176 16 */
> /* --- cacheline 3 boundary (192 bytes) --- */
> struct mutex qos_lock; /* 192 160 */
> /* --- cacheline 5 boundary (320 bytes) was 32 bytes ago --- */
> u8 link_cfg_lvl; /* 352 1 */
>
> /* size: 360, cachelines: 6, members: 8 */
> /* sum members: 349, holes: 1, sum holes: 4 */
> /* padding: 7 */
> /* last cacheline: 40 bytes */
> };
> ...
>
> > +};
> > +
> > +struct otx2_qos_node {
> > + /* htb params */
> > + u32 classid;
>
> On x86_64 there is a 4 byte hole here,
>
> > + u64 rate;
> > + u64 ceil;
> > + u32 prio;
> > + /* hw txschq */
> > + u8 level;
>
> a one byte hole here,
>
> > + u16 schq;
> > + u16 qid;
> > + u16 prio_anchor;
>
> another four byte hole here,
>
> > + DECLARE_BITMAP(prio_bmap, OTX2_QOS_MAX_PRIO + 1);
> > + /* list management */
> > + struct hlist_node hlist;
>
> the first cacheline ends here,
>
> > + struct otx2_qos_node *parent; /* parent qos node */
>
> And this is an 8 byte entity.
>
> I'm not sure if it is advantageous,
> but I think parent could be moved to the first cacheline.
>
> > + struct list_head list;
> > + struct list_head child_list;
> > + struct list_head child_schq_list;
> > };
>
> $ pahole drivers/net/ethernet/marvell/octeontx2/nic/qos.o
> ...
> struct otx2_qos_node {
> u32 classid; /* 0 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> u64 rate; /* 8 8 */
> u64 ceil; /* 16 8 */
> u32 prio; /* 24 4 */
> u8 level; /* 28 1 */
>
> /* XXX 1 byte hole, try to pack */
>
> u16 schq; /* 30 2 */
> u16 qid; /* 32 2 */
> u16 prio_anchor; /* 34 2 */
>
> /* XXX 4 bytes hole, try to pack */
>
> long unsigned int prio_bmap[1]; /* 40 8 */
> struct hlist_node hlist; /* 48 16 */
> /* --- cacheline 1 boundary (64 bytes) --- */
> struct otx2_qos_node * parent; /* 64 8 */
> struct list_head list; /* 72 16 */
> struct list_head child_list; /* 88 16 */
> struct list_head child_schq_list; /* 104 16 */
>
> /* size: 120, cachelines: 2, members: 14 */
> /* sum members: 111, holes: 3, sum holes: 9 */
> /* last cacheline: 56 bytes */
> };
> ...
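
For otx2_qos_node, one possible repacking along the lines suggested above;
just a sketch, to be confirmed with pahole:

struct otx2_qos_node {
        struct otx2_qos_node *parent; /* parent qos node */
        /* htb params */
        u64 rate;
        u64 ceil;
        u32 classid;
        u32 prio;
        /* hw txschq */
        u16 schq;
        u16 qid;
        u16 prio_anchor;
        u8 level;
        DECLARE_BITMAP(prio_bmap, OTX2_QOS_MAX_PRIO + 1);
        /* list management */
        struct hlist_node hlist;
        struct list_head list;
        struct list_head child_list;
        struct list_head child_schq_list;
};

That keeps parent and all the scalar fields within the first cacheline and
should shrink the struct from 120 to 112 bytes.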