Re: [PATCH 3/4] iio: adc: mcp3422: add hardware gain attribute

From: Jonathan Cameron
Date: Sun Nov 13 2022 - 07:22:36 EST


On Sat, 12 Nov 2022 22:19:07 +0100
Mitja Špes <mitja@xxxxxxxxx> wrote:

> Hi Jonathan,
>
> On Sat, Nov 12, 2022 at 6:20 PM Jonathan Cameron <jic23@xxxxxxxxxx> wrote:
> > How are they separate? We normally only use hardwaregain if
> > changing it has no impact on the scale that we need to apply in userspace
> > to raw channels. This normally happens for two reasons
> > 1) There is a micro controller on the sensor that is doing a bunch of
> > maths so whilst changing the PGA value changes the range measurable it
> > doesn't affect the representation when we read from the device.
> > 2) The hardware gain is controlling say the sensitivity of a light sensor
> > in a time of flight device - it affects if we can get a measurement, but
> > not the measurement itself.
> >
> > Any of that true here?
> No. I see I misunderstood the hardwaregain attribute. For me this is a user
> friendly way of setting the gain and subsequently scale.
>
> What I don't understand is why set scale at all?

Key issue here is the ABI is not designed against one part. It is designed against
many. Also it is inherently biased in favour of the parts that were around when
we originally created it - I'll note that at that time the trade off of resolution
against sampling period (oversampling or cutting off the measurement) was not common.

That means userspace code has been written that assumes a certain set of attributes.
The cost of starting to use new attributes is high because no userspace code knows
about them. Hence we put a lot of effort into avoiding doing so. I agree that your
argument makes sense for your particular device - it doesn't for many other ADCs.

Many ADCs (I'd go as far as to say most though I could be wrong on that) do not
couple scale and sampling frequency at all.

> It's a result of sampling
> rate and gain settings. Using it as a setting, for which input value range also
> changes depending on another attribute is quite cumbersome.
> To use a sensor the program has to do this:
> 1. set the sampling rate
> 2. read available scales for this sampling rate
> 3. set the scale according to desired gain
> or know the scales for each sampling rate in advance...which makes available
> scales attribute quite useless.
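For concreteness, the three quoted steps might look like this from userspace (a sketch only: the attribute names follow the documented IIO sysfs ABI, but the helper and the way the device directory is passed in are hypothetical):

```python
# Hypothetical sketch of the three-step dance against the IIO sysfs ABI.
# Attribute names follow Documentation/ABI/testing/sysfs-bus-iio; the
# helper itself and the device path are made up for illustration. Note a
# real driver would change in_voltage_scale_available after step 1.
from pathlib import Path

def set_rate_then_scale(dev: Path, rate: str, desired_scale: float) -> str:
    # 1. set the sampling rate
    (dev / "sampling_frequency").write_text(rate)
    # 2. read the scales available at this rate
    avail = (dev / "in_voltage_scale_available").read_text().split()
    # 3. set the available scale closest to the one the caller wants
    best = min(avail, key=lambda s: abs(float(s) - desired_scale))
    (dev / "in_voltage_scale").write_text(best)
    return best
```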

For this last point, I think trying to match against the previous scale makes a lot of
sense, as there is considerable overlap available here between the different rates.
I think that would be an improvement. Another improvement would be to at least
expose the current resolution - that can be done by providing an _available
for the raw reading. Not an ideal way to understand what is going on, but it would
make more data available to userspace. (_raw_available(max) * scale would give
the range of the device and allow an adjustment of the scale to achieve what the
user wants).
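To make the _raw_available idea concrete, here is a sketch with invented numbers loosely modelled on an 18-bit bipolar PGA ADC; none of these names exist in the driver today:

```python
# Invented numbers loosely modelled on an 18-bit bipolar PGA ADC
# (2.048 V full scale, gains 1/2/4/8). raw_max is what a proposed
# in_voltage_raw_available "max" entry would report.
raw_max = 2**17 - 1                                          # positive full scale in counts
scales_mv = [0.015625, 0.0078125, 0.00390625, 0.001953125]   # mV/LSB for gains 1..8

def full_scale_volts(raw_max: int, scale_mv: float) -> float:
    # _raw_available(max) * scale = the range the device can report
    return raw_max * scale_mv / 1000.0

def pick_scale(scales_mv, wanted_range_v: float, raw_max: int) -> float:
    # smallest scale (highest gain, finest resolution) still covering the range
    ok = [s for s in scales_mv if full_scale_volts(raw_max, s) >= wanted_range_v]
    return min(ok) if ok else max(scales_mv)
```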

ABI design is hard. We don't always get it right, and often there is no right answer.
The reality is that sometimes userspace code needs to search the space if it is trying
to get as close as possible to a particular set of constraints. However, let's assume
that in most cases the code has limits of:

1) A minimum data rate with which it is happy (controls
the sampling frequency - higher is normally fine, but wastes power etc).
To get best possible power usage (and in case of this device resolution) it will pick
the lowest sampling frequency to meet this constraint.

2) A range of values it is interested in (here related to the PGA settings but not
the sampling frequency). Adding _raw_avail would help it to have visibility of
what the range is.

3) A resolution it cares about getting data at (scale)

We have to present 'scale' because that's necessary to allow userspace to calculate the
actual voltage. That adds a constraint to the ABI design. We also don't want to provide
more overlapping controls than absolutely necessary as that makes it hard for userspace
to know which one to use.

So the upshot is that userspace has to search to find a value that works for it.

In this particular case the set of ranges at all sampling frequencies is the same.
So if we assume the constraint on how often data is wanted is more important than the
resolution (we have to pick one or the other as more important), then we set that
first, to the minimum value that meets the requirement. Then scale is tweaked to set
the PGA to match the desired range. Sure, not super clean, but consistent with the
ABI as it stands (and we can't change that other than very, very carefully).
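That search order can be sketched as follows. The rate-to-scale table is loosely taken from the MCP3422 datasheet values, but treat both the table and the helper as illustrative, not as driver API:

```python
# Illustrative rate-first-then-scale search. The table maps sampling
# frequency (sps) -> available scales (mV/LSB), loosely following the
# MCP3422 datasheet; it is example data, not authoritative driver values.
RATE_SCALES = {
    240: [1.0, 0.5, 0.25, 0.125],                   # 12-bit
    60:  [0.25, 0.125, 0.0625, 0.03125],            # 14-bit
    15:  [0.0625, 0.03125, 0.015625, 0.0078125],    # 16-bit
}

def configure(min_rate_sps: float, wanted_scale: float):
    # 1) lowest available rate meeting the minimum (saves power and, on
    #    this device, gives the best resolution)
    rate = min(r for r in RATE_SCALES if r >= min_rate_sps)
    # 2) then the available scale closest to what the caller wants
    scale = min(RATE_SCALES[rate], key=lambda s: abs(s - wanted_scale))
    return rate, scale
```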

As a fun side note, if the device (or driver) had justified the lower resolutions the other way
(so the zeros ended up in the least significant bits), a common solution would have been
to just present that to userspace as is; the scale would then have been decoupled from
the sampling frequency. Note that this is what oversampling devices normally do...
We could obviously fake that now, but the issue would then be that it was a major
driver ABI change. So we can't.
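Numerically, the difference would look something like this (a toy model, not what the driver or hardware actually does):

```python
# Toy model of left- vs right-justified results on an 18-bit part.
# Left-justified: the zeros land in the LSBs, so the single 18-bit scale
# (1/64 mV/LSB here) applies at every rate; right-justified results need
# a different scale for each rate/resolution.
FULL_BITS = 18

def left_justify(raw: int, bits: int) -> int:
    # shift an N-bit result up so it always spans the full 18-bit field
    return raw << (FULL_BITS - bits)

def millivolts_right_justified(raw12: int) -> float:
    return raw12 * 1.0                     # needs the 12-bit scale, 1 mV/LSB

def millivolts_left_justified(raw12: int) -> float:
    return left_justify(raw12, 12) / 64.0  # one 18-bit scale fits all rates
```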

Jonathan

>
> Kind regards,
> Mitja