Re: [PATCH v5 2/4] dt-bindings: touchscreen: add overlay-touchscreen and overlay-buttons properties

From: Javier Carrasco
Date: Thu Nov 23 2023 - 14:49:08 EST


Hi Jeff,

On 26.10.23 16:46, Jeff LaBundy wrote:
> Hi Javier,
>
> Thank you for continuing to drive this high-quality work.
>
> On Tue, Oct 17, 2023 at 01:00:08PM +0200, Javier Carrasco wrote:
>> The overlay-touchscreen object defines an area within the touchscreen
>> where touch events are reported and their coordinates get converted to
>> the overlay origin. This object avoids getting events from areas that
>> are physically hidden by overlay frames.
>>
>> For touchscreens where overlay buttons on the touchscreen surface are
>> provided, the overlay-buttons object contains a node for every button
>> and the key event that should be reported when pressed.
>>
>> Signed-off-by: Javier Carrasco <javier.carrasco@xxxxxxxxxxxxxx>
>> ---
>> .../bindings/input/touchscreen/touchscreen.yaml | 143 +++++++++++++++++++++
>> 1 file changed, 143 insertions(+)
>>
>> diff --git a/Documentation/devicetree/bindings/input/touchscreen/touchscreen.yaml b/Documentation/devicetree/bindings/input/touchscreen/touchscreen.yaml
>> index 431c13335c40..5c58eb79ee9a 100644
>> --- a/Documentation/devicetree/bindings/input/touchscreen/touchscreen.yaml
>> +++ b/Documentation/devicetree/bindings/input/touchscreen/touchscreen.yaml
>> @@ -87,6 +87,129 @@ properties:
>> touchscreen-y-plate-ohms:
>> description: Resistance of the Y-plate in Ohms
>>
>> + overlay-touchscreen:
>> + description: Clipped touchscreen area
>> +
>> + This object can be used to describe a frame that restricts the area
>> + within which touch events are reported, ignoring the events that occur outside
>> + this area. This is of special interest if the touchscreen is shipped
>> + with a physical overlay on top of it with a frame that hides some part
>> + of the original touchscreen area.
>> +
>> + The x-origin and y-origin properties of this object define the offset of
>> + a new origin from where the touchscreen events are referenced.
>> + This offset is applied to the events accordingly. The x-size and y-size
>> + properties define the size of the overlay-touchscreen (effective area).
>> +
>> + The following example shows the new touchscreen area and the new origin
>> + (0',0') for the touch events generated by the device.
>> +
>> +  Touchscreen (full area)
>> +  ┌─────────────────────────────────────────┐
>> +  │  ┌───────────────────────────────┐      │
>> +  │  │                               │      │
>> +  │  ├ y-size                        │      │
>> +  │  │                               │      │
>> +  │  │      overlay-touchscreen      │      │
>> +  │  │                               │      │
>> +  │  │                               │      │
>> +  │  │            x-size             │      │
>> +  │ ┌└──────────────┴────────────────┘      │
>> +  │(0',0')                                  │
>> + ┌└─────────────────────────────────────────┘
>> + (0,0)
>> +
>> + where (0',0') = (0+x-origin,0+y-origin)
>> +
>> + type: object
>> + $ref: '#/$defs/overlay-node'
>> + unevaluatedProperties: false
>> +
>> + required:
>> + - x-origin
>> + - y-origin
>> + - x-size
>> + - y-size
>> +
>> + overlay-buttons:
>> + description: list of nodes defining the buttons on the touchscreen
>> +
>> + This object can be used to describe buttons on the touchscreen area,
>> + reporting the touch events on their surface as key events instead of
>> + the original touch events.
>> +
>> + This is of special interest if the touchscreen is shipped with a
>> + physical overlay on top of it where a number of buttons with some
>> + predefined functionality are printed. In that case a specific behavior
>> + is expected from those buttons instead of raw touch events.
>> +
>> + The overlay-buttons properties define a per-button area as well as an
>> + origin relative to the real touchscreen origin. Touch events within the
>> + button area are reported as the key event defined in the linux,code
>> + property. Given that the key events do not provide coordinates, the
>> + button origin is only used to place the button area on the touchscreen
>> + surface. Any event outside the overlay-buttons object is reported as a
>> + touch event with no coordinate transformation.
>> +
>> + The following example shows a touchscreen with a single button on it
>> +
>> +  Touchscreen (full area)
>> +  ┌───────────────────────────────────┐
>> +  │                                   │
>> +  │                                   │
>> +  │     ┌─────────┐                   │
>> +  │     │button 0 │                   │
>> +  │     │KEY_POWER│                   │
>> +  │     └─────────┘                   │
>> +  │                                   │
>> +  │                                   │
>> + ┌└───────────────────────────────────┘
>> + (0,0)
>> +
>> + The overlay-buttons object can be combined with the overlay-touchscreen
>> + object as shown in the following example. In that case only the events
>> + within the overlay-touchscreen object are reported as touch events.
>> +
>> +  Touchscreen (full area)
>> +  ┌─────────┬──────────────────────────────┐
>> +  │         │                              │
>> +  │         │  ┌───────────────────────┐   │
>> +  │ button 0│  │                       │   │
>> +  │KEY_POWER│  │                       │   │
>> +  │         │  │                       │   │
>> +  ├─────────┤  │  overlay-touchscreen  │   │
>> +  │         │  │                       │   │
>> +  │         │  │                       │   │
>> +  │ button 1│  │                       │   │
>> +  │ KEY_INFO│ ┌└───────────────────────┘   │
>> +  │         │(0',0')                      │
>> + ┌└─────────┴──────────────────────────────┘
>> + (0,0)
>> +
>> + type: object
>
> I am still confused why the buttons need to live under an 'overlay-buttons'
> parent node, which seems like an imaginary boundary. In my view, the touch
> surface comprises the following types of rectangular areas:
>
> 1. A touchscreen, wherein granular coordinates and pressure are reported.
> 2. A momentary button, wherein pressure is quantized into a binary value
> (press or release), and coordinates are ignored.
>
> Any contact that falls outside of (1) and (2) is presumed to be part of a
> border or matting, and is hence ignored.
>
> Areas (1) and (2) exist in the same "plane", so why can they not reside
> under the same parent node? The following seems much more representative
> of the actual hardware we intend to describe in the device tree:
>
> touchscreen {
>     compatible = "...";
>     reg = <...>;
>
>     /* raw coordinates reported here */
>     touch-area-1 {
>         x-origin = <...>;
>         y-origin = <...>;
>         x-size = <...>;
>         y-size = <...>;
>     };
>
>     /* a button */
>     touch-area-2a {
>         x-origin = <...>;
>         y-origin = <...>;
>         x-size = <...>;
>         y-size = <...>;
>         linux,code = <KEY_POWER>;
>     };
>
>     /* another button */
>     touch-area-2b {
>         x-origin = <...>;
>         y-origin = <...>;
>         x-size = <...>;
>         y-size = <...>;
>         linux,code = <KEY_INFO>;
>     };
> };
>
Now that I am working on the approach you suggested, I see that some
things can get slightly more complicated. I still think that it is worth
a try, but I would like to discuss a couple of points.

The node parsing is not that simple anymore, because the touch-area
nodes are only surrounded by the touchscreen node. Theoretically they
could even be defined with other properties in between. The current
approach only needs to find the overlay-buttons parent and iterate over
all of its inner nodes (simply by calling device_get_named_child_node()
and fwnode_for_each_child_node(), the parsing is achieved in two lines
plus error checking). So even if we opt for the single-object approach,
maybe a node to group all the touch-areas could simplify the parsing;
see the sketch after the example below. Or did you have a different
approach in mind? Your example would turn into this one:

touchscreen {
    compatible = "...";
    reg = <...>;

    touch-overlay {
        /* raw coordinates reported here */
        touch-area-1 {
            x-origin = <...>;
            y-origin = <...>;
            x-size = <...>;
            y-size = <...>;
        };

        /* a button */
        touch-area-2a {
            x-origin = <...>;
            y-origin = <...>;
            x-size = <...>;
            y-size = <...>;
            linux,code = <KEY_POWER>;
        };

        /* another button */
        touch-area-2b {
            x-origin = <...>;
            y-origin = <...>;
            x-size = <...>;
            y-size = <...>;
            linux,code = <KEY_INFO>;
        };
    };
};

In my opinion this also looks cleaner, because the grouping node
describes a physical object: the overlay.
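
For reference, the parsing could then look roughly like the following
sketch (touch_overlay_parse() and the per-area "segment" allocation are
placeholder names; device_get_named_child_node() and
fwnode_for_each_child_node() are the existing fwnode helpers):

    static int touch_overlay_parse(struct device *dev,
                                   struct list_head *list)
    {
        struct fwnode_handle *overlay, *child;

        /* a single lookup, thanks to the grouping node */
        overlay = device_get_named_child_node(dev, "touch-overlay");
        if (!overlay)
            return -ENODEV;

        fwnode_for_each_child_node(overlay, child) {
            /*
             * Allocate a segment, read the x/y origin/size
             * properties and the optional linux,code, then add
             * the segment to the list.
             */
        }

        fwnode_handle_put(overlay);
        return 0;
    }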

> With this method, the driver merely stores a list head. The parsing code
> then walks the client device node; for each touch* child encountered, it
> allocates memory for a structure of five members, and adds it to the list.
>
The button objects do not only store the keycode, but also the slot and
whether they are pressed or not. I could allocate memory for these
members as well, but maybe an additional struct with the button-specific
members left unused for the touch areas with keycode = KEY_RESERVED
would make sense. I don't know if that adds too much overhead for two
members, though.
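
This is roughly the per-area structure I have in mind (a sketch, all
names are placeholders):

    struct touch_overlay_segment {
        struct list_head list;
        u32 x_origin;
        u32 y_origin;
        u32 x_size;
        u32 y_size;
        u32 key;    /* KEY_RESERVED for plain touch areas */
        /* button-only state, unused when key == KEY_RESERVED */
        int slot;
        bool pressed;
    };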

> The event handling code then simply iterates through the list and checks
> if the coordinates reported by the hardware fall within each rectangle. If
> so, and the keycode in the list element is equal to KEY_RESERVED (zero),
> we assume the rectangle is of type (1); the coordinates are passed to
> touchscreen_report_pos() and the pressure is reported as well.

There is another case to consider that might make the iteration less
optimal, although I don't think it will be critical.

A button could be defined inside an overlay-touchscreen area (one
without a keycode). Given that the other way round (a touchscreen
inside a button) does not make much sense, buttons must take priority.

Let's take your example and imagine that your third area is a button
inside the first one. Even after a contact matches the first area, we
have to iterate through the whole list until we are sure that no button
claims that position, while remembering that the first matching area
already provides the right coordinates to handle the touch event. Your
approach even allows for multiple no-key areas, and when we iterate we
do not know in advance whether there are any buttons at all (there
could be none).
Therefore some iterations could be unnecessary, but this is probably an
edge case that would cost at most a couple of extra iterations compared
to a two-list approach.
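
The lookup could be shaped like this (a sketch reusing the placeholder
names from above; list_for_each_entry() is the existing list helper):

    static struct touch_overlay_segment *
    touch_overlay_find(struct list_head *list, u32 x, u32 y)
    {
        struct touch_overlay_segment *seg, *area = NULL;

        list_for_each_entry(seg, list, list) {
            /* skip segments that do not contain the contact */
            if (x < seg->x_origin || x >= seg->x_origin + seg->x_size ||
                y < seg->y_origin || y >= seg->y_origin + seg->y_size)
                continue;

            /* buttons take priority: stop at the first match */
            if (seg->key != KEY_RESERVED)
                return seg;

            /* plain area: remember it, a button may still match */
            area = seg;
        }

        return area; /* NULL: the contact falls outside every area */
    }

The caller would then report a key event if the returned segment
carries a keycode, pass the translated coordinates to
touchscreen_report_pos() if it does not, and ignore the contact on
NULL.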

I will keep working on the next version with a single list while we
clarify these points, so maybe we can save a revision.

> Kind regards,
> Jeff LaBundy

Best regards,
Javier Carrasco