On Sunday, October 10, 2021 at 11:07:43 AM UTC-7 Albrecht Schlosser wrote:
Unfortunately I can't test it. My MacBook Air doesn't generate
gesture events on the built-in touchpad (trackpad), I don't have
an external tablet device for testing, and the MacBook Air doesn't
have a touchscreen. :-(
Sorry, that statement above was an error. Testing with the built-in
touchpad works fine. :-(
I can now see the events and proceed...
Great!
One issue I noticed so far is as I wrote before: with the proposed macOS
patch, two-finger scroll gestures on the touchpad no longer scroll in the
FLTK test apps. Spinning the mouse wheel still scrolls fine. The reason is
that the touchpad gestures now send the event "FL_SCROLL_GESTURE", which is
unknown and therefore ignored, rather than "FL_MOUSEWHEEL", which would do
the scrolling in, for instance, Fl_Text_Display in test/editor.
This is something we need to find a solution for (backwards
compatibility).
Agreed.
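One direction that comes to mind for the backwards compatibility problem
(just a sketch of my assumption about the driver internals, not what the
patch actually does): the macOS driver could deliver the new gesture event
first and fall back to FL_MOUSEWHEEL if no widget consumes it, so that
unmodified widgets like Fl_Text_Display keep scrolling.

#include <FL/Fl.H>
#include <FL/Fl_Window.H>

// Hypothetical driver-side helper, not from the proposed patch: reuse the
// wheel delta fields and try the new event before the legacy one. How the
// gesture deltas should map onto wheel "lines" is itself an open question.
static int send_scroll_gesture(Fl_Window *win, double dx, double dy) {
  Fl::e_dx = (int)dx;                        // wheel deltas expected by widgets
  Fl::e_dy = (int)dy;
  if (Fl::handle(FL_SCROLL_GESTURE, win))    // new-style consumers get it first
    return 1;
  return Fl::handle(FL_MOUSEWHEEL, win);     // fallback: old widgets still scroll
}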
To clarify the above statement: before we can develop cross-platform
events for multi-touch gestures, we need to investigate what the
different platforms offer. AFAICT the Windows and macOS platforms are
slightly different (Linux not yet investigated).
We need a specification that can be used for all platforms. As I
wrote before, the events, types, variables and values (units,
magnitude, etc.) should be as close as possible on all supported
platforms.
So far I can imagine two generally different approaches:
(1) We define only one FL_MULTI_TOUCH[_GESTURE] (?) event (like
Microsoft does) and send the different gesture types (pan, zoom, rotate,
...) together with some corresponding variables (maybe a struct).
The user code only needs to handle one additional event but has to
dispatch on the different gesture types itself.
(2) We define one FLTK event per gesture type.
This is how the early "FL_ZOOM_GESTURE" event and the proposed patch
have been done, with individual events: "FL_ZOOM_GESTURE",
"FL_ROTATE_GESTURE", "FL_SCROLL_GESTURE", ... and maybe more in the
future.
Currently I tend to prefer (1), but I'd like to read about others'
opinions.
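To make the difference concrete, here is a rough sketch of what user code
might look like under each approach. These names are not an existing FLTK
API: FL_GESTURE, Fl::event_gesture_type(), Fl::event_gesture_value() and
the FL_GESTURE_* constants for (1) are pure assumptions, and the
FL_*_GESTURE names for (2) only exist in the proposed patch.

#include <FL/Fl.H>
#include <FL/Fl_Box.H>

class GestureBox : public Fl_Box {
public:
  GestureBox(int X, int Y, int W, int H) : Fl_Box(X, Y, W, H) {}

  int handle(int event) {
    // Approach (1): one event, the gesture kind is carried as data.
    if (event == FL_GESTURE) {                     // hypothetical event
      switch (Fl::event_gesture_type()) {          // hypothetical accessor
        case FL_GESTURE_ZOOM:   /* scale by Fl::event_gesture_value()  */ return 1;
        case FL_GESTURE_ROTATE: /* rotate by Fl::event_gesture_value() */ return 1;
        case FL_GESTURE_PAN:    /* scroll by the reported deltas       */ return 1;
        default:                return 0;
      }
    }
    // Approach (2): one FLTK event per gesture type (as in the patch).
    switch (event) {
      case FL_ZOOM_GESTURE:   /* scale  */ return 1;
      case FL_ROTATE_GESTURE: /* rotate */ return 1;
      case FL_SCROLL_GESTURE: /* scroll */ return 1;
    }
    return Fl_Box::handle(event);
  }
};

With (1) the dispatch lives in user code; with (2) it lives in the event
names, and every new gesture type means a new event constant.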
I don't see much of a reason to prefer one over the other.
If we do (2), then we'll still have to bundle enough information to let the application determine the specifics of the gesture it is handling. I see this as weakly analogous to PUSH/DRAG/RELEASE being separate from BUTTON 1, 2, 3, etc. However, there we are clearly building out a matrix of capabilities -- each event can be triggered for each button. In this case, swipe, scroll, zoom, pinch, etc. seem like pretty separate events to me.
Is there an expense to having many events?
This is under investigation, but links to documentation (or even code
snippets) would be appreciated.
I'll keep you updated.
For X11, it looks like there are a variety of user-space tools that do gesture recognition and then pass the results on to the window manager or application. On the surface, I do not see a clear winner.
For Wayland, it seems that they use libinput https://wayland.freedesktop.org/libinput/doc/latest/gestures.html# -- but it proudly says that it does not provide touchscreen gestures, just touchpad gestures. Seems like an artificial wall to me...
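For reference, this is roughly what a direct libinput consumer sees for
touchpad gestures. This is a sketch under some assumptions: it opens the
input devices itself via udev, which needs the permissions a compositor
has; a Wayland client like an FLTK app would normally get this data
forwarded by the compositor instead of opening libinput directly.

// Build (assumption): g++ gestures.cxx $(pkg-config --cflags --libs libinput libudev)
#include <libinput.h>
#include <libudev.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

static int open_restricted(const char *path, int flags, void *) {
  int fd = open(path, flags);
  return fd < 0 ? -1 : fd;
}
static void close_restricted(int fd, void *) { close(fd); }
static const struct libinput_interface iface = { open_restricted, close_restricted };

int main() {
  struct udev *udev = udev_new();
  struct libinput *li = libinput_udev_create_context(&iface, NULL, udev);
  libinput_udev_assign_seat(li, "seat0");
  for (;;) {                                        // simplistic polling loop
    libinput_dispatch(li);
    struct libinput_event *ev;
    while ((ev = libinput_get_event(li)) != NULL) {
      struct libinput_event_gesture *g;
      switch (libinput_event_get_type(ev)) {
        case LIBINPUT_EVENT_GESTURE_SWIPE_UPDATE:   // multi-finger swipe/scroll
          g = libinput_event_get_gesture_event(ev);
          printf("swipe dx=%.1f dy=%.1f fingers=%d\n",
                 libinput_event_gesture_get_dx(g),
                 libinput_event_gesture_get_dy(g),
                 libinput_event_gesture_get_finger_count(g));
          break;
        case LIBINPUT_EVENT_GESTURE_PINCH_UPDATE:   // pinch/zoom plus rotation
          g = libinput_event_get_gesture_event(ev);
          printf("pinch scale=%.2f angle=%.1f\n",
                 libinput_event_gesture_get_scale(g),
                 libinput_event_gesture_get_angle_delta(g));
          break;
        default:
          break;
      }
      libinput_event_destroy(ev);
    }
    usleep(10000);
  }
}

So the touchpad gestures come through as swipe/pinch events with deltas,
scale and rotation angle, which seems to map onto either of the two FLTK
approaches discussed above.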
Rob