For gestures and paths in particular, I have been thinking about how different applications, specifications, and languages describe paths. For example, SVG has paths: how does it describe them?
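For reference, SVG encodes a path as the `d` attribute of a `<path>` element: a compact string of drawing commands such as `M` (move to), `L` (line to), `C` (cubic Bézier), `Q` (quadratic Bézier), `A` (elliptical arc), and `Z` (close the path), with lowercase variants taking relative coordinates. A minimal Python sketch of that representation, parsing just a subset:

```python
import re

def parse_svg_path(d: str):
    """Split SVG path data into (command, [numbers]) pairs. Subset: M, L, C, Q, Z."""
    tokens = re.findall(r"[MLCQZ]|-?\d+(?:\.\d+)?", d)
    commands = []
    for tok in tokens:
        if tok.isalpha():
            commands.append((tok, []))          # a new command starts
        else:
            commands[-1][1].append(float(tok))  # numbers belong to the last command
    return commands

print(parse_svg_path("M 10 10 L 90 10 C 120 10 120 60 90 60 Z"))
# [('M', [10.0, 10.0]), ('L', [90.0, 10.0]),
#  ('C', [120.0, 10.0, 120.0, 60.0, 90.0, 60.0]), ('Z', [])]
```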
And in addition to markers along a path, what else do we need to describe movement? Timing is one: how quickly do we move along the path. There may be other attributes we need to attach to a path, such as randomness or the curve type (e.g. Bézier). And what if one wants to describe not the time it takes to cover a path but a “resolution”, say, how many discrete points the path is sampled into? Then there is the question of a path that is made up of other paths: how is that described?
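To make those attributes concrete, here is a hypothetical sketch; every name in it (`Path`, `CompositePath`, `jitter_px`, and so on) is my own invention for illustration, not an existing library's API. It carries timing and resolution as alternative ways to drive traversal, plus optional randomness, closure, and composition:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Path:
    points: List[Point]                # waypoints (or Bézier control points)
    duration_ms: Optional[int] = None  # time-driven: how long the traversal takes
    resolution: Optional[int] = None   # sampling-driven: how many discrete steps
    jitter_px: float = 0.0             # randomness: max offset applied per sample
    closed: bool = False               # return to the starting point at the end?

@dataclass
class CompositePath:
    segments: List[Path] = field(default_factory=list)  # a path made of other paths

    def total_duration_ms(self) -> int:
        return sum(seg.duration_ms or 0 for seg in self.segments)

swipe = Path(points=[(100, 400), (300, 400)], duration_ms=250, resolution=30)
```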
Some paths are closed, in that they return to their starting point. Speaking of points, we have discussed different coordinate systems and different words/syntax for describing a “point” on the screen: for example, go to this absolute point; a direction and a distance; or the location of an element (its center, or a corner of it, thinking of it as an anchor point).
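The same goes for point addressing; a hypothetical sketch of the three modes just mentioned (the locator syntax and anchor names are illustrative only):

```python
import math
from dataclasses import dataclass

@dataclass
class AbsolutePoint:
    x: float
    y: float

@dataclass
class PolarPoint:
    """A direction (in degrees) and a distance from a reference point."""
    origin: AbsolutePoint
    angle_deg: float
    distance: float

    def resolve(self) -> AbsolutePoint:
        rad = math.radians(self.angle_deg)
        return AbsolutePoint(self.origin.x + self.distance * math.cos(rad),
                             self.origin.y + self.distance * math.sin(rad))

@dataclass
class ElementPoint:
    """A point defined relative to a UI element."""
    locator: str            # e.g. "id=submit-button" (hypothetical locator syntax)
    anchor: str = "center"  # or "top-left", "bottom-right", ...
```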
Then there is the higher level of gestures: how do you create a two- or three-finger path? And given all of this, how would you easily write, create, and debug these in the natural language of Robot Framework?
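One way to approach the Robot Framework question is a custom keyword library. Keywords in Robot Framework are ordinary Python methods, and the `keyword` and `library` decorators from `robot.api.deco` do exist; everything else below (the keyword names, their arguments, and the omitted gesture driver) is a hypothetical sketch of what such a language could look like:

```python
from robot.api.deco import keyword, library

@library
class GestureLibrary:
    """Hypothetical keywords for building and playing multi-finger paths."""

    @keyword("Create Path")
    def create_path(self, *points, duration_ms: int = 300):
        """Build a path from 'x,y' waypoint strings."""
        waypoints = [tuple(float(v) for v in p.split(",")) for p in points]
        return {"points": waypoints, "duration_ms": duration_ms}

    @keyword("Perform Multi Finger Gesture")
    def perform_multi_finger_gesture(self, *paths):
        """Play one path per finger in parallel (real driver call omitted here)."""
        for finger, path in enumerate(paths):
            print(f"finger {finger}: {path}")

# In a test this could read:
#     ${left}=     Create Path    100,400    300,400
#     ${right}=    Create Path    100,500    300,500
#     Perform Multi Finger Gesture    ${left}    ${right}
```

Because each keyword returns a plain data structure, debugging could lean on Robot Framework's existing logging: the path can be logged and inspected before it is ever played.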
As you can see from the complexity of all this, I would like to get to a unified and complete language of gestures and paths that ties everything together neatly.