Untangling EventHub/InputReader

Christopher James Halse Rogers raof at ubuntu.com
Tue Nov 12 23:57:01 UTC 2013


On Tue, 2013-11-12 at 11:51 -0800, Robert Carr wrote:
> >> I don't think it makes sense to have the uncooked events leave the
> >> device-specific code. What currently happens is that InputReader “cooks”
> >> them, but needs to call back into EventHub in order to do that.
> 
> >> Only the device-specific code really knows how to cook the events
> >> properly.
> 
> >> Also, pragmatically, libtouchpad only provides fully cooked events.
> 
> It seems like even if we start with splitting out the device, there are two
> sorts of directions this could go. On one hand, when reading the events,
> they could be "cooked" and leave the event hub already processed in a
> device-specific fashion. On the other hand, EventHub could view 'Device'
> through a simple 'EventSource' interface, and this way the EventHub is
> modeled as the 'device independent code'.

I think in both cases EventHub is device independent code. The
difference is whether the device dependent code comes before, or after,
EventHub.
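
To make the "device-dependent code before EventHub" arrangement concrete,
here's a rough sketch of the shape I have in mind. None of these names
(EventSource, CookedEvent) are real Mir classes - it's just an illustration,
with each device backend (evdev, libtouchpad, ...) cooking its own events and
EventHub doing nothing but multiplexing:

#include <functional>
#include <memory>
#include <vector>

// Already device-interpreted: pointer motion, touch, key, ...
struct CookedEvent
{
    enum class Type { pointer_motion, touch, key } type;
    float dx, dy;   // relative motion in device-independent units
};

// Each device-specific backend implements this and hands EventHub
// only cooked events.
class EventSource
{
public:
    virtual ~EventSource() = default;
    virtual int fd() const = 0;                           // pollable descriptor
    virtual std::vector<CookedEvent> drain_events() = 0;  // read and cook pending input
};

// EventHub stays device independent: it just multiplexes sources.
class EventHub
{
public:
    void add_source(std::unique_ptr<EventSource> source)
    {
        sources.push_back(std::move(source));
    }

    void dispatch(std::function<void(CookedEvent const&)> const& sink)
    {
        for (auto const& source : sources)
            for (auto const& ev : source->drain_events())
                sink(ev);
    }

private:
    std::vector<std::unique_ptr<EventSource>> sources;
};

EventHub then doesn't care whether an event came from evdev directly or from
libtouchpad; it just polls the sources and forwards whatever they produce.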


> 
> My intuition kind of leans toward the second idea. I think the 'InputReader'
> will also depend on some other system state; for example, when cooking just
> normal cursor input it depends on display state, especially if we start to
> introduce things like barriers.

I think we might have different stages of cooking here :). I want the
events leaving EventHub to be a usefully accurate representation of the
user's interaction with the input device. There are further stages of
cooking needed after that - possibly you want to scale to screen size,
then you want the shell to work out whether it needs to handle them,
then you need to identify the client to send the events to, then you
need to translate them into client space using some pretty arbitrary
projective transform.

I don't think we want the shell to deal with jittery touchpads, and the
EventHub *certainly* doesn't want to be dealing with projective transforms
into client space.
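
As a very rough sketch of what I mean by those later stages (the names and
types here are invented for illustration; they aren't anything in Mir today):

#include <array>

// As it leaves EventHub: device-accurate, here assumed normalised to [0, 1].
struct PointerEvent { float x, y; };

// Row-major 3x3 projective transform.
using Transform = std::array<float, 9>;

// Scale the normalised device coordinates to the output size.
PointerEvent scale_to_output(PointerEvent ev, float width, float height)
{
    return {ev.x * width, ev.y * height};
}

// Map an event into a client's coordinate space using an arbitrary
// projective transform (e.g. a rotated or perspective-transformed surface).
PointerEvent to_client_space(PointerEvent ev, Transform const& m)
{
    float const w = m[6] * ev.x + m[7] * ev.y + m[8];
    return {(m[0] * ev.x + m[1] * ev.y + m[2]) / w,
            (m[3] * ev.x + m[4] * ev.y + m[5]) / w};
}

The stages in between - the shell deciding whether to swallow the event, and
picking the destination client - depend on shell and scene state, which is
exactly why they belong above EventHub.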

There are things which might mix the layers, though - you could
implement touchpad acceleration in a way that depends on knowing the
screen size, so an equal movement on the touchpad results in an equal
fraction of the screen traversed¹. And there's the possibility of
probabilistic input stuff, where the higher levels use knowledge about
clients and windows and such to help interpret ambiguous events.

I think those things are best handled as we come to them, though.

> This seems like a mismatch with the
> EventHub idea of multiplexing the event streams.
> 
> Maybe it's not pragmatically the best though? This needs to be a flexible
> interface. Both for libtouchpad, and perhaps even other input drivers
> further outside of our control in the future.

Right. We *will* have external input drivers at some point; there
inevitably turns out to be processing that needs to be done outside the
kernel. We're not going to want to add touchpad, wacom tablet, kinect,
wiimote, $RANDOM_INPUT_DEVICE code directly into Mir.

¹: I suspect this might be better implemented by doing the acceleration
first in fixed-point subpixel-accurate events and then scaling further
up in the stack; after all, this requires knowledge of where the pointer
currently is.
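
A tiny sketch of what I mean, with a made-up 24.8 fixed-point format and a
toy gain curve - the point is just that the sub-pixel remainder gets carried
between events instead of being rounded away:

#include <cstdint>

int32_t constexpr subpixel_shift = 8;  // 24.8 fixed point: 1/256 of a unit

struct AcceleratedAxis
{
    int32_t residual{0};  // accumulated fraction, in 24.8 units

    int32_t accelerate(int32_t raw_delta)
    {
        // Toy gain curve in 24.8: 1.0x for slow motion, 1.75x for fast.
        int32_t const gain_fp = (raw_delta * raw_delta < 16) ? 256 : 448;
        residual += raw_delta * gain_fp;  // product is already in 24.8 units

        int32_t const whole = residual >> subpixel_shift;  // whole units to emit now
        residual -= whole * (1 << subpixel_shift);         // keep the sub-unit remainder
        return whole;
    }
};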