Clients reading their surface position on screen

Christopher James Halse Rogers chris at cooperteam.net
Thu Jul 24 06:31:13 UTC 2014



On Thu, Jul 24, 2014 at 4:01 PM, Daniel van Vugt 
<daniel.van.vugt at canonical.com> wrote:
> Just remember to map points like the Qt interface does:
>    Point screen_coord = mir_surface_coord_to_screen(surface, 
> client_coord);
> So we are covered for arbitrary 3D transformations (don't assume 
> windows are on screen as rectangles).
> 
> That only leaves two problems which are not really problems:
>   (1) Races -- Make sure you don't move a window between getting its 
> input coordinates and synthesising an event.
>   (2) Mirroring e.g. in desktop previews where the same surface is 
> composted multiple times in the frame. Actually that's not an issue 
> because the input coordinate mapping stuff only cares about the real 
> surface location.

What is the “real” surface location? If we say that it's the 
location where input events will hit it, then it's entirely possible 
that there will be more than one such location - indeed, you can do 
roughly this with Compiz, but obviously without the input bit.
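
For concreteness, here is a minimal sketch of how a client might call the 
mapping API as proposed above. Note that mir_surface_coord_to_screen is the 
proposed call, not an existing Mir function, and the Point type is assumed 
purely for illustration:

    /* Opaque surface handle, as in the Mir client library. */
    typedef struct MirSurface MirSurface;

    /* Assumed point type for this sketch. */
    typedef struct { int x; int y; } Point;

    /* The proposed mapping call from the quoted message: map a
     * surface-local coordinate to a screen coordinate, taking the
     * compositor's current transformation of the surface into account. */
    Point mir_surface_coord_to_screen(MirSurface *surface, Point client_coord);

    /* Example: find where a surface-local point lands on screen,
     * e.g. before synthesising an input event at that position. */
    static Point client_point_to_screen(MirSurface *surface, int x, int y)
    {
        Point client_coord = { x, y };
        return mir_surface_coord_to_screen(surface, client_coord);
    }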

The existence of such an API means that shells must have a concept of 
the canonical location of a surface. That's quite a limitation on a 
shell.



