Subsurface support, or delegated compositing

Daniel van Vugt daniel.van.vugt at canonical.com
Mon Nov 25 07:00:26 UTC 2013


(a) What's the use-case for needing to synchronize parent/child
rendering? I'm thinking most use-cases don't need synchronization
between the two clients. The server already ensures there's always a
buffer to render (without blocking) and without tearing.
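
To be concrete about the non-blocking claim: conceptually the server
keeps a small queue of buffers per surface, so there is always a
completed frame to composite even while the client is drawing the next
one. A minimal sketch of the idea (hypothetical types and names, not
actual Mir code; a real queue would also pin whichever buffer the
compositor is still reading from):

    #include <stdbool.h>

    enum { NBUFFERS = 3 };

    struct buffer { int id; bool ready; };

    struct swap_queue {
        struct buffer buffers[NBUFFERS];
        int client_index;     /* buffer the client renders into        */
        int compositor_index; /* newest complete buffer, ready to show */
    };

    /* Client finished a frame: publish it and move to a free buffer.
     * Note there is no wait here - the client never blocks. */
    static void client_swap(struct swap_queue *q)
    {
        q->buffers[q->client_index].ready = true;
        q->compositor_index = q->client_index;
        q->client_index = (q->client_index + 1) % NBUFFERS;
    }

    /* The compositor always finds a complete frame to composite. */
    static struct buffer const *compositor_acquire(struct swap_queue const *q)
    {
        return &q->buffers[q->compositor_index];
    }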

(b) Why wouldn't we just deliver input to whatever surface is on top
and handling input? Any events not handled by the child/subsurface can
fall through the stack (to the parent and beyond). Although I don't
think we yet have the client API to designate an "input area" or to
tell the server to replay events to the lower level.
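
The fall-through model I have in mind looks roughly like this
(hypothetical structures, not our actual input stack): walk the surface
stack from the top, test the event position against each surface's
input area, and keep descending until someone consumes the event.

    #include <stdbool.h>
    #include <stddef.h>

    struct rect { int x, y, w, h; };

    struct surface {
        struct rect input_area;   /* the "input area" mentioned above */
        /* Returns false to ask the server to replay the event below. */
        bool (*handle_event)(struct surface *s, int x, int y);
        struct surface *below;    /* next surface down the stack */
    };

    static bool contains(struct rect const *r, int x, int y)
    {
        return x >= r->x && x < r->x + r->w &&
               y >= r->y && y < r->y + r->h;
    }

    /* Deliver to the topmost surface whose input area contains the
     * event; if that surface declines, replay to the next one down. */
    static void dispatch(struct surface *top, int x, int y)
    {
        for (struct surface *s = top; s != NULL; s = s->below)
            if (contains(&s->input_area, x, y) && s->handle_event(s, x, y))
                return;
    }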


On 25/11/13 14:51, Christopher James Halse Rogers wrote:
> One of the architectural things that I want to get done at the sprint
> next week is a solid idea of how we want to do nested
> compositing/out-of-process plugins/subsurfaces - which all seem to me to
> be aspects of the same problem.
>
> In order to prime the discussion - and to invite outside contributions -
> I thought I'd lay out the usecases as I see them before we get to
> London.
>
> There are two conceptual use-cases here -
> 1) “I want to delegate some of my UI to a third party”, and
> 2) “I need to do some compositing, and want to do this efficiently”
>
> A Unity8 session running under unity-system-compositor falls under (2).
>
> A video player playing a YUV stream that might also want to throw some
> RGB-rendered UI over it is also (2); a video player that has some chrome
> around a video widget is (1) and (2).
>
> The “embed bits of other applications in our window” requested on
> https://bugs.launchpad.net/mir/+bug/1230091 is firmly in (2).
>
> As I see it, there are also two classes of problem:
> a) How is the rendering loop coordinated between parent and child - does
> the parent need to swap each time the child has new rendering, or can
> the child swap independently, or are both modes available? What happens
> if a child also wishes to embed children of its own?
>
> This is the only concern for the type (2) use-case.
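
(Aside: Wayland's subsurface protocol answers "are both modes
available?" with yes - each subsurface can be switched between the two
commit modes at run time. A minimal client-side fragment, assuming the
wl_subsurface object already exists:

    /* Synchronized: the child's commits are cached and only take
     * effect when the parent surface commits. */
    wl_subsurface_set_sync(subsurface);

    /* Desynchronized: the child swaps independently of the parent. */
    wl_subsurface_set_desync(subsurface);

Nesting is handled too: a synchronized child of a synchronized child is
only applied once the whole chain up to the top commits.)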
>
> b) How is input handled? Does the parent need to proxy input events and
> forward them on? How does enter/leave notification work for the parent
> and child? Can the child return events to the parent? Etc.
>
> This is necessary for the type (1) use-cases, and also seems to be the
> hairy bit.
>
> Weston (but not yet Wayland) currently partially solves (2) with the
> subsurfaces protocol, which has chosen the “no child rendering appears
> until the parent swaps” approach, and doesn't handle out-of-process
> renderers at all.
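
For concreteness, the client-side flow with that protocol looks roughly
like this (a sketch; surface creation, buffers and binding the
wl_subcompositor global are all omitted):

    /* Make 'child' a subsurface of 'parent', positioned within it. */
    struct wl_subsurface *sub =
        wl_subcompositor_get_subsurface(subcompositor, child, parent);
    wl_subsurface_set_position(sub, 32, 32);
    wl_subsurface_set_sync(sub);  /* synchronized is the default mode */

    /* The child's new frame is only cached by the compositor... */
    wl_surface_attach(child, child_buffer, 0, 0);
    wl_surface_damage(child, 0, 0, child_width, child_height);
    wl_surface_commit(child);

    /* ...and becomes visible when the parent next commits. */
    wl_surface_attach(parent, parent_buffer, 0, 0);
    wl_surface_damage(parent, 0, 0, parent_width, parent_height);
    wl_surface_commit(parent);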
>
> For full out-of-process rendering, and for the type (1) use-cases, my
> understanding of the current state of the art is that the parent should
> become a Wayland compositor itself. This seems a bit of a cop-out to me,
> and doesn't really solve case (2); however, this area is gnarly, so it
> might prove to be the best solution.
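
For scale, "become a Wayland compositor" means at minimum creating your
own wl_display and implementing the core globals. A skeleton using
libwayland-server (the calls are real, but everything interesting -
wl_compositor, wl_shm, a shell, actually compositing the children's
buffers - is elided):

    #include <wayland-server.h>

    int main(void)
    {
        struct wl_display *display = wl_display_create();

        /* Expose a socket for the embedded clients to connect to. */
        if (wl_display_add_socket(display, "parent-0") < 0)
            return 1;

        /* A real nested compositor would now register the core
         * globals with wl_global_create() and composite its clients'
         * buffers into its own surface. */

        wl_display_run(display);  /* event loop; runs until stopped */
        wl_display_destroy(display);
        return 0;
    }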
>
> For X, the relevant prior-art is XEMBED¹, in all of its
> map-an-invisible-1x1-window-to-work-around-X11's-focus-model glory.
>
> ¹: http://standards.freedesktop.org/xembed-spec/xembed-spec-latest.html
>


