The case for a "SceneGraph"
alan.griffiths at canonical.com
Thu Jul 18 10:27:00 UTC 2013
At a recent hangout I took the action to produce some notes on what a
"SceneGraph" means for Mir.
We've had frequent discussions along the lines of "this information
belongs in the SceneGraph, but as we don't have one we'll stick it
<somewhere>". This, so far mythical, SceneGraph fulfills the "model"
role in our core MVC pattern (see design/Architecture.dia in the source
tree) and incorporates functionality currently present in a number of
parts of the system.
The present SurfaceStack implementation is a primitive SceneGraph that
was "the simplest thing that might work" when it was added, but we've
worked around its limitations for long enough to know that it is
simplistic and fails to support the way we want the system to grow. The
SurfaceController was initially intended as the Controller, but I think
it is now clear that the Controller should be a part of the shell.
So, what do we want from a SceneGraph?
Looking at the architecture diagram and the code (which are in
remarkable accord) there are a number of clients to the SceneGraph:
The compositor needs access to the buffers to be composited and corresponding meta
information about the way they are to be rendered. It doesn't happen now
(the compositor has logic to determine which buffers to render), but the
SceneGraph should be supplying exactly those buffers and meta
information that the compositor needs.
Requirement: the SceneGraph needs to be able to send the buffers (and
compositing metadata) that are visible on a specified output to a
receiver within the compositor.
An implication of this is that the SceneGraph needs to be able to model
the output devices and know which surfaces are associated with each output.
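As a rough illustration of that requirement, here is a minimal C++ sketch. All the names (Renderable, OutputId, for_each_visible, and so on) are hypothetical and invented for this example; none of them are existing Mir API:

```cpp
#include <functional>
#include <map>
#include <vector>

struct Buffer { int id; };
struct RenderMetadata { float alpha; int x, y; };  // compositing hints

struct Renderable { Buffer buffer; RenderMetadata meta; };

using OutputId = int;

// The "receiver within the compositor" from the requirement above.
using RenderableReceiver = std::function<void(Renderable const&)>;

class SceneGraph
{
public:
    void add_renderable(OutputId output, Renderable r)
    {
        renderables_[output].push_back(r);
    }

    // Supply exactly the buffers (and compositing metadata) that are
    // visible on one output - no compositor-side filtering needed.
    void for_each_visible(OutputId output, RenderableReceiver const& receive) const
    {
        auto const it = renderables_.find(output);
        if (it == renderables_.end()) return;
        for (auto const& r : it->second) receive(r);
    }

private:
    // Modelling outputs inside the SceneGraph, per the implication above.
    std::map<OutputId, std::vector<Renderable>> renderables_;
};
```

The point of the sketch is only that the compositor receives a ready-filtered stream per output, rather than holding its own logic to decide which buffers to render.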
The current implementation has some simple logic that informs the
compositor that changes have happened. We could make this more useful
with a cleverer SceneGraph - e.g. one that identifies what has changed
for which outputs.
Side note: We currently implement visibility as a flag on each surface,
but it might equally be an association within the SceneGraph - what is
clear is that it should be part of the way that the SceneGraph passes
buffers for rendering, not part of the metadata.
Input needs to determine where to route events based upon the input
aware areas presented by some surfaces. The code here is undergoing a
migration from an Android-based input stack to something more integrated.
There are two scenarios: keyboard events (which go to the current input
focus) and events that specify a point for which a corresponding input
target has to be found.
Currently the shell conspires with input to keep track of the input
target. (The shell, in its role as Controller, should control the focus,
but the focus information should be in the model - i.e. the SceneGraph.)
I'm not sure of the exact current state of the mechanism for
position-based input events but, to avoid race conditions when the SceneGraph
changes, it should be the SceneGraph that identifies candidate input
targets based on position.
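To make that concrete, a hypothetical sketch of the SceneGraph answering position-based target queries itself (again, none of these names are real Mir API; the stacking and hit-testing rules are illustrative assumptions):

```cpp
#include <optional>
#include <vector>

struct Rect
{
    int x, y, w, h;
    bool contains(int px, int py) const
    {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

struct Surface { int id; Rect input_area; };

class SceneGraph
{
public:
    // Keep the stack top-most first, so hit-testing searches front to back.
    void raise(Surface s) { stack_.insert(stack_.begin(), s); }

    // The SceneGraph, not the input code, picks the candidate target,
    // so the answer stays consistent with concurrent scene changes.
    std::optional<Surface> input_target_at(int x, int y) const
    {
        for (auto const& s : stack_)
            if (s.input_area.contains(x, y)) return s;
        return std::nullopt;
    }

private:
    std::vector<Surface> stack_;
};
```

Because the lookup happens inside the SceneGraph, it can be made atomic with respect to reordering or removal of surfaces, which is where the race conditions mentioned above would otherwise creep in.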
As the Controller the shell should be mediating all updates to the
SceneGraph - it gets to decide the policy for surface placement,
ordering, focus changes, etc. This is best done by taking the raw
request and posting the corresponding updates to the SceneGraph.
From the point of view of the SceneGraph we don't need to distinguish
between the Mir "shell" code and the Unity shell code (the requirements
on the SceneGraph should be the same). In practical terms a lot of
interactions should stay on the C++ side and these probably best fit in
the Mir codebase.
Surface creation: the shell needs to know when sessions create surfaces,
and have control over where they are placed. Currently the Mir
ApplicationSession receives the create_surface request from the frontend and has
control of the request passed to the SurfaceController: we have two
objects where we need one.
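One way to read "taking the raw request and posting the corresponding updates" is sketched below. The split between placement policy and model update is the point; the class and function names, and the toy cascade policy, are all invented for illustration:

```cpp
#include <string>
#include <utility>
#include <vector>

struct SurfaceRequest { std::string name; int width, height; };
struct PlacedSurface  { SurfaceRequest request; int x, y; };

// Model: the SceneGraph only stores what the Controller posts to it.
class SceneGraph
{
public:
    void add_surface(PlacedSurface s) { surfaces_.push_back(std::move(s)); }
    std::vector<PlacedSurface> const& surfaces() const { return surfaces_; }

private:
    std::vector<PlacedSurface> surfaces_;
};

// Controller: the shell owns the placement policy; it takes the raw
// request from the frontend and posts the resulting update to the model.
class Shell
{
public:
    explicit Shell(SceneGraph& scene) : scene_(scene) {}

    void handle_create_surface(SurfaceRequest const& request)
    {
        auto const position = place(request);                     // policy
        scene_.add_surface({request, position.first, position.second});  // model update
    }

private:
    // Toy placement policy: cascade each new surface by 20 pixels.
    std::pair<int, int> place(SurfaceRequest const&) const
    {
        auto const offset = 20 * static_cast<int>(scene_.surfaces().size());
        return {offset, offset};
    }

    SceneGraph& scene_;
};
```

With this shape there is a single object on the request path, and the decision ("where does this surface go?") lives entirely in the shell while the resulting state lives entirely in the SceneGraph.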
Session creation: this is currently modelled in shell.
Surface association: the shell (and possibly other clients) needs to be
able to navigate between a session and its associated surfaces. The
current SceneGraph has no concept of "session" - this relationship is
currently held within ApplicationSession: this gives ApplicationSession
a mixture of Controller and Model responsibilities.
The SceneGraph should have a concept of "session" as there are
operations it should perform on all the surfaces associated with the
session.

DepthID, z-order, etc.: the implementations I've seen around these
don't leave me with a clear perception of the problem being solved -
maybe someone else can fill out some details. There clearly is a need
for an ordering to the buffers presented for rendering on each output
but, so far, I don't see the need for the complexity we have.
We need an improved Model and Controller for our SceneGraph - and the
model should support at least three types of node: sessions, surfaces
and outputs (and understand the relationships between them).
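As a hypothetical sketch of that three-node-type model - sessions, surfaces and outputs, with the relationships held in one place rather than spread across ApplicationSession, the compositor and input - something like:

```cpp
#include <map>
#include <vector>

using SessionId = int;
using SurfaceId = int;
using OutputId  = int;

// A minimal model holding all three node types and their relationships.
class SceneGraph
{
public:
    void add_surface(SessionId session, SurfaceId surface, OutputId output)
    {
        session_surfaces_[session].push_back(surface);
        surface_output_[surface] = output;
    }

    // Navigate from a session to its associated surfaces
    // (the relationship currently held inside ApplicationSession)...
    std::vector<SurfaceId> surfaces_of(SessionId session) const
    {
        auto const it = session_surfaces_.find(session);
        if (it == session_surfaces_.end()) return {};
        return it->second;
    }

    // ...and find every surface associated with a given output
    // (what the compositor needs).
    std::vector<SurfaceId> surfaces_on(OutputId output) const
    {
        std::vector<SurfaceId> result;
        for (auto const& entry : surface_output_)
            if (entry.second == output) result.push_back(entry.first);
        return result;
    }

private:
    std::map<SessionId, std::vector<SurfaceId>> session_surfaces_;
    std::map<SurfaceId, OutputId> surface_output_;
};
```

The structure is deliberately naive; the point is only that session-to-surface and surface-to-output navigation both come from the same model, so they cannot fall out of sync.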
The representation of outputs in the SceneGraph implies that there is
more interaction with the display configuration than at present. (N.b.
any updates to the display configuration should be mediated by the
shell.)
We're currently distributing knowledge of our "scene" around compositor,
shell, and input. That is going to cause synchronization issues - we
should move all this knowledge into one place.
The above is just a first, high-level, stab at outlining the
requirements - we clearly need to agree the scope and then drill down
into a lot more detail as the next piece of work.