The case for a "SceneGraph"

Robert Carr robert.carr at canonical.com
Wed Jul 24 23:34:41 UTC 2013


Sorry if there's a bit of a rough-edged tone in the mail. I feel like I
don't understand what people are talking about with using the Qt
scene-graph and it makes me feel like I don't know what I am doing. I am
just insecure, not hostile :)


On Wed, Jul 24, 2013 at 4:28 PM, Robert Carr <robert.carr at canonical.com> wrote:

> Trying to add my two cents. I agree we need the data structure and set of
> interfaces outlined (roughly) in the original email. It is true
> that in the interface presented to the compositor, this component takes on a
> scene graph role...I think that is not the problem we are trying to solve
> at the moment though.
>
> It seems to me our most pressing problem is synchronization issues due
> to distribution of state. Essentially several parts of the system (shell,
> surfaces, input) each maintain some internal model which corresponds to a
> view of the surface stack. The lack of an authoritative source of state
> makes synchronization difficult. Likewise, it makes it unclear how to
> implement some really common patterns for the shell (i.e. simple
> notify-interfere patterns), and I think it has led to the sort of
> 'anti-pattern' where the shell has to subclass particular factory
> interfaces to view certain state.
>
> So, I think we need to rework the Mir interfaces around some sort of
> multi-index data structure (perhaps with some sort of transactional
> support) to fulfill these requirements. The difficult challenge is the
> design of the interfaces between this data store and other Mir
> components/the shell. Given this, I expect the data structure
> implementation itself to be trivial.
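
To make the idea concrete, here is a minimal sketch of such a multi-index store. All names here are hypothetical (not actual Mir API): surfaces are indexed both by id and by owning session, and every mutation goes through a single transaction method so readers never observe a half-applied update.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical sketch, not actual Mir API.
struct Surface { int id; std::string session; int z; };

class SurfaceStore {
public:
    // Run a batch of mutations atomically with respect to readers.
    // (The callable must use only the mutating methods; the reading
    // methods take the same lock and would deadlock.)
    void apply(std::function<void(SurfaceStore&)> const& tx) {
        std::lock_guard<std::mutex> lock(mutex);
        tx(*this);
    }

    // Mutating method: only call from within apply().
    void add(Surface s) {
        auto sp = std::make_shared<Surface>(std::move(s));
        by_id[sp->id] = sp;
        by_session[sp->session].push_back(sp);
    }

    // Index 1: lookup by surface id.
    std::shared_ptr<Surface> lookup(int id) const {
        std::lock_guard<std::mutex> lock(mutex);
        auto it = by_id.find(id);
        return it == by_id.end() ? nullptr : it->second;
    }

    // Index 2: all surfaces belonging to a session.
    std::vector<std::shared_ptr<Surface>> for_session(std::string const& name) const {
        std::lock_guard<std::mutex> lock(mutex);
        auto it = by_session.find(name);
        return it == by_session.end()
            ? std::vector<std::shared_ptr<Surface>>{} : it->second;
    }

private:
    mutable std::mutex mutex;
    std::map<int, std::shared_ptr<Surface>> by_id;
    std::map<std::string, std::vector<std::shared_ptr<Surface>>> by_session;
};
```

The point of the transaction method is that the interface, not the caller, decides the granularity of synchronization - which is exactly the property the distributed-state approach lacks.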
>
> As for the Qt scene graph.
>
> I think it is easy to say "Mir will provide these interfaces, and the Qt
> scene graph will implement them". It's difficult for me to understand what
> this means though. Are we discussing using the Qt scene graph for
> rendering, or only as a data store? Either way, what approach can we use
> to integrate it with the compositor and/or renderer (accounting for
> features like hardware overlays, multi-monitor, and composition bypass)?
> In doing so are we expecting performance gains? We should consider that
> the usage profile the Qt scene graph is optimized for is likely quite
> different from the usage profile exposed by Mir. It would be surprising to
> me if a custom renderer tailored to Mir were not more performant.
>
> Another potential benefit that's been discussed is ease of interoperability
> with QML. I think maybe we are forgetting the whole picture here though. If
> the idea is that different parts of the Mir system communicate through this
> set of interfaces implemented by the Qt scene graph, then how can we really
> allow QML to skip these interfaces and deal directly with the scene graph
> without losing our architecture for synchronization?
>
> I don't mean to raise FUD, but it is unclear enough to me that I would not
> know the first step if assigned it as a task. (My first question would be:
> does ms::Surface implement QQuickItem? If not, how do we avoid two copies
> of the SurfaceStack?) I also think that ease of applying graphical effects
> is not really something holding us up at the moment, or something likely
> to be a significant drain on development resources any time soon.
>
> So anyway, I think we should focus on the data structures and interfaces
> around the surface store in Mir before spiking on using the Qt scene graph.
> I'll try to write a second email soon on the thoughts I'm starting to
> develop about this surface store.
>
>
> On Thu, Jul 18, 2013 at 3:27 AM, Alan Griffiths <
> alan.griffiths at canonical.com> wrote:
>
>>  Hi All,
>>
>> at a recent hangout I took the action to produce some notes on what a
>> "SceneGraph" means for Mir.
>>
>> We've had frequent discussions along the lines of "this information
>> belongs in the SceneGraph, but as we don't have one we'll stick it
>> <somewhere>". This, so far mythical, SceneGraph fulfills the "model" role
>> in our core MVC pattern (see design/Architecture.dia in the source tree)
>> and incorporates functionality currently present in a number of parts of
>> the system.
>>
>> The present SurfaceStack implementation is a primitive SceneGraph that
>> was "the simplest thing that might work" when it was added, but we've
>> worked around its limitations for long enough to know that it is simplistic
>> and fails to support the way we want the system to grow. The
>> SurfaceController was initially intended as the Controller, but I think it
>> is now clear that the Controller should be a part of the shell.
>>
>> So, what do we want from a SceneGraph?
>>
>> Looking at the architecture diagram and the code (which are in remarkable
>> accord) there are a number of clients to the SceneGraph:
>>
>> *The compositor:*
>>
>> This needs access to the buffers to be composited and corresponding meta
>> information about the way they are to be rendered. It doesn't happen now
>> (the compositor has logic to determine which buffers to render), but the
>> SceneGraph should be supplying exactly those buffers and meta information
>> that the compositor needs.
>>
>> Requirement: the SceneGraph needs to be able to send the buffers (and
>> compositing metadata) that are visible on a specified output to a receiver
>> within the compositor.
>>
>> An implication of this is that the SceneGraph needs to be able to model
>> the output devices and know which surfaces are associated with each output.
>>
>> The current implementation has some simple logic that informs the
>> compositor that changes have happened. We could make this more useful with
>> a cleverer SceneGraph - e.g. identifying what has changed for which output.
>>
>> Side note: We currently implement visibility as a flag on each surface,
>> but it might equally be an association within the SceneGraph - what is
>> clear is that it should be part of the way that the SceneGraph passes
>> buffers for rendering, not part of the metadata.
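
The compositor-facing requirement might be sketched like this (all names hypothetical, not actual Mir API): the graph knows which surfaces sit on which output, and visibility is applied while passing buffers to the receiver, so the compositor holds no visibility logic of its own.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <vector>

// Hypothetical sketch, not actual Mir API.
struct BufferInfo {
    int buffer_id;   // handle to the client buffer to composite
    float alpha;     // compositing metadata travels with the buffer
    bool visible;
};

class SceneGraph {
public:
    // Associate a surface's buffer with an output, back-to-front.
    void place(int output, BufferInfo info) {
        surfaces[output].push_back(info);
    }

    // Send exactly the visible buffers (and their compositing metadata)
    // for one output to a receiver inside the compositor.
    void copy_to(int output,
                 std::function<void(BufferInfo const&)> const& receiver) const {
        auto it = surfaces.find(output);
        if (it == surfaces.end()) return;
        for (auto const& info : it->second)
            if (info.visible) receiver(info);
    }

private:
    std::map<int, std::vector<BufferInfo>> surfaces;  // output -> back-to-front
};
```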
>>
>> *Input:*
>>
>> Input needs to determine where to route events based upon the input-aware
>> areas presented by some surfaces. The code here is undergoing a migration
>> from an Android-based input stack to something more integrated into Mir.
>>
>> There are two scenarios: keyboard events (which go to the current input
>> focus) and events that specify a point for which a corresponding input
>> target has to be found.
>>
>> Currently the shell conspires with input to keep track of the input
>> target. (The shell, in its role as Controller, should control the focus,
>> but the focus information should be in the model - i.e. the SceneGraph.)
>>
>> I'm not sure of the exact state of the mechanism for position-based input
>> events currently but, to avoid race conditions when the SceneGraph
>> changes, it should be the SceneGraph that identifies candidate input
>> targets based on position.
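
A sketch of what position-based resolution inside the SceneGraph could look like (names hypothetical, not actual Mir API): because the graph itself answers "what is under this point?", the answer is always consistent with the current stacking order, and input keeps no copy of the stack.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch, not actual Mir API.
struct InputRegion { int id; int x, y, width, height; };

class InputResolver {
public:
    // Regions are kept back-to-front; the front-most region is last.
    void push(InputRegion r) { regions.push_back(r); }

    // Return the id of the front-most input-aware region containing
    // (px, py), or -1 if the point hits nothing.
    int target_at(int px, int py) const {
        for (auto it = regions.rbegin(); it != regions.rend(); ++it)
            if (px >= it->x && px < it->x + it->width &&
                py >= it->y && py < it->y + it->height)
                return it->id;
        return -1;
    }

private:
    std::vector<InputRegion> regions;
};
```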
>>
>> *The Shell:*
>>
>> As the Controller, the shell should be mediating all updates to the
>> SceneGraph - it gets to decide the policy for surface placement, ordering,
>> focus changes, etc. This is best done by taking the raw request and
>> posting the corresponding updates to the SceneGraph.
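
The raw-request-to-update flow might be sketched as follows (names and the output size are hypothetical, not actual Mir API): the shell takes the surface-creation request, applies its placement policy, and posts the result to the model; the model stores state but enforces no policy of its own.

```cpp
#include <cassert>
#include <map>

// Hypothetical sketch, not actual Mir API.
struct SurfaceRequest { int width, height; };
struct Placement { int x, y, width, height; };

class Model {
public:
    void add_surface(int id, Placement p) { placements[id] = p; }
    Placement placement_of(int id) const { return placements.at(id); }
private:
    std::map<int, Placement> placements;
};

class Shell {
public:
    explicit Shell(Model& m) : model(m) {}

    // Policy lives here: centre the new surface on an (assumed) 1280x720
    // output, then post the placement to the model.
    int create_surface(SurfaceRequest const& req) {
        Placement const p{(1280 - req.width) / 2, (720 - req.height) / 2,
                          req.width, req.height};
        int const id = next_id++;
        model.add_surface(id, p);
        return id;
    }

private:
    Model& model;
    int next_id = 0;
};
```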
>>
>> From the point of view of the SceneGraph we don't need to distinguish
>> between the Mir "shell" code and the Unity shell code (the requirements on
>> the SceneGraph should be the same). In practical terms a lot of
>> interactions should stay on the C++ side and these probably best fit in the
>> Mir codebase.
>>
>> Some examples:
>>
>> Surface creation: the shell needs to know when sessions create surfaces,
>> and have control over where they are placed. Currently the Mir
>> ApplicationSession receives the create_surface request from the frontend
>> and has control of the request passed to the SurfaceController: we have
>> two objects where we need one.
>>
>> Session creation: this is currently modelled in shell.
>>
>> Surface association: the shell (and possibly other clients) needs to be
>> able to navigate between a session and its associated surfaces. The current
>> SceneGraph has no concept of "session" - this relationship is currently
>> held within ApplicationSession: this gives ApplicationSession a mixture of
>> Controller and Model responsibilities.
>>
>> The SceneGraph should have a concept of "session" as there are operations
>> it should perform on all the surfaces associated with the session.
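
A sketch of why the model wants a "session" node type (names hypothetical, not actual Mir API): operations such as hiding apply to every surface a session owns, so the session-to-surfaces association belongs in the model rather than in ApplicationSession.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch, not actual Mir API.
struct Surface { int id; bool visible; };

class Model {
public:
    int add_surface(std::string const& session) {
        int const id = next_id++;
        surfaces[id] = Surface{id, true};
        sessions[session].push_back(id);
        return id;
    }

    // A session-wide operation performed entirely inside the model.
    void hide_session(std::string const& session) {
        for (int id : sessions[session]) surfaces[id].visible = false;
    }

    bool visible(int id) const { return surfaces.at(id).visible; }

private:
    int next_id = 0;
    std::map<int, Surface> surfaces;
    std::map<std::string, std::vector<int>> sessions;
};
```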
>>
>> DepthID, z-order, etc.: what I've seen implemented around these doesn't
>> leave me with a clear picture of the problem being solved - maybe someone
>> else can fill in some details. There clearly is a need for an ordering of
>> the buffers presented for rendering on each output but, so far, I don't
>> see the need for the complexity we have.
>>
>> *Summary:*
>>
>> We need an improved Model and Controller for our SceneGraph - and the
>> model should support at least three types of node: sessions, surfaces
>> and outputs (and understand the relationships between them).
>>
>> The representation of outputs in the SceneGraph implies that there is
>> more interaction with the display configuration than at present. (N.b. any
>> updates to the display configuration should be mediated by the
>> Controller/shell.)
>>
>> We're currently distributing knowledge of our "scene" around compositor,
>> shell, and input. That is going to cause synchronization issues - we should
>> move all this knowledge into one place.
>>
>> The above is just a first, high-level, stab at outlining the requirements
>> - we clearly need to agree the scope and then drill down into a lot more
>> detail as the next piece of work.
>>
>> Alan
>>
>> --
>> Mir-devel mailing list
>> Mir-devel at lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/mir-devel
>>
>>
>