Improving next_buffer() rpc

Kevin DuBois kevin.dubois at canonical.com
Thu Jul 10 14:23:28 UTC 2014


Okay, so resynthesizing the concerns to try to come up with a plan...

It seems the practical thing to do is first implement:
rpc exchange_buffer(Buffer) returns (Buffer)
which is, by and large, just an evolution of what we have now. The difference
that lets me proceed is that the buffer release is now more explicit. I
won't be changing the swapping algorithm in BufferQueue with this.
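
To make the ownership handoff concrete, here is a rough client-side sketch of
what a render loop looks like when the release is folded into the exchange
call. The Buffer struct and exchange_buffer() stub below are stand-ins, not
the real Mir client API or generated protobuf code:

    // Sketch only: 'Buffer' and exchange_buffer() are illustrative stubs.
    #include <cstdio>

    struct Buffer { int id; };

    // Pretend RPC: the client submits the buffer it just filled, and the
    // server's reply carries the next buffer it may render into. Releasing
    // the submitted buffer is part of the same round trip.
    Buffer exchange_buffer(Buffer const& submitted)
    {
        std::printf("server reclaimed buffer %d\n", submitted.id);
        return Buffer{submitted.id + 1};
    }

    int main()
    {
        Buffer current{0};  // the one buffer granted at create_surface()
        for (int frame = 0; frame != 3; ++frame)
        {
            // ... render into 'current' ...
            current = exchange_buffer(current);  // release + acquire in one call
        }
    }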

We can supplement this, when someone wants to increase or decrease the
client buffer count, with something like:
rpc request_additional_buffer(Void) returns (Buffer)
rpc release_client_buffer(Buffer) returns (Void)
so that the client can have spare buffers modelled as owned by the client.
Example [1]
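
As a sketch of the client-side bookkeeping this enables (again with stand-in
types; the real RPCs would go over the wire rather than being local
functions), a client growing and shrinking its pool of spare buffers might
look like:

    // Illustrative stubs for the two proposed RPCs; not real Mir code.
    #include <vector>

    struct Buffer { int id; };

    Buffer request_additional_buffer() { static int next = 1; return Buffer{next++}; }
    void release_client_buffer(Buffer const&) { /* ownership returns to the server */ }

    int main()
    {
        std::vector<Buffer> spare;  // buffers modelled as owned by the client

        spare.push_back(request_additional_buffer());  // 2 buffers to play with
        spare.push_back(request_additional_buffer());  // 3, if the client wants more

        // Later (say the surface is occluded), hand one back:
        release_client_buffer(spare.back());
        spare.pop_back();
    }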

And the longer-term plan to address bug 1253868
<https://bugs.launchpad.net/mir/+bug/1253868> will involve deeper changes
to the IPC and the BufferQueue, perhaps with asynchronous buffers or with
fd signalling mechanisms. These do have advantages (I especially like the
fd idea), but they also need extensive restructuring.
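
Just to illustrate why I like the fd idea (this is not a design, just the
general shape): the server could hand the client an eventfd over IPC, signal
it whenever a buffer is released back to the client, and the client could
then poll() it alongside its other event sources instead of blocking in an
RPC round trip:

    // Linux-only sketch; in reality the eventfd would arrive over the Mir
    // IPC channel rather than being created locally like this.
    #include <poll.h>
    #include <sys/eventfd.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        int buffer_ready_fd = eventfd(0, EFD_CLOEXEC);

        // "Server" side: a buffer was just released back to the client.
        uint64_t one = 1;
        write(buffer_ready_fd, &one, sizeof one);

        // Client side: wait until at least one buffer is available.
        pollfd pfd{buffer_ready_fd, POLLIN, 0};
        if (poll(&pfd, 1, -1) == 1)
        {
            uint64_t count = 0;
            read(buffer_ready_fd, &count, sizeof count);  // consume the signal
            std::printf("%llu buffer(s) ready\n", (unsigned long long)count);
        }
        close(buffer_ready_fd);
    }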

I think this is a decent plan that is reasonably forward-looking, while
letting me get to the delayed-wait/client-fence-passing goal that gave us
some good gains when u8 was an internal client.

[1]
client calls "rpc create_surface()" //has 1 buffer to play with: A
client calls "rpc request_additional_buffer()" //has 2 buffers to play with: A+B
client calls "rpc exchange_buffer()" //still has 2 buffers to play with, and
could have submitted A or B back to the server, getting C back.


On Thu, Jul 10, 2014 at 6:01 AM, Gerry Boland <gerry.boland at canonical.com>
wrote:

> On 09/07/14 16:39, Kevin Gunn wrote:
> > First
> > Not sure we're still on topic necessarily wrt changing from IDs to fds.
> > Do we need to conflate that with the double/triple buffering topic?
> > Let's answer this first...
> >
> > Second
> > While we're at it :) triple buffering isn't always a win. In the case of
> > small, frequent renders (as an example, "8x8 pixel square follow my
> > finger") you'll have potentially 2 extra buffers that need their 16ms of
> > fame on the screen in the queue, 1 at the session server, 1 at the system
> > server, which can look a little laggy. I'm willing to say in the same
> > breath, though, that this may be lunatic fringe. The win for the triple
> > buffering case is likely more common, which is spiky render times (14+ms)
> > amongst more normal render times (9-12ms).
> > +1 on giving empty buffers back to the clients to allow them to have a
> > "queue" of empty buffers at their disposal (I'm not sure if RAOF is
> > correct or duflu in that it's "synchronously waiting for a round trip
> > every swap"... can we already have an empty buffer queue on the client
> > side?)
>
>
> I also want to remind everyone that our default shipping configuration
> is a root Mir server with a nested Mir server as a client, and that
> nested Mir server manages most client apps the user will be interacting
> with.
>
> Nesting will increase input latency, as now there's not just 3 buffers
> in play, but more (5 yeah?).
>
> I had thought that the double-buffering idea was to try to reduce the
> number of buffers being used in the nested case. Sounds like Daniel
> isn't confident that'll work now, which is a pity.
> Thanks
> -G

