Improving next_buffer() rpc

Daniel van Vugt daniel.van.vugt at canonical.com
Wed Jul 9 09:35:51 UTC 2014


Forgive me for rambling on, but I just had an important realisation...

Our current desire to get back to double buffering exists only because
the Mir protocol synchronously waits for a round trip on every swap, and
somehow I thought that the buffer queue length affected the time spent
in the ready_to_composite state. Now I'm not so sure that's true.

If we changed the protocol to allow parallelism then, in theory, keeping
triple buffering with a fancy zero-latency swap-buffers call should
perform better than the current protocol, which has to wait for a round trip.

I cannot remember why I thought the length of the buffer queue affected 
the time from client-rendering to server-compositing. Perhaps we really 
do need to keep triple-buffering always-on so that the performance gain 
of a zero-latency client swap-buffers can be achieved...

In summary, I'm back to thinking that any replacement for next_buffer()
needs to support parallelism and not be so synchronous.
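
As a rough illustration of the shape I mean (the names here are invented
and this is not a concrete .proto proposal), the client-to-server half
could be a call whose reply carries nothing the client needs to wait for,
with freed buffers arriving back at the client as events instead:

    // Hypothetical: the client hands a finished buffer to the server
    // and does not block waiting for the next buffer in the reply.
    rpc submit_buffer(Buffer) returns (Void);

Then client swap buffers never has to stall on a round trip, whether the
surface is double or triple buffered.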

- Daniel


On 09/07/14 16:08, Daniel van Vugt wrote:
> Oops. I keep forgetting that the new BufferQueue doesn't allow the
> compositor to own fewer than one buffer, so there would no longer be any
> benefit to double-buffered clients from a more concurrent protocol :(
>
> Maybe Kevin's suggestion is just fine then, so long as the server is
> able to figure out the surface(Id) from the Buffer struct.
>
>
> On 09/07/14 15:41, Daniel van Vugt wrote:
>> Note that we're working on making double-buffering the default again and
>> triple-buffering the exception. In that case fixing LP: #1253868 may seem
>> pointless, but it is surprisingly still relevant, because a fully
>> parallelized design would significantly speed up double buffering too:
>> client swap buffers would no longer have to wait for a round trip before
>> returning and would instead be almost instant.
>>
>>
>> On 09/07/14 10:00, Daniel van Vugt wrote:
>>> Sounds better to just pass buffers around, although I'm not keen on any
>>> change that doesn't make progress on the performance bottleneck LP:
>>> #1253868. The bottleneck is the swapping/exchanging approach, which
>>> limits the client to holding only one buffer, so I don't think it's a
>>> good idea for new designs to still have that problem.
>>>
>>> In order to improve parallelism per LP: #1253868 you'd really have to
>>> receive new buffers as soon as they're free, which means getting them as
>>> MirEvents. Then you only need an RPC function to release them back to
>>> the server:
>>>
>>>     rpc release_buffer(Buffer) returns (Void);
>>>
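>>> The arrival side would then come to the client as events; purely as an
>>> illustration (this is not a real message in the .proto), the payload
>>> might look roughly like:
>>>
>>>     // Hypothetical event payload: the server announces a buffer that
>>>     // the client now owns and may render into.
>>>     message BufferAvailableEvent {
>>>         required SurfaceId surface_id = 1;
>>>         required Buffer buffer = 2;
>>>     }
>>>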
>>> Keep in mind that inter-process communication is the bottleneck here.
>>> If you allow a context switch between the server and the client, that's
>>> half to one millisecond (see mirping) per RPC round trip. It's more than
>>> double that for nested servers, so the protocol delay could be a
>>> significant factor. So I think any protocol enhancement should have
>>> parallelism designed in.
>>>
>>> I also think we need to be careful not to land any protocol changes in
>>> the RTM candidate series 0.4-0.5, so that the foundation for RTM is
>>> maximally mature (albeit not yet optimal).
>>>
>>> - Daniel
>>>
>>>
>>> On 08/07/14 21:10, Kevin DuBois wrote:
>>>> Hello mir team,
>>>>
>>>> In order to get the next buffer for the client, we currently have:
>>>>
>>>> rpc next_buffer(SurfaceId) returns (Buffer);
>>>>
>>>> which is problematic for me in working on [1] because it implicitly
>>>> releases the buffer from the client side, whereas for that performance
>>>> improvement I have to send an fd back to the server. So I was thinking
>>>> of adding an rpc method more like:
>>>>
>>>> rpc exchange_buffer(Buffer) returns (Buffer);
>>>>
>>>> This would be sufficient to pass the fence fd back, and the buffer id
>>>> in the Buffer protocol message would be sufficient for the server to
>>>> figure out which surface has sent back its buffer (given the global
>>>> buffer ids we're using).
>>>>
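>>>> As for carrying the fence itself, the Buffer message would just need
>>>> somewhere to put it. As a sketch only (field name and number invented,
>>>> not necessarily what would land), something like:
>>>>
>>>>     // Hypothetical extra field on the Buffer message for the fence fd.
>>>>     optional int32 fence = 10;
>>>>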
>>>> This does not address the problem noted in:
>>>> https://bugs.launchpad.net/mir/+bug/1253868
>>>> but I think that might be better addressed by having an exchange-type
>>>> rpc call (explicit or implicit) and by negotiating/increasing how many
>>>> buffers the client owns some other way.
>>>>
>>>> This seems like something that could have diverse opinions, so I'm
>>>> hoping to get some input on the protocol change here first.
>>>>
>>>> Thanks!
>>>> Kevin
>>>>
>>>> [1]
>>>> https://blueprints.launchpad.net/ubuntu/+spec/client-1410-mir-performance
>>>>
>>>>
>>>> item:
>>>> "[kdub] fencing improvements for clients add the ipc plumbing"
>>>>
>>>>
>>>
>>
>


