Improving next_buffer() rpc

Daniel van Vugt daniel.van.vugt at
Wed Jul 9 08:08:09 UTC 2014

Oops. I keep forgetting that the new BufferQueue prevents the compositor 
from owning fewer than one buffer, so a more concurrent protocol would no 
longer offer any benefit to double-buffered clients :(

Maybe Kevin's suggestion is just fine then, so long as the server is 
able to figure out the surface (or its SurfaceId) from the Buffer struct.
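
For that to work, the Buffer message presumably just needs to carry the 
globally unique buffer id Kevin mentions below. A minimal sketch (field 
names and numbers are illustrative, not the actual protocol definition):

    message Buffer {
        optional int32 buffer_id = 1;  // globally unique, so the server can
                                       // map buffer_id -> owning surface
        // ... remaining fields (size, stride, fds, ...) unchanged
    }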

On 09/07/14 15:41, Daniel van Vugt wrote:
> Note that we're working on making double-buffering the default again and
> triple-buffering the exception. In that case fixing LP: #1253868 may seem
> pointless, but it is surprisingly still relevant, because a fully
> parallelized design would significantly speed up double buffering too:
> client swap buffers would no longer have to wait for a round trip before
> returning and would instead be almost instant.
> On 09/07/14 10:00, Daniel van Vugt wrote:
>> Sounds better to just pass buffers around, although I'm not keen on any
>> change that doesn't make progress on the performance bottleneck LP:
>> #1253868. The bottleneck is the swapping/exchanging approach, which
>> limits the client to holding only one buffer, so I don't think it's a
>> good idea for new designs to retain that problem.
>> In order to improve parallelism per LP: #1253868 you'd really have to
>> receive new buffers as soon as they're free, which means getting them as
>> MirEvents. Then you only need an RPC function to release them back to
>> the server:
>>     rpc release_buffer(Buffer) returns (Void);
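>>
>> Sketched out, that design might look something like this (a sketch only;
>> the event name and fields are illustrative, not existing protocol):
>>
>>     // The server pushes each buffer to the client as soon as it is
>>     // free, as an event rather than as an RPC response:
>>     message BufferEvent {
>>         optional SurfaceId id = 1;   // which surface the buffer belongs to
>>         optional Buffer buffer = 2;  // the newly available buffer
>>     }
>>
>>     // The client never waits on a reply; releasing is one-way:
>>     rpc release_buffer(Buffer) returns (Void);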
>> Keep in mind that inter-process communication is the bottleneck here. If
>> you allow a context switch between the server and client, that's half to
>> one millisecond (see mirping) per RPC round trip. Double that or more for
>> nested servers, and you can see that protocol delay could be a
>> significant factor. So I think any protocol enhancement should have
>> parallelism designed in.
>> I also think we need to be careful not to land any protocol changes in
>> the RTM candidate series (0.4-0.5), so that the foundation for RTM is
>> maximally mature (albeit not yet optimal).
>> - Daniel
>> On 08/07/14 21:10, Kevin DuBois wrote:
>>> Hello mir team,
>>> In order to get the next buffer for the client, we currently have:
>>> rpc next_buffer(SurfaceId) returns (Buffer);
>>> which is problematic for me in working on [1], because it implicitly
>>> releases the buffer from the client side, whereas for that performance
>>> improvement I have to send an fd back to the server. So I was thinking
>>> of adding an rpc method more like:
>>> rpc exchange_buffer(Buffer) returns (Buffer);
>>> This would be sufficient to pass the fence fd back, and the buffer id
>>> in the Buffer protocol message would be sufficient for the server to
>>> figure out which surface has sent back its buffer (given the global
>>> buffer ids we're using).
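>>>
>>> For illustration, the Buffer message would then just need to carry the
>>> fence fd alongside its global id, roughly like this (a sketch; exact
>>> field names and numbers are assumptions, not the real .proto):
>>>
>>>     message Buffer {
>>>         optional int32 buffer_id = 1;  // global id: enough to find the
>>>                                        // surface it belongs to
>>>         repeated sint32 fd = 2;        // fence fd(s) travelling with it
>>>     }
>>>     rpc exchange_buffer(Buffer) returns (Buffer);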
>>> This does not address the problem noted in:
>>> but I think that might be better addressed by having an exchange-type
>>> rpc call (explicit or implicit) and negotiating/increasing how many
>>> buffers the client owns by some other means.
>>> This seems like something that could have diverse opinions, so I'm
>>> hoping to get some input on the protocol change here first.
>>> Thanks!
>>> Kevin
>>> [1]
>>> item:
>>> "[kdub] fencing improvements for clients add the ipc plumbing"
