Improving next_buffer() rpc

Christopher James Halse Rogers chris at cooperteam.net
Wed Jul 9 07:54:48 UTC 2014


On Wed, Jul 9, 2014 at 12:00 PM, Daniel van Vugt 
<daniel.van.vugt at canonical.com> wrote:
> Sounds better to just pass buffers around, although I'm not keen on 
> any change that doesn't make progress on the performance bottleneck, 
> LP: #1253868. The bottleneck is the swapping/exchanging approach, 
> which limits the client to holding only one buffer, so I don't think 
> it's a good idea for new designs to still have that problem.

I continue to disagree that this implies that clients cannot ever be 
more than double-buffered. What it means is that there's an RPC delay 
before they get the third frame.

Removing that delay is something we could do to increase performance, 
but it's nowhere near as bad as ‘triple buffering doesn't work’.

> 
> In order to improve parallelism per LP: #1253868 you'd really have to 
> receive new buffers as soon as they're free, which means getting them 
> as MirEvents. Then you only need an RPC function to release them back 
> to the server:
> 
>    rpc release_buffer(Buffer) returns (Void);

I'm not averse to this approach, although I'd call this submit_buffer.
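
For concreteness, that approach might look roughly like this in the 
.proto. The names and fields below are purely illustrative, not Mir's 
actual protocol definitions:

    // Illustrative sketch only -- not Mir's real mir_protobuf definitions.
    package sketch;

    message Buffer {
      optional int32 buffer_id = 1;           // global buffer id
      // fds (buffer handle, fence) travel as ancillary data on the
      // socket; this just records how many to expect there
      optional int32 fds_on_side_channel = 2;
    }

    // Pushed server -> client (e.g. wrapped in a MirEvent) whenever a
    // buffer becomes free, so the client can own several at once.
    message BufferEvent {
      optional Buffer buffer = 1;
    }

    message Void {
    }

    service BufferService {
      // The only client -> server buffer call: hand a finished buffer
      // (and its fence) back to the server.
      rpc submit_buffer(Buffer) returns (Void);
    }

The nice property is that the only round trip left is the hand-back; 
new buffers arrive without the client having to ask for them.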

While we're at it, could we kindly let the platform decide whether the 
client can allocate buffers or not? :)

> 
> Keep in mind that inter-process communication is the bottleneck here. 
> If you allow a context switch between the server and client, then 
> that's half to one millisecond (see mirping) per RPC round trip. More 
> than double that for nested servers, and you can see that the 
> protocol delay could be a significant factor. So I think any protocol 
> enhancement should have parallelism designed in.
> 
> I also think we need to be careful about not landing any protocol 
> changes in the RTM candidate series 0.4-0.5, so that the foundation 
> for RTM is maximally mature (albeit not yet optimal).

Ding!

> 
> - Daniel
> 
> 
> On 08/07/14 21:10, Kevin DuBois wrote:
>> Hello mir team,
>> 
>> In order to get the next buffer for the client, we currently have:
>> 
>> rpc next_buffer(SurfaceId) returns (Buffer);
>> 
>> which is problematic for me in working on [1], because it implicitly
>> releases the buffer from the client side, whereas for that
>> performance improvement I have to send an fd back to the server. So
>> I was thinking of adding an rpc method more like:
>> 
>> rpc exchange_buffer(Buffer) returns (Buffer);
>> 
>> This would be sufficient to pass the fence fd back, and the buffer 
>> id in the Buffer protocol message would be sufficient for the server 
>> to figure out which surface has sent back its buffer (given the 
>> global buffer ids we're using).
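
For reference, that could look something like the following on the 
wire; the field names are illustrative rather than the actual 
mir_protobuf definitions:

    message Buffer {
      optional int32 buffer_id = 1;   // global id identifies the surface
      // the fence fd itself is sent as ancillary data over the socket
      optional int32 fds_on_side_channel = 2;
    }

    // hand back the finished buffer (with its fence) and get the next
    // buffer to render into in the same round trip
    rpc exchange_buffer(Buffer) returns (Buffer);
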
>> 
>> This does not address the problem noted in:
>> https://bugs.launchpad.net/mir/+bug/1253868
>> but I think that might be better addressed by having an exchange-type
>> rpc call (explicit or implicit) and negotiating/increasing how many
>> buffers the client owns in some other way.
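
One purely hypothetical shape for that separate negotiation -- not an 
existing or proposed Mir rpc, with names invented only to illustrate 
the idea:

    // hypothetical: let the client ask to own up to N buffers at once
    message BufferCount {
      optional uint32 count = 1;
    }

    rpc set_buffer_count(BufferCount) returns (Void);
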
>> 
>> This seems like something that could have diverse opinions, so I'm
>> hoping to get some input on the protocol change here first.
>> 
>> Thanks!
>> Kevin
>> 
>> [1]
>> https://blueprints.launchpad.net/ubuntu/+spec/client-1410-mir-performance 
>> item:
>> "[kdub] fencing improvements for clients add the ipc plumbing"
>> 
>> 



