New Buffer Semantics Planning

Daniel van Vugt daniel.van.vugt at canonical.com
Fri Jun 26 02:39:52 UTC 2015


I'm curious (but not yet concerned) about how the new plan will deal 
with the transitions we have between 2-3-4 buffers, which are neatly 
self-contained in the single BufferQueue class right now. That said, 
since some responsibilities clearly live on one side and not the other, 
things could become conceptually simpler if we manage them carefully:

   framedropping: Always implemented in the client process as a 
non-blocking acquire (see the client-side sketch after this list). The 
server just receives new buffers more quickly than usual and needs the 
smarts to deal with (skip) a high rate of incoming buffers [1].

   bypass/overlays: Always implemented in the server process, invisible 
to the client. The server just can't enable those code paths until at 
least two buffers have been received for a surface.

   client wake-up: Regardless of the model/mode in place, the client 
would be woken up at the physical display rate by the server if one of 
its buffers has been consumed (but not woken otherwise). More frequent 
wake-ups for framedropping are the responsibility of libmirclient 
itself and don't require the server to do anything different (see the 
server-side sketch after this list).
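
To make the framedropping point concrete, here is a rough sketch of the 
client side; the names and types are illustrative stand-ins, not the 
real libmirclient code:

    #include <condition_variable>
    #include <deque>
    #include <mutex>

    // Illustrative client-side buffer pool; not real libmirclient types.
    struct Buffer;

    class ClientBufferPool
    {
    public:
        // Vsync-locked mode: block until the server has returned a buffer.
        Buffer* acquire_blocking()
        {
            std::unique_lock<std::mutex> lock{mutex};
            returned.wait(lock, [this] { return !free_buffers.empty(); });
            return take_front();
        }

        // Framedropping mode: never block the render loop. If the server
        // hasn't returned anything yet the caller gets nullptr and decides
        // whether to skip the frame or grow its pool; either way the
        // server just sees submissions arriving faster than usual.
        Buffer* acquire_non_blocking()
        {
            std::lock_guard<std::mutex> lock{mutex};
            return free_buffers.empty() ? nullptr : take_front();
        }

        // Called from the IPC layer when the server sends a buffer back.
        void buffer_returned(Buffer* b)
        {
            std::lock_guard<std::mutex> lock{mutex};
            free_buffers.push_back(b);
            returned.notify_one();
        }

    private:
        Buffer* take_front()
        {
            auto* b = free_buffers.front();
            free_buffers.pop_front();
            return b;
        }

        std::mutex mutex;
        std::condition_variable returned;
        std::deque<Buffer*> free_buffers;
    };

And for the bypass/overlay and wake-up points, the server-side 
bookkeeping could be as small as this (again invented names, not Mir's 
actual compositor classes):

    #include <cstddef>
    #include <functional>

    // Illustrative per-surface state on the server side; invented names.
    class SurfaceFeed
    {
    public:
        explicit SurfaceFeed(std::function<void()> wake_client)
            : wake_client{std::move(wake_client)} {}

        // Driven by the submit_buffer() RPC.
        void buffer_submitted() { ++buffers_received; }

        // Bypass/overlays need a buffer to scan out from while the client
        // keeps rendering, so only allow them once at least two buffers
        // have been received for this surface.
        bool bypass_allowed() const { return buffers_received >= 2; }

        // Called once per composite()/vsync: wake the client only if one
        // of its buffers was actually consumed this frame. Any extra
        // wake-ups for framedropping are purely the client's business.
        void frame_done(bool consumed_a_buffer)
        {
            if (consumed_a_buffer)
                wake_client();
        }

    private:
        std::function<void()> wake_client;
        std::size_t buffers_received{0};
    };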


[1] Idea: If the server skipped/dropped _all_ but the newest buffer it 
has for each surface on every composite(), then that would eliminate 
buffer lag and solve the problem of how to replace dynamic double 
buffering. Client processes would still only be woken up at the display 
rate, so vsync-locked animations would not speed up unnecessarily. 
Everyone wins -- minimal lag and maximal smoothness.
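
A minimal sketch of that per-composite() dropping policy, using a plain 
std::deque as a stand-in for the server's queue of ready buffers:

    #include <deque>

    struct Buffer;

    // Keep only the newest ready buffer for a surface, releasing
    // everything older back to the client. Called once per composite().
    Buffer* newest_dropping_older(std::deque<Buffer*>& ready_buffers,
                                  std::deque<Buffer*>& free_for_client)
    {
        while (ready_buffers.size() > 1)
        {
            free_for_client.push_back(ready_buffers.front()); // dropped
            ready_buffers.pop_front();
        }
        return ready_buffers.empty() ? nullptr : ready_buffers.front();
    }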



On 26/06/15 02:51, Kevin DuBois wrote:
> I've started spiking a bit on how to transition the system to what we've
> been calling the new buffer semantics¹, and have come up with a plan.
> We've already landed the IPC plumbing; now we have to make use of it to
> its full potential.
>
> Obviously, the number one thing to avoid is regressions in performance
> or functionality while transitioning. So, we'll get the new semantics up
> to par, and then switch the default from rpc exchange_buffer() to rpc
> submit_buffer().
>
> With the ability to send buffers without a client request, we're really
> turning the system around and eliminating the need for
> mc::BufferQueue::client_acquire(). As this is one of the four important
> entry points to BufferQueue, it seems difficult to have BufferQueue
> service both the exchange/next_buffer swapping and the new buffer
> semantics, especially as we have a semi-nice mc::BufferStream interface
> that we could write a new implementation for.
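
(As an aside, to picture what a submit-driven implementation of that 
interface might collapse to -- the names below are invented for 
illustration and are not the actual mc::BufferStream methods:)

    #include <deque>
    #include <memory>
    #include <mutex>

    // Invented sketch: a stream fed by client submissions rather than by
    // a client_acquire() round trip. Not the real mc::BufferStream.
    struct Buffer;

    class SubmitStream
    {
    public:
        // Driven by the submit_buffer() RPC: the client pushes, never asks.
        void submit(std::shared_ptr<Buffer> const& b)
        {
            std::lock_guard<std::mutex> lock{mutex};
            ready.push_back(b);
        }

        // The compositor pulls whatever is newest. In a real implementation
        // the older entries would be sent back to the client at this point;
        // here we just drop our references to them.
        std::shared_ptr<Buffer> lock_compositor_buffer()
        {
            std::lock_guard<std::mutex> lock{mutex};
            if (!ready.empty())
            {
                current = ready.back();
                ready.clear();
            }
            return current;  // re-shows the last frame if nothing new arrived
        }

    private:
        std::mutex mutex;
        std::deque<std::shared_ptr<Buffer>> ready;
        std::shared_ptr<Buffer> current;
    };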
>
> BufferQueue's unit test is tied to mc::BufferQueue, and has some
> threading cruft from back when we had to wait more often. BufferQueue
> actually has a simple locking strategy these days... just lock at the
> top of the member function. BufferQueue's unit test is also the guard we
> have against regressions, so we shouldn't move the test, to minimise the
> risk of regressions.
>
> So, I've started writing an integration-level test that currently tests
> BufferQueue in terms of production/consumption.² Instead of relying on
> threading to tease out the different patterns, the patterns are just
> specified to be the ones of interest (e.g. overproduction, starving the
> consumer). It's not yet complete (meaning it doesn't yet cover all the
> cases that BufferQueueTest does).
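
(For what it's worth, I read that as something along these lines -- a 
made-up sketch of the approach, not the actual code in the MP:)

    #include <functional>
    #include <vector>

    // Hypothetical pattern-driven harness: the interesting interleavings
    // are written down as data instead of being teased out with threads.
    enum class Step { produce, consume };

    void run_pattern(std::vector<Step> const& steps,
                     std::function<void()> const& produce,
                     std::function<void()> const& consume)
    {
        for (auto step : steps)
            step == Step::produce ? produce() : consume();
    }

    // e.g. overproduction: three client submissions per consumption
    std::vector<Step> const overproduction{
        Step::produce, Step::produce, Step::produce, Step::consume};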
>
> Once that test is roughly on par with the BufferQueue test, we start
> writing the client-side and server-side production code for the new
> buffer system. Once that's done and tested, we'll probably have to do a
> bit of careful study using some of the latency tools Alexandros has been
> working on, and some of the benchmarks, before flipping the
> new-buffer-semantics switch to "on".
>
> Sharing the plan just to see if everyone's on the same page. Not blocked
> on anything right now, but somewhere down the road we might want to
> deprecate at least the next_buffer() or exchange_buffer() RPC calls, as
> well as come up with the client API that nested servers or multimedia
> clients can use to manage their buffers.
>
> Thanks,
> Kevin
>
> ¹ Currently, we just let clients hold one buffer at a time. A few
> compelling cases (multimedia decoding, AV synchronization, nested latency
> optimizations, SET_BUFFER_COUNT from the Mali driver, and lp: #1369763)
> suggest that we should let advanced clients manage their own buffers.
>
> ²
> https://code.launchpad.net/~kdub/mir/bstream-integration-test/+merge/263014
>
>


