New Buffer Semantics Planning
Christopher James Halse Rogers
chris at cooperteam.net
Fri Jun 26 03:39:41 UTC 2015
On Fri, Jun 26, 2015 at 12:39 PM, Daniel van Vugt
<daniel.van.vugt at canonical.com> wrote:
> I'm curious (but not yet concerned) about how the new plan will deal
> with the transitions we have between 2-3-4 buffers, which are neatly
> self-contained in the single BufferQueue class right now. Although, as
> some responsibilities clearly live on one side and not the other,
> maybe things could become conceptually simpler if we manage them
> carefully:
>
> framedropping: Always implemented in the client process as a
> non-blocking acquire. The server just receives new buffers quicker
> than usual and needs the smarts to deal with (skip) a high rate of
> incoming buffers [1].
Clients will need to tell the server at submit_buffer time whether or
not this buffer should replace the other buffers in the queue.
Different clients will need different behaviour here - the obvious case
being a video player that wants to dump a whole bunch of time-stamped
buffers on the compositor at once and then go to sleep for a while.
But in general, yes. The client acquires a bunch of buffers and cycles
through them.
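
Roughly the kind of thing I mean, client-side; SubmissionHint and the
extra parameters are illustrative only, not a settled API:

#include <chrono>

// Opaque handles standing in for the real client-side types.
struct MirBufferStream;
struct MirBuffer;

// Per-submission hint telling the server how this buffer relates to
// buffers already queued for the surface.
enum class SubmissionHint
{
    replace_queued,    // framedropping: supersede anything not yet composited
    append_to_queue    // e.g. video: queue behind earlier, timestamped buffers
};

// Hypothetical client-side entry point; the real protocol message would
// carry the buffer id, the hint, and (optionally) a target presentation time.
void submit_buffer(MirBufferStream* stream,
                   MirBuffer* buffer,
                   SubmissionHint hint,
                   std::chrono::nanoseconds target_presentation_time);

A video player would then submit a batch of timestamped buffers with
append_to_queue and go back to sleep, while a game would submit each
frame with replace_queued.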
> bypass/overlays: Always implemented in the server process,
> invisible to the client. The server just can't enable those code
> paths until at least two buffers have been received for a surface.
I don't think that's the case? Why does the server need two buffers in
order to overlay? Even with a single buffer the server always has a
buffer available¹.
It won't be entirely invisible to the client; we'll probably need to
ask the client to reallocate buffers when overlay state changes, at
least sometimes.
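
To sketch what I mean (placeholder interfaces, not the real
Renderable/DisplayBuffer classes):

#include <memory>

struct Buffer;

struct Surface
{
    virtual std::shared_ptr<Buffer> current_buffer() = 0;
    virtual void request_buffer_reallocation() = 0;   // hypothetical
    virtual ~Surface() = default;
};

struct Output
{
    virtual bool can_overlay(Buffer const&) = 0;
    virtual bool can_texture_from(Buffer const&) = 0;
    virtual void set_overlay(Surface&, std::shared_ptr<Buffer> const&) = 0;
    virtual void composite_with_gl(Surface&, std::shared_ptr<Buffer> const&) = 0;
    virtual ~Output() = default;
};

// Even a single-buffered surface can be promoted to an overlay: the
// server always has *a* buffer to hand¹.
void assign_plane(Output& output, Surface& surface)
{
    auto const buffer = surface.current_buffer();

    if (output.can_overlay(*buffer))
    {
        output.set_overlay(surface, buffer);
        return;
    }

    // Falling back to GL composition. If the buffer the client allocated
    // for the overlay path can't be textured from, we have to ask the
    // client to reallocate before we can show the next frame.
    if (!output.can_texture_from(*buffer))
        surface.request_buffer_reallocation();
    else
        output.composite_with_gl(surface, buffer);
}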
> client wake-up: Regardless of the model/mode in place, the client
> would be woken up at the physical display rate by the server if it's
> had a buffer consumed (but not woken otherwise). More frequent
> wake-ups for framedropping are the responsibility of libmirclient
> itself and don't require the server to do anything different.
By and large, clients will be woken up by EGL when the relevant fence
is triggered.
I don't think libmirclient will have any role in waking the client.
Unless maybe we want to mess around with
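
For illustration, the fence-based wake-up looks something like this from
the client's side, assuming the EGL_KHR_fence_sync and
EGL_ANDROID_native_fence_sync extensions; in practice the driver does
this internally, so neither the application nor libmirclient has to call
it explicitly:

#include <EGL/egl.h>
#include <EGL/eglext.h>

void wait_until_buffer_is_reusable(EGLDisplay dpy, int release_fence_fd)
{
    auto const create_sync = reinterpret_cast<PFNEGLCREATESYNCKHRPROC>(
        eglGetProcAddress("eglCreateSyncKHR"));
    auto const client_wait = reinterpret_cast<PFNEGLCLIENTWAITSYNCKHRPROC>(
        eglGetProcAddress("eglClientWaitSyncKHR"));
    auto const destroy_sync = reinterpret_cast<PFNEGLDESTROYSYNCKHRPROC>(
        eglGetProcAddress("eglDestroySyncKHR"));

    // Wrap the fence fd that came back with the buffer-release event...
    EGLint const attribs[] = {
        EGL_SYNC_NATIVE_FENCE_FD_ANDROID, release_fence_fd,
        EGL_NONE
    };
    EGLSyncKHR fence = create_sync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, attribs);

    // ...and sleep until the compositor has actually finished with the
    // buffer. Only then does the client wake and start rendering again.
    client_wait(dpy, fence, 0, EGL_FOREVER_KHR);
    destroy_sync(dpy, fence);
}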
> [1] Idea: If the server skipped/dropped _all_ but the newest buffer
> it has for each surface on every composite(), then that would
> eliminate buffer lag and solve the problem of how to replace dynamic
> double buffering. Client processes would still only be woken up at
> the display rate, so vsync-locked animations would not speed up
> unnecessarily. Everyone wins -- minimal lag and maximal smoothness.
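
For what it's worth, the behaviour [1] describes would look roughly like
this on the server side (made-up types, just to pin down the semantics):

#include <deque>
#include <memory>

struct Buffer;

class SurfaceBuffers
{
public:
    void submit(std::shared_ptr<Buffer> b) { queued.push_back(std::move(b)); }

    std::shared_ptr<Buffer> buffer_for_compositing()
    {
        // Drop everything except the newest submission...
        while (queued.size() > 1)
        {
            release_to_client(queued.front());
            queued.pop_front();
        }
        // ...then make it current, releasing whatever we showed last time.
        if (!queued.empty())
        {
            if (current)
                release_to_client(current);
            current = queued.front();
            queued.pop_front();
        }
        return current;  // repeats the previous frame if nothing new arrived
    }

private:
    void release_to_client(std::shared_ptr<Buffer> const&) { /* send release event */ }

    std::deque<std::shared_ptr<Buffer>> queued;
    std::shared_ptr<Buffer> current;
};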
¹: The assumption here is that a buffer can be simultaneously scanned
out from and textured from. I *think* that's a reasonable assumption,
and in the cases where I know it doesn't apply, having multiple buffers
doesn't help, because it's the buffer *format* that can only be scanned
out from, not textured from.
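
As an aside, on GBM-based platforms that property is queryable at
allocation time, e.g.:

#include <gbm.h>

// True if buffers of this format can be placed on a scanout plane *and*
// used with GL, i.e. the format works for both bypass and composition.
bool usable_for_bypass_and_compositing(gbm_device* gbm, uint32_t format)
{
    return gbm_device_is_format_supported(
        gbm, format, GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
}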