n-buffering and client behaviour

Kevin Gunn kevin.gunn at canonical.com
Wed Apr 24 14:50:31 UTC 2013


+1 on the client having exactly one buffer

First, on the IPC hit: at least on mobile, it was in the noise. For the
typical screen sizes/renders we're dealing with, memory bandwidth is going
to be the limiting factor (assuming it's simple pixel pushing).
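(For a rough sense of scale rather than a measurement: a 1280x720 RGBA
buffer is 1280 x 720 x 4 bytes, about 3.7 MB, so writing it and then
compositing it at 60 fps is already several hundred MB/s of memory traffic,
against one small protocol round trip per frame on the IPC side.)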

Secondly, it would seem to me that keeping buffer tracking on the server
side would make it easier to transition in and out of double vs triple
buffering (again, an experience from the mobile world). Whereas if you let
clients hold more than one buffer, they might start looking at addresses
and tracking them, creating problems if the server suddenly brings a third
buffer into (or out of) the picture.
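
For illustration only (names invented here, not Mir's actual classes), a
minimal sketch of a server-owned pool that can change depth without the
client ever noticing:

// Minimal sketch, not Mir code: the server owns every buffer, so switching
// between double and triple buffering is a purely server-local decision;
// the client only ever holds the single buffer it was last handed.
#include <deque>
#include <memory>
#include <mutex>

struct Buffer { /* GPU allocation handle, stride, format, ... */ };

class BufferPool
{
public:
    explicit BufferPool(unsigned depth) { set_depth(depth); }

    // 2 = double buffering, 3 = triple buffering; callable at runtime.
    void set_depth(unsigned depth)
    {
        std::lock_guard<std::mutex> lock{mutex};
        target_depth = depth;
        while (allocated < target_depth)
        {
            idle.push_back(std::make_shared<Buffer>());
            ++allocated;
        }
        // Shrinking happens lazily in release(): surplus buffers are dropped
        // as they come back instead of being recycled.
    }

    // Called whenever a buffer comes back from the client or compositor.
    void release(std::shared_ptr<Buffer> buffer)
    {
        std::lock_guard<std::mutex> lock{mutex};
        if (allocated > target_depth)
            --allocated;                    // let this buffer die here
        else
            idle.push_back(std::move(buffer));
    }

private:
    std::mutex mutex;
    std::deque<std::shared_ptr<Buffer>> idle;
    unsigned allocated = 0;
    unsigned target_depth = 0;
};

The client's view never changes: it hands one buffer back and gets one
buffer out, whatever the pool depth happens to be at that moment.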

br,kg

On 04/23/2013 08:32 PM, Daniel van Vugt wrote:
> That all said, it's a gamble of GPU resources to commit to 4 or more
> buffers. There would likely be no pay-off most of the time, but
> guaranteed more megabytes of graphics memory used. Triple might be
> more sensible as a default. And if we can configure "N" somewhere in
> the system, that's a bonus.
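(Rough numbers for illustration: a 1920x1080 RGBA buffer is 1920 x 1080 x 4
bytes, about 8.3 MB, so every extra buffer in the chain costs roughly that
much graphics memory per fullscreen surface.)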
>
>
> On 24/04/13 09:24, Daniel van Vugt wrote:
>> I think the driving case for N-buffering would be to smooth out and
>> avoid missed frame deadlines.
>>
>> If the server has 1, the client has 1 or 2, and the pipeline has 1+
>> ready for the server, then it's less likely that a delay/hiccup in the
>> client will cause a skipped frame on the actual server, provided the
>> client can recover and catch up within a frame or two.
>>
>> Though you don't really want to push it much past 4 buffers, because the
>> lag (most noticeable against the hardware cursor) will become visible.
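(Rough arithmetic: at 60 Hz each queued buffer adds about 16.7 ms, so a
4-deep pipeline can leave rendered content up to ~67 ms behind the hardware
cursor, which is easily enough to notice.)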
>>
>>
>> On 24/04/13 01:46, Kevin DuBois wrote:
>>> On 04/22/2013 11:18 PM, Christopher James Halse Rogers wrote:
>>>> I dislike that <shift>+<enter> is "send".
>>>>
>>>> On Tue, 2013-04-23 at 15:41 +1000, Christopher James Halse Rogers
>>>> wrote:
>>>>> Hey all.
>>>>>
>>>>> While I'm shepherding various Mesa patches upstream
>>>> … I'll use the time in-between review cycles to implement triple
>>>> buffering in order to implement eglSwapInterval(0), so that our
>>>> benchmarks are less useless.
>>>>
>>>> There are two broad approaches here: the client always has exactly one
>>>> buffer, or the client library potentially has more than one buffer.
>>>>
>>>> In the former the server sends a single buffer on surface creation and
>>>> in response to each next_buffer() request, but internally keeps
>>>> n buffers available and coordinates handing off buffers to the
>>>> compositor component and the client library. The server is responsible
>>>> for determining whether next_buffer() should block or not.
>>>>
>>>> In the latter case the server hands out two buffers on surface
>>>> creation
>>>> and a single buffer in response to next_buffer(). The client library
>>>> then determines whether next_buffer() blocks.
>>>>
>>>> The latter case allows eglSwapInterval(0) to avoid an IPC round trip
>>>> for each frame, which will result in higher benchmark numbers; but for
>>>> regular clients the IPC overhead should not be anywhere near the same
>>>> proportion of rendering time, so IPC-per-frame might generate more
>>>> realistic numbers.
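
As an illustration of the former approach (names invented, not Mir's real
classes), a minimal sketch in which the server owns all n buffers and the
reply to next_buffer() is the only place blocking can happen:

// Minimal sketch of the "client always holds exactly one buffer" model.
// Everything here runs in the server; the client only sees next_buffer().
#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>

struct Buffer {};

class SwapperInServer
{
public:
    explicit SwapperInServer(unsigned nbuffers)
    {
        for (unsigned i = 0; i != nbuffers; ++i)
            free_queue.push_back(std::make_shared<Buffer>());
    }

    // Handles the client's next_buffer() request: queue the frame it just
    // finished and delay the reply until a free buffer exists. Whether this
    // blocks (and for how long) is entirely the server's call.
    std::shared_ptr<Buffer> next_buffer(std::shared_ptr<Buffer> finished)
    {
        std::unique_lock<std::mutex> lock{mutex};
        if (finished)
        {
            ready_queue.push_back(std::move(finished));
            cv.notify_all();
        }
        cv.wait(lock, [this] { return !free_queue.empty(); });
        auto next = free_queue.front();
        free_queue.pop_front();
        return next;
    }

    // Compositor thread: take the oldest completed frame to display.
    std::shared_ptr<Buffer> compositor_acquire()
    {
        std::unique_lock<std::mutex> lock{mutex};
        cv.wait(lock, [this] { return !ready_queue.empty(); });
        auto buffer = ready_queue.front();
        ready_queue.pop_front();
        return buffer;
    }

    // Compositor thread: buffer is no longer scanned out, recycle it.
    void compositor_release(std::shared_ptr<Buffer> buffer)
    {
        std::lock_guard<std::mutex> lock{mutex};
        free_queue.push_back(std::move(buffer));
        cv.notify_all();
    }

private:
    std::mutex mutex;
    std::condition_variable cv;
    std::deque<std::shared_ptr<Buffer>> free_queue;   // idle, server-side
    std::deque<std::shared_ptr<Buffer>> ready_queue;  // finished, awaiting display
};

On surface creation the server would hand out the first buffer through the
same path (e.g. next_buffer(nullptr) internally), and swap-interval policy
then reduces to how long the server chooses to sit in that wait.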
>>> I am also less concerned about IPC-per-frame because, like you, I think
>>> the rendering time (or the effect of composition bypass) will outweigh
>>> the IPC-per-frame cost.
>>>>
>>>> I'm therefore leaning towards the former approach - the client always
>>>> has exactly one buffer, and needs to round-trip to the server each
>>>> frame, even with eglSwapInterval(0).
>>>>
>>>> Thoughts?
>>>>
>>>>
>>> I know you're aware of this, but just to remind others on the list that
>>> we're talking about logical ownership, not what is actually mmap-ed at
>>> any one time, because of Android requirements.
>>>
>>> Our swapper currently implements triple buffering (although there's no
>>> option to force triple buffering, other than changing a constructor and
>>> recompiling). If it's not working currently, it's more a matter of fixing
>>> and adding an additional option than of implementing. This uses the
>>> model that the client always logically owns one buffer, so I think that
>>> we are in agreement that 'the former approach' is the one we like
>>> better. I don't like the idea of having the client provide some
>>> synchronization because it spreads the synchronization out across the
>>> IPC boundary. Given how tricky it can be to diagnose graphical glitches
>>> that pop up because of bad sync, having just one nugget of code in the
>>> server that provides us with sync is a great win.
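
For contrast, a minimal sketch (again with invented names, not Mir's real
API) of what the client library reduces to when the one nugget of sync
lives in the server: one buffer held at a time, one blocking round trip per
swap, and no synchronization logic on the client side of the IPC boundary:

// Minimal sketch of the client library when the server does all the sync.
// rpc_next_buffer() is a stand-in for the real protocol call; it is stubbed
// out here only so the sketch is self-contained.
#include <memory>

struct Buffer {};

std::shared_ptr<Buffer> rpc_next_buffer(std::shared_ptr<Buffer> /*finished*/)
{
    // Real code would send next_buffer() to the server and block on the reply.
    return std::make_shared<Buffer>();
}

class ClientSurface
{
public:
    Buffer* current_buffer() { return buffer.get(); }

    // Called from eglSwapBuffers(): return the finished frame and block until
    // the server hands back the next one. Swap-interval policy lives entirely
    // on the server side.
    void swap_buffers()
    {
        buffer = rpc_next_buffer(std::move(buffer));
    }

private:
    std::shared_ptr<Buffer> buffer = rpc_next_buffer(nullptr);
};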
>>>
>>> N-buffering is interesting. :) We disable it in the code because we
>>> really haven't had a driving case that requires it. From a quick bout of
>>> thinking, I think that N-buffering makes the most sense when the client
>>> /requires/ that it logically owns more than one buffer. Like, if a client
>>> requires 2 buffers at the same time, we could keep a full pipeline with
>>> 1 buffer owned by the compositor, 1 in reserve, and 2 in client
>>> ownership (quad-buffering).
>>>
>>> I think that we can coordinate the client owning more than one buffer
>>> without any client sync for it to work, but let's wait until we have a
>>> driving case (I'm thinking of the Mali cores, which I've heard really
>>> like this) to work through those details.
>>>
>>> Cheers,
>>> Kevin
>>>
>>
>



