Mir in a virtual machine

Thomas Hellstrom thellstrom at vmware.com
Sun Jan 26 12:13:52 UTC 2014

On 01/26/2014 12:10 PM, Alexandros Frantzis wrote:
> On Sat, Jan 25, 2014 at 02:04:29PM -0800, Rian Quinn wrote:
>> At the moment I am trying to better understand how buffers are used
>> and shared in Mir. I have spent the past couple of days doing nothing
>> but reading the Mir source code, the DRM source code, and portions of
>> the Mesa source code. Here is what I think I have learned so far,
>> which will lead me to a question:
>> - The server creates the buffers. It can create either a hardware
>> buffer, which is a gbm allocated “dumb” buffer, or a software buffer,
>> which is nothing more than shared memory (in RAM). 
> Using the mir_buffer_usage_hardware flag leads to the creation of a
> normal gbm buffer, i.e., one suited for direct usage by the GPU. This is
> *not* a gbm "dumb" buffer.
> You are correct that using mir_buffer_usage_software creates a shared
> memory (RAM) buffer.
>> - When a hardware buffer is created, it uses DRM prime
>> (drmPrimeHandleToFD) to create an FD for the “dumb" buffer. 
> Correct, but as noted above, it's not a gbm "dumb" buffer, but a normal
> gbm buffer.
>> - The server then provides the client with a “ClientBuffer” which is
>> an abstraction of the hardware / software buffer containing the
>> information about the buffer (like format, stride, etc…) and its FD.
>> - To draw into this buffer, you need to
>> call mir_surface_get_graphics_region which, through some indirection,
>> mmaps the buffer to a vaddr that can be written into.
> Not correct for hardware buffers, see below for more. This only works
> for software buffers, in which case the fd passed to the client is a
> shared memory fd which can be mapped. In the case of hardware buffers
> it's a "prime" fd which in general doesn't support sensible mapping of
> that kind.
>> If you look at the basic.c demo, it creates a hardware buffer, and if
>> you look at the fingerpaint.c demo, it creates a software buffer. If I
>> modify the basic.c demo to call mir_surface_get_graphics_region, it
>> fails on VMware, saying that it could not mmap the buffer. It works
>> fine if I change the basic.c to use a software buffer. 
>> Is this an issue with VMware? Or am I fundamentally not understanding
>> something about how hardware buffers are used? If I am… why would a
>> client use hardware buffers if it cannot map the buffer to use it?
> In general, buffers created for direct GPU usage cannot be reasonably
> mmap-ed and their pixels accessed directly. Even if the mapping
> operation itself is supported by the driver, the contents of the mapped
> area are usually not laid out in memory in a linear fashion (i.e. they
> have some sort of tiling), and therefore not useful for pixel
> manipulation.
> The only way to draw into a hardware buffer in Mir is to go through one
> of the "accelerated" APIs supported by Mesa EGL (e.g. OpenGL). The
> example at examples/scroll.c shows how to do that.
> Bottom line: the failure to mmap is expected for hardware buffers; it's
> not a VMware issue. The mir_surface_get_graphics_region call is only
> meaningful for Mir software buffers.
> Thanks,
> Alexandros

I guess the bottom line is that if you share accelerated buffers (which
you probably want for performance reasons) and you need to draw into them
with the CPU, you have to go through an accelerated API (GL, GLES?) and
use whatever upload functionality is available, depending on whether the
buffer is exposed as a texture or a renderbuffer: glTex(Sub)Image,
glDrawPixels, etc.

