Mir in a virtual machine
Daniel van Vugt
daniel.van.vugt at canonical.com
Tue Jan 28 01:35:14 UTC 2014
The distinction between "software" and "hardware" buffers in Mir is
confusing, I know, due to the vague names. What it actually means is:
mir_buffer_usage_software: Client draws individual pixels and the server
composites them (still in hardware). Presently the upload happens via
glTexImage2D, I know. :/
mir_buffer_usage_hardware: Client never gets access to individual
pixels and must render using OpenGL.
So if you want to draw the pixels yourself then you must use
mir_buffer_usage_software. If you feel glTexImage2D is too slow for
that, speeding it up would be a future enhancement to Mir, but not
something that's visible to the client.
To see the various implementations, just search for "bind_to_texture":
grep -r '::bind_to_texture' *
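As a rough sketch of what a software buffer amounts to: the client ends up with a plain shared-memory fd that it can mmap and scribble pixels into. The standalone C sketch below mimics that flow with a POSIX memfd; it involves no Mir code at all, and the function names are invented for illustration, not part of the Mir client API:

```c
/* Illustrative analogue of mir_buffer_usage_software (NOT Mir code):
 * a shared-memory fd holding w*h 32-bit ARGB pixels that the "client"
 * maps and writes directly. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create an anonymous shared-memory "buffer", roughly what a server
 * could hand a client for a software surface. Returns the fd or -1. */
int create_software_buffer_fd(int width, int height)
{
    int fd = memfd_create("sw-buffer", 0);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, (off_t)width * height * 4) < 0)
    {
        close(fd);
        return -1;
    }
    return fd;
}

/* Map the buffer and fill it with one colour, the way a client draws
 * into the vaddr it gets for a software buffer. Returns the mapping
 * or NULL on failure. */
uint32_t *map_and_fill(int fd, int width, int height, uint32_t argb)
{
    size_t len = (size_t)width * height * 4;
    uint32_t *vaddr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
    if (vaddr == MAP_FAILED)
        return NULL;
    for (int i = 0; i < width * height; i++)
        vaddr[i] = argb;
    return vaddr;
}
```

Because the fd refers to ordinary RAM, any other process holding it can mmap the same pages and see the pixels, which is the property the server relies on when it uploads the client's pixels for compositing.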
On 27/01/14 04:17, Rian Quinn wrote:
> I went back and re-read the gbm code, and noticed that it only creates
> the dumb buffer if you set the GBM_BO_USE_WRITE flag, which means that
> you can use the gbm_bo_write command which takes a char*. So that makes
> sense to me now. What I guess I was looking for, was something more like
> DirectDraw, where a client could get access to device memory and
> render directly into it for 2D.
> In my case I’m not really interested in 3D, but instead my clients are
> rendering using the VGA spec, 2D pixel data. I’m trying to figure out
> the best way to reduce the number of memcpys. At the moment, I am going
> to assume that I need to use a software buffer for the clients, and the
> server will copy the pixel data to the scanout buffer.
> That being the case, what is the best method for transferring this ARGB
> pixel data from shared RAM, onto a scanout buffer? From what I
> understand, glTexSubImage2D does a memcpy via the CPU. Ideally, I would
> prefer this transfer to be done via DMA and not the CPU. Keep in mind
> that each client is rendering as a VGA device, so I don’t really have
> control of “when” the client decides to modify its char*.
> After reading a lot of the compositor code in Mir, I so far have not
> been able to locate how this is done for a software buffer vs. a
> hardware buffer.
> Also, is there an example of how someone would use Mir to composite
> multiple clients? From what I can tell, all of the server examples just
> call run_mir, and don’t really do anything.
> - Rian
>> On Jan 26, 2014, at 6:10 AM, Alexandros Frantzis
>> <alexandros.frantzis at canonical.com> wrote:
>> On Sat, Jan 25, 2014 at 02:04:29PM -0800, Rian Quinn wrote:
>>> At the moment I am trying to better understand how buffers are used
>>> and shared in Mir. I have spent the past couple of days doing nothing
>>> but reading the Mir source code, the DRM source code, and portions of
>>> the Mesa source code. Here is what I think I have learned so far,
>>> which will lead me to a question:
>>> - The server creates the buffers. It can create either a hardware
>>> buffer, which is a gbm allocated “dumb” buffer, or a software buffer,
>>> which is nothing more than shared memory (in RAM).
>> Using the mir_buffer_usage_hardware flag leads to the creation of a
>> normal gbm buffer, i.e., one suited for direct usage by the GPU. This is
>> *not* a gbm "dumb" buffer.
>> You are correct that using mir_buffer_usage_software creates a shared
>> memory (RAM) buffer.
>>> - When a hardware buffer is created, it uses DRM prime
>>> (drmPrimeHandleToFD) to create an FD for the “dumb” buffer.
>> Correct, but as noted above, it's not a gbm "dumb" buffer, but a normal
>> gbm buffer.
>>> - The server then provides the client with a “ClientBuffer” which is
>>> an abstraction of the hardware / software buffer containing the
>>> information about the buffer (like format, stride, etc…) and its FD.
>>> - To draw into this buffer, you need to
>>> call mir_surface_get_graphics_region which, through some indirection,
>>> mmaps the buffer to a vaddr that can be written into.
>> Not correct for hardware buffers, see below for more. This only works
>> for software buffers, in which case the fd passed to the client is a
>> shared memory fd which can be mapped. In the case of hardware buffers
>> it's a "prime" fd which in general doesn't support sensible mapping of
>> that kind.
>>> If you look at the basic.c demo, it creates a hardware buffer, and if
>>> you look at the fingerpaint.c demo, it creates a software buffer. If I
>>> modify the basic.c demo to call mir_surface_get_graphics_region, it
>>> fails on VMWare, saying that it could not mmap the buffer. It works
>>> fine if I change basic.c to use a software buffer.
>>> Is this an issue with VMWare? Or am I fundamentally not understanding
>>> something about how hardware buffers are used? If I am… why would a
>>> client use hardware buffers if it cannot map the buffer to use it?
>> In general, buffers created for direct GPU usage cannot be reasonably
>> mmap-ed and their pixels accessed directly. Even if the mapping
>> operation itself is supported by the driver, the contents of the mapped
>> area are usually not laid out in memory in a linear fashion (i.e. they
>> have some sort of tiling), and are therefore not useful for direct
>> pixel access.
>> The only way to draw into a hardware buffer in Mir is to go through one
>> of the "accelerated" APIs supported by Mesa EGL (e.g. OpenGL). The
>> example at examples/scroll.c shows how to do that.
>> Bottom line: the failure to mmap is expected for hardware buffers, it's
>> not a VMWare issue; the mir_surface_get_graphics_region call is only
>> meaningful for Mir software buffers.