Mir on vmwgfx
alexandros.frantzis at canonical.com
Tue Nov 5 10:04:06 UTC 2013
On Tue, Nov 05, 2013 at 08:22:28AM +0100, Thomas Hellstrom wrote:
> I'm new to this list and I'm trying to get Mir running in a VMware
> virtual machine on top of the vmwgfx driver stack.
> The idea is to first get "mir_demo_server_basic" running with demo
> clients and then move on to Xmir, patching up our drivers as needed.
Hi Thomas, thanks for looking into this. Feel free to also join
#ubuntu-mir on freenode if you need more direct information.
> So far, I've encountered a couple of issues that might need
> attention from MIR developers:
> 1) function mggh::DRMHelper::is_appropriate_device() in
> gbm_display_helpers.c checks whether a drm device has any children
> except itself. This is not true for vmwgfx, and the server will fail
> to start thinking that our drm device is not appropriate. Why the
> child requirement?
Will take a deeper look, probably an arbitrary requirement based
on what major hardware drivers expose.
> 2) Once I get the basic server to start, the cursor disappears as
> soon as I move the mouse. This boils down to Mir thinking that the
> cursor is outside of the current mode bounding box. At Mir server
> startup, there is no KMS setup configured, hence
> DisplayConfigurationOutput::current_mode_index will be set to max
> (or -1) in mgg::RealKMSDisplayConfiguration::add_or_update_output().
> The value of DisplayConfigurationOutput::current_mode_index then
> doesn't seem to change even when Mir sets a display configuration,
> and when the mode bounding box is calculated, an out of bounds array
> access is performed.
Will take a deeper look.
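For reference, the out-of-bounds pattern described above can be sketched as follows. This is a hypothetical, simplified illustration (the names `struct mode` and `current_mode` are made up; the real logic lives in mgg::RealKMSDisplayConfiguration): when no KMS mode is configured, the index holds a sentinel (max value), and indexing the mode array with it is invalid unless guarded.

```c
#include <stddef.h>

struct mode { int width, height; };

/* Hypothetical sketch: current_mode_index is initialized to a
 * sentinel ((size_t)-1, i.e. max) when no KMS setup exists yet.
 * Without a bounds check, modes[current_mode_index] is an
 * out-of-bounds access; the guard below returns NULL instead. */
static const struct mode *current_mode(const struct mode *modes,
                                       size_t num_modes,
                                       size_t current_mode_index)
{
    if (current_mode_index >= num_modes)
        return NULL; /* no mode configured yet */
    return &modes[current_mode_index];
}
```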
> 3) Minor thing: The "Virtual" connector type is not recognized by
> Mir. (actually it's not in xf86drmMode.h either, I'll see if I can
> fix that up), but it's in the kernel user-space api file
> "drm_mode.h" and is right after the "eDP" connector type. Should be
> added in connector_type_name() in real_kms_output.cpp
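A sketch of what that mapping could look like, loosely modeled on a connector-name table (the table layout here is illustrative, not Mir's actual code): the DRM connector type values are sequential in drm_mode.h, with Virtual (15) immediately after eDP (14).

```c
#include <stdint.h>

/* Illustrative connector-type-to-name table. The indices match the
 * DRM_MODE_CONNECTOR_* values from the kernel's drm_mode.h, where
 * "Virtual" (15) follows "eDP" (14). */
static const char *connector_type_name(uint32_t type)
{
    static const char *const names[] = {
        "unknown", "VGA", "DVII", "DVID", "DVIA", "Composite",
        "SVIDEO", "LVDS", "Component", "9PinDIN", "DisplayPort",
        "HDMIA", "HDMIB", "TV", "eDP", "Virtual",
    };
    if (type >= sizeof(names) / sizeof(names[0]))
        return "unknown";
    return names[type];
}
```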
> 4) vmwgfx does not yet implement the drm "Prime" mechanism for
> sharing of dma buffers, which Mir relies on. I'm about to implement
> that. However, it seems like Mir is using dma buffers in an illegal way:
> 1) Mir creates a GBM buffer.
> 2) Mir uses Prime to export a dma_buf handle which it shares with
> its clients.
> 3) The client imports the dma_buf handle and uses drm to turn it
> into a drm buffer handle.
> 4) The buffer handle is typecast to a "dumb" buffer handle, and then
> mmap'ed, in struct GBMMemoryRegion : mcl::MemoryRegion.
> It's illegal to typecast a GBM buffer to a dumb buffer in this way.
> It accidentally happens to work on the major drivers because, deep
> inside, both a GBM buffer and a dumb buffer are represented by a GEM
> buffer object. With vmwgfx that's not the case for either a GBM
> buffer or a dumb buffer; they are different objects.
This code path (i.e. mmap-ing the buffer on the client side) is only
valid when the client has requested a "software" buffer, which on the
server side leads to the creation of a "dumb" DRM buffer (i.e., with
DRM_IOCTL_MODE_CREATE_DUMB). Although mmap-ing non-dumb buffers doesn't
fail per se with the major hardware drivers, the returned pixel data
usually has a non-linear layout (e.g. some sort of tiling), so it's not
really usable for our purpose.
Note that the Mesa codebase we are using has some changes in the GBM code
(experimental, not upstream yet). Notably:
* we allow creation of "dumb" drm buffers of arbitrary size (not just 64x64)
when using GBM_BO_USE_WRITE
* gbm buffers backed by a "dumb" DRM buffer also get a DRIimage
We have found that the major hardware drivers support these changes.
Do they pose a problem for the VMware driver?
See http://github.com/RAOF/mesa/ for the changed code base. It's still
based on the Mesa version shipped in Ubuntu 13.10, but we plan to rebase
on a more recent version.