Mir on vmwgfx
thellstrom at vmware.com
Tue Nov 5 14:17:49 UTC 2013
On 11/05/2013 02:55 PM, Alexandros Frantzis wrote:
> On Tue, Nov 05, 2013 at 03:36:52AM -0800, Jakob Bornecrantz wrote:
> Hi Jakob!
>>> Note that the Mesa codebase we are using has some changes in the GBM code
>>> (experimental, not upstream yet). Notably:
>>> * we allow creation of "dumb" drm buffers of arbitrary size (not just
>>> 64x64) when using GBM_BO_USE_WRITE
>> There is no technical limit on this.
> Good to hear.
>>> * gbm buffers backed by a "dumb" DRM buffer also get a DRIimage
>> This will be a problem, at least to my knowledge DRIimages are backed
>> by a gallium resource/texture, in SVGA this is backed by a surface,
>> while dumb drm buffers would be backed by a dma-buffer (I think as
>> of right now vmwgfx does not support the dumb interface).
>> Taking a step back here, SVGA has two types of resources: a surface,
>> which is, in simplified terms, an opaque handle to a GL texture on the
>> host side and cannot be mapped; and dma-buffers, which are regions
>> of memory visible to both the guest and the host, used for transferring
>> data to and from surfaces.
>>> We have found that the major hardware drivers support these changes.
>>> Do they pose a problem for the VMware driver?
>> See above, and adding to that: if you are taking a software approach, we
>> would like the storage to be a dma-buffer and not backed by a surface
>> (to avoid unnecessary data transfers and resource overhead).
> The main use case for this is to allow direct surface pixel access for
> clients that either can't, or prefer not to, use GL to render, while
> still keeping compositing performant.
> In an early Mir server implementation, when the server had to deal with
> "dumb" buffers it just mmap-ed the pixel data and uploaded them to a
> texture with glTexImage2D(), so that the compositor could use them.
> However, it turned out that this was very slow. To speed things up, we
> added a backing DRIimage to the "dumb" gbm buffer, so that we could use
> glEGLImageTargetTexture2DOES() to populate the texture (as we do for
> non-dumb buffers), which is significantly faster.
I still don't quite understand how you get the pixel data from the dumb
buffer into the DRIimage.
On the glTexImage2D() approach, did you try ordinary shmem sharing
rather than a dumb GBM buffer?
It might have been that the GBM buffer was residing in uncached memory,
which makes reading painfully slow.
> If this is just a matter of reduced performance in the VMware driver for
> this use case, then perhaps we should wait to see if it's actually a
> problem before adding a special case for it in Mir. On the other hand,
> if it is a matter of complicating the VMware driver excessively, we can
> try to find a way to accommodate this elegantly in Mir. Would the
> first approach (mmap-ing the dumb buffer and using glTexImage2D()) be
> a better match for the VMware drivers?
In this case glTexImage2D() might be better, since GBM dumb buffers
reside in cached memory. However, it would be desirable to copy only the
data that has been dirtied by the client. Our X server driver uses this
approach.