screenshotting, screencasting & unity8+mir

Thomas Voß thomas.voss at
Tue Nov 26 13:15:52 UTC 2013

On Tue, Nov 26, 2013 at 1:31 PM, Daniel d'Andrada
<daniel.dandrada at> wrote:
> Hi,
> On November 26th I (Unity8), Kevin Gunn, Chris Gagnon (Autopilot) and
> Alexandros Frantzis (Mir) had a meeting on the requirements and
> implementation of screenshotting and screencasting.
> Chris told us that what Autopilot really wants is screencasting (not
> screenshots), as it records everything that happens during a test case and
> publishes the resulting video in case of failure.
> Kevin made the point that application developers already want
> screenshotting and screencasting, so the solution should also
> cater to them (as opposed to having a solution that works for Autopilot now,
> with other/third-party applications catered for only later).
> Given those requirements and the future Mir architecture (using the
> Qt scene graph), this is the implementation we agreed upon:
> Unity8 would provide a D-Bus service for clients (applications, Autopilot)
> to request a screencast or screenshot. For a screenshot, the
> requestor would just provide a filename (or directory) where the
> screenshot(s) should be placed and Unity8 would write it. For screencasts,
> Unity8 would provide a shared memory area to the requestor that would be
> kept up to date with the contents of the screen. Access to that shared
> memory area would be controlled with a semaphore and a mutex. The problem
> of handling codecs, recording, the accompanying audio stream, etc. is
> left to the requestor (i.e. an actual screencasting application or tool).
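
The shared-memory handoff described above could be sketched roughly like
this (a toy Python sketch with invented names, not the actual Unity8/Mir
interface; the real frames would come from the compositor, and the real
synchronisation primitive would be the agreed semaphore/mutex pair):

```python
# Illustrative sketch only: the compositor side writes each composited
# frame into a shared memory region, and the requestor copies it out
# under a lock, leaving encoding/recording entirely to the requestor.
from multiprocessing import Lock, shared_memory

WIDTH, HEIGHT, BPP = 4, 2, 4           # tiny pretend "screen", RGBA
FRAME_BYTES = WIDTH * HEIGHT * BPP

lock = Lock()                          # stands in for the semaphore/mutex
shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES)

def compositor_publish(frame: bytes) -> None:
    """Called by the compositor after each composition pass."""
    with lock:
        shm.buf[:FRAME_BYTES] = frame

def requestor_grab() -> bytes:
    """Called by the screencasting tool; copies the latest frame out."""
    with lock:
        return bytes(shm.buf[:FRAME_BYTES])

frame = bytes(range(FRAME_BYTES))      # fake frame data
compositor_publish(frame)
copy = requestor_grab()
assert copy == frame

shm.close()
shm.unlink()
```

The point of the copy-under-lock protocol is that the requestor never
observes a half-written frame, while the compositor is never blocked on
the requestor's (potentially slow) encoding step.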

Hmmm, how does the shared memory approach relate to HW-accelerated
codecs? Did we look into libstagefright to check the codecs'
requirements?

> D-Bus was chosen because access control to the screenshot/screencast feature
> can be easily implemented using D-Bus security policies. According to
> Alexandros, Mir probably won't have to be modified for
> screen[shotting|casting]. As a QQuickWindow will be controlling the
> composition now (qt-scenegraph approach), what would have to be done is
> calling glReadPixels right after the scene graph is rendered, which is
> essentially how "QImage QQuickWindow::grabWindow()" is implemented (if I
> understood its code correctly).

How would HW compositors be handled? Are we sure that we can execute a
glReadPixels at the right moment?

I think we should model screencasting and screenshotting as one
potential "post-process" step, right after the compositor has finished
one pass. That very interface could then be made available to
command-line tools via DBus.
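
That post-process step could be sketched like so (all names invented for
illustration; the real hook would live in the compositor's render loop,
with a D-Bus frontend registering and unregistering steps on behalf of
command-line tools):

```python
# Sketch: screenshot/screencast modeled as post-process steps that run
# right after each composition pass finishes.
from typing import Callable, List

Frame = bytes
PostProcessor = Callable[[Frame], None]

class Compositor:
    def __init__(self) -> None:
        self._post: List[PostProcessor] = []

    def add_post_processor(self, step: PostProcessor) -> None:
        """Register a step to run after every composition pass."""
        self._post.append(step)

    def compose_pass(self) -> Frame:
        frame = b"\x00" * 16            # pretend this is the rendered frame
        for step in self._post:         # screenshot/screencast hooks run here
            step(frame)
        return frame

grabbed: List[Frame] = []
comp = Compositor()
comp.add_post_processor(grabbed.append)  # a trivial "screencast" recorder
comp.compose_pass()
comp.compose_pass()
assert len(grabbed) == 2
```

Because the hook sits in the compositor rather than in Unity8, the same
mechanism could in principle also cover recording before a session exists
(boot-up, greeter), which is the use case raised below.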

One other scenario that might be of interest for the list of use cases:
the ability to screencast a complete boot-up | greeter | session |
shutdown sequence as one video. If only Unity8 knows about
screenshotting/screencasting, we cannot solve that use case.



> So that's where I'm heading with the implementation (I haven't started
> yet). If someone has a better idea, sees a problem with this approach, or
> otherwise has something to add, please advise.
> Feedback already received so far:
> Thomi Richards:
>  - Clarified that autopilot also wants screenshots.
>  - Asked for a command line tool that does all the work of talking to that
> proposed screencasting API and outputs an ogv.
> Michał Sawicz:
>  - "I'm not sold on the idea that it's unity8's / unity-mir's responsibility
> to provide that interface, though, as the ability to do those feels like a
> very generic need that should be built into the display server itself. Maybe
> the fact that we might be using Qt's scenegraph changes something here,
> though - but maybe that just means there's a new project growing on
> unity-mir's side?"
> --
> Mir-devel mailing list
> Mir-devel at
> Modify settings or unsubscribe at:
