SFX in future Ubuntu releases
Sean Middleditch
elanthis at awesomeplay.com
Wed Jun 14 22:04:47 BST 2006
> It seems that some people here need a review of desktop technologies.
Clearly.
> * Using the Composite extension of Xorg. This solution comes from
> Fedora and is supported by nVidia. This solution has a
> conceptual security hole.
This is a common misconception, stemming I think from the very poor "press
release" Red Hat put out about AIGLX.
First, neither Composite nor X.org was made by Fedora/Red Hat. Composite
and its sister X extensions were developed by Keith Packard and several
other X.org developers, most (all?) of whom are not Red Hat employees or
Fedora developers. You are possibly confusing Composite with AIGLX (they
are two *completely* different things). Even then you'd still have the
attribution wrong, because Red Hat's work on AIGLX is pretty minimal; it
mostly consists of polishing and finishing off work that others had
started. (IBM was one of the bigger contributors to the initial work.)
As far as a security hole goes... there is no security hole opened by
Composite. I don't think that's what you meant, but it isn't entirely
clear from your mail.
> * Using a brand new Xserver built on top of OpenGL. Called Xgl.
The only reason XGL works at all is because it uses Composite. Composite
is an extension that allows an X server to render application windows to
offscreen buffers and then allow a client compositing manager to
"composite" those buffers onto a final on-screen display. Whether you are
using stock X.org or XGL, either way you require a compositing manager
(i.e., Compiz or Metacity) and the Composite extension in the server.
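To make that concrete, here is a minimal sketch (mine, not anything out
of Compiz or Metacity) of the redirection step a compositing manager
relies on, using plain Xlib plus libXcomposite:

/* Sketch: ask the X server to keep window contents in offscreen storage,
 * which is the core trick the Composite extension provides.
 * Build with: cc redirect.c -lX11 -lXcomposite */
#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int event_base, error_base;
    if (!XCompositeQueryExtension(dpy, &event_base, &error_base)) {
        fprintf(stderr, "Composite extension not available\n");
        return 1;
    }

    /* Redirect every top-level window into offscreen buffers.  With
     * CompositeRedirectAutomatic the server keeps painting windows
     * normally; a real compositing manager uses CompositeRedirectManual
     * and paints everything itself. */
    XCompositeRedirectSubwindows(dpy, DefaultRootWindow(dpy),
                                 CompositeRedirectAutomatic);

    /* From here a compositing manager would fetch each window's
     * offscreen contents via XCompositeNameWindowPixmap(), watch Damage
     * events, and draw the result to the screen (with GL or otherwise). */
    XSync(dpy, False);
    pause();  /* hold the redirection open for the sake of the demo */

    XCloseDisplay(dpy);
    return 0;
}

That is all Composite itself gives you; everything else (the effects, the
GL texturing of window pixmaps) lives in the compositing manager.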
Xegl is going to require EGL, which for all practical purposes doesn't
exist. Aside from actually getting the specification written, one then
needs to write massive amounts of usermode and kernelmode code before
it'll be even close to stable and user-ready. There are really only
three theoretical advantages to EGL:
1) It allows a new driver architecture that doesn't suffer from the
"security hole" you referenced regarding giving the X server full access
to your hardware.
2) It allows all 2D rendering to be done using OpenGL.
3) It means driver developers only have to write OpenGL drivers, and not
also separate 2D drivers.
Those are all pretty much bunk:
1) Why write an all-new architecture for an all-new server when you could
just offer fixes to X.org, which requires *far* less work and change?
X.org developers are already working on this theoretical "security hole."
It's going to take some work from both the X.org and the kernel
developers, but it'll be less work than EGL would require.
2) You can do this now, theoretically. Why write a whole new X server
when you can just write a glitz backend for X.org? (Glitz is the library
used by Cairo for rendering over OpenGL, and also used by XGL to do all of
its 2D rendering.) No need for a whole new server or driver architecture.
This wasn't possible before because the X.org server couldn't directly
access the 3D hardware at the same time as client apps, but that's what
AIGLX fixed. So now a glitz driver inside of X.org is quite possible
(see the Cairo sketch after this list for the application-side view).
3) Actual driver developers disagree. One of the best arguments is that
the number of open source OpenGL drivers for sort-of-modern hardware is
about two: one for low-end integrated Intel chips and one for
yesteryear's ATI cards. Writing "only" the 3D driver isn't any easier
when the community generally isn't capable of writing a 3D driver at all
for most hardware. Assuming someone could and did write just an OpenGL
driver, that glitz backend would remove the need for a 2D component. All
that's left is the mode-setting components, which you'd need to write with
EGL anyhow.
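On point 2, the appeal of glitz is that Cairo applications don't have to
care which backend does the drawing; the same calls can target a plain
in-memory image surface or an OpenGL-accelerated glitz surface. A rough
sketch of the application-side view (stock Cairo API only; the glitz
variant would just swap the surface-creation call, assuming a Cairo
build with glitz support):

/* Sketch: backend-independent 2D drawing with Cairo.
 * Build with: cc demo.c $(pkg-config --cflags --libs cairo) */
#include <cairo.h>

int main(void)
{
    /* Software image surface.  An OpenGL-backed glitz surface would be
     * substituted here (e.g. via cairo_glitz_surface_create(), given a
     * glitz surface set up through its GLX glue, omitted here) without
     * touching any of the drawing code below. */
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
    cairo_t *cr = cairo_create(surface);

    /* Some simple 2D rendering: filled background plus a stroked circle. */
    cairo_set_source_rgb(cr, 0.1, 0.1, 0.4);
    cairo_paint(cr);

    cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
    cairo_set_line_width(cr, 4.0);
    cairo_arc(cr, 128.0, 128.0, 80.0, 0.0, 2 * 3.14159265);
    cairo_stroke(cr);

    cairo_surface_write_to_png(surface, "demo.png");

    cairo_destroy(cr);
    cairo_surface_destroy(surface);
    return 0;
}

The same principle is what a glitz backend inside the server would buy:
the 2D rendering requests stay the same, only the code that realizes
them gets routed through OpenGL.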
> A Google Summer of Code project mentored by Ubuntu consists of "Poorman
> Xgl" i.e. Xgl over software rendering.
Hopefully that work is done in such a manner that it's usable by X.org
proper. I imagine that this is largely Mesa performance work, right? Got
any references to this project? I didn't find anything on it at
http://code.google.com/soc
>
> So Ubuntu might explore Xgl, but we can't drop Xorg. Our solutions are :
>
> * Xorg + metacity
> * Xorg + compiz
> * Xorg + Xgl + compiz
Also:
* Xorg + Xgl + metacity
Although, really, why would you want to add in the Xgl component if things
clearly work without it?
And before anyone brings up the "X.org 7.1 w/ AIGLX breaks NVIDIA"
argument, note that since Xglx has to run on top of X.org, once Edgy
updates to X.org 7.1, Xglx is going to be just as broken until new NVIDIA
drivers come out that support X.org 7.1 fully.
I think it's pretty clear that I'm against putting XGL into Edgy in any
official capacity. Edgy might be about new directions and eye candy, but
I'd like to think that Edgy would be about new directions with a long-term
future.
--
Sean Middleditch <sean at awesomeplay.com>