Congrats on karmic, looking forward lucid
bryce at canonical.com
Tue Nov 24 01:39:58 GMT 2009
On Wed, Nov 04, 2009 at 03:26:38PM +0100, Martin Pitt wrote:
> Sebastien Bacher [2009-11-02 14:45 +0100]:
> > Karmic has seen a lot of technology changes (new gdm codebase,
> > pulseaudio required by GNOME, empathy by default, devicekit-disks and
> > devicekit-power used in GNOME, etc.), so it was expected it would have
> > some rough edges.
> Indeed, with so many things that changed, it's a miracle that it works
> in the first place. In retrospect I think we should have announced it
> differently, as a "tech preview" release, and not recommended that
> everyone upgrade.
It seems this was a smooth release for some, and a hairy release for
others. Of about 8 systems I upgraded, I had one (my laptop) which
succumbed to rather severe issues and had to be wiped and reinstalled
from scratch, but the other systems upgraded cleanly.
That said, we are definitely seeing a surge of bug reports against X.org
in recent weeks:
(It literally went off the chart last week. I need to expand the chart...)
There were some X breakages caused by the gdm rewrite, and some issues
relating to dkms and upstart. However, this breakage was not so much a
matter of introducing new bugs as of making existing bugs worse. I wish
we'd had more time for testing so we could have smoothed this out better
for people, but I think we've got SRUs in for all the worst aspects of it.
Anyway, I *think* this increased rate of bug reports is not due to a
specific breakage or any reduction in quality in X.org but rather just
due to increased numbers of users or at least increased numbers of users
sending bug reports. That said, the increased quantity of reports poses
a workload issue that troubles me. More on this below.
One suggestion I think might be good: for releases like Karmic, where we
feel things are a bit more ambitious technologically, make update-manager
hold off on recommending that users upgrade for a few weeks.
This would give time for SRUs to make their way through the system for
critical issues people run into. In fact, this might even be a good idea
for the LTS. Anyway, just wanted to toss out this as an idea.
> > - let's not spend too much time on backporting changes to karmic, using
> > the next 2 weeks to look at the user feedback, do stable updates to fix
> > the most annoying issues until UDS and then move our focus to Lucid
> Full ack.
Fwiw, I pretty much ended up spending 100% of my time between release
and UDS on SRU bugs (mainly for -nvidia), and still there are a number
that could be put in. But given the limited manpower at hand, I'm
turning my attention now to Lucid, since there is much needing to be
done there.
> > - try to be conservative in the changes which will land in lucid; GNOME
> > upstream will likely rework some components with GNOME 3 in mind, and
> > other teams or upstreams will probably keep working on changes to improve
> > the user experience. While the work they do is great, it would probably
> > be good to wait until lucid+1 to bring those into the default installation.
> > I expect we will have some of those discussions at UDS too
> I just created the initial list of blueprints for lucid which were
> There are some new features there, but a lot of them are
> "review"/"cleanup" type of work as well. We might have to drop some
> goals, though (the low/medium ones) in favor of working on bug fixes.
I would love to be conservative with X this time through.
Unfortunately, we've gotten a number of requests for tasks which are
going to require pretty major changes to X infrastructure. HAL is to be
dropped (which may regress wacom and/or other input devices). We are
switching to KMS for radeon (which will likely incur regressions for ATI
owners). We will be rearchitecting the way -nvidia and -fglrx are
installed; this will solve many long standing issues, but introduces
risk. We are moving from -nv to -nouveau+KMS (which will fix some
longstanding issues but may introduce new ones for Nvidia owners). The
boot performance team has requested several patches which improve boot
performance but may expose regressions in less well tested areas.
Anyway, all worthwhile changes, although to call it conservative might
be a pretty big stretch. ;-)
More importantly, the manpower this work will need takes away from
stabilization work needed generally in order to get that X.org bug curve
leveled out. IOW, "Help!"
> That sounds like a very good idea. In the ancient past we had a
> "laptop testing team", perhaps we should try to build a "desktop
> testing team", where each of the members "adopts" their favourite bit
> of the desktop (be that music player handling, photo import, scanning,
> beamer support, or whatnot)?
I think this could be a good idea, but it depends on how such a team is
structured.
The way such testing has been traditionally done in Ubuntu is, someone
defines a broad area to test, then puts out a call to the community to
test out some functionality. Invariably, when you test a wide breadth of
hardware, issues are discovered, and all of these are dutifully filed as
bug reports. But these are added to an already overly-full pile of
existing reports.
A different approach to structuring such a team would focus not on
having a lot of people, but on a smaller number of people who commit to
running the same sequence of test steps, say, once a week.
The first time they run through the test steps sets up a baseline. Say
out of 100 steps, 50 succeed. Now, each week they repeat the steps and
measure the number of successes. Hopefully we should see the
measurements trend upward. If the number drops, *then* we need bug
reports, and these need to be given high priority, since they represent
regressions.
But the important thing is that we developers know that the tests will
be run repeatedly every week, so we have a measurable way to chart our
progress and a reliable third-party to verify the success.
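To make the idea concrete, here is a minimal sketch (hypothetical step
names and helper functions, not an existing tool) of how the weekly
baseline could be tracked: each run is a checklist of pass/fail results,
and a step that passed last week but fails now is a regression worth a
high-priority bug report.

```python
# Sketch of the weekly-baseline testing idea: track pass counts per run
# and flag steps that regress relative to the previous run. All names
# here are illustrative, not an existing QA tool.
from typing import Dict, List


def weekly_summary(runs: List[Dict[str, bool]]) -> List[int]:
    """Number of passing steps in each weekly run."""
    return [sum(run.values()) for run in runs]


def regressions(prev: Dict[str, bool], curr: Dict[str, bool]) -> List[str]:
    """Steps that passed in the previous run but fail now -- these are
    the ones that should become high-priority bug reports."""
    return [step for step, ok in curr.items() if prev.get(step) and not ok]


# Example: a three-step checklist run in two consecutive weeks.
week1 = {"suspend/resume": True, "external monitor": False, "3D accel": True}
week2 = {"suspend/resume": True, "external monitor": True, "3D accel": False}

print(weekly_summary([week1, week2]))  # pass counts per week: [2, 2]
print(regressions(week1, week2))       # newly failing: ['3D accel']
```

The raw pass count stayed flat between the two runs, but the regression
check still catches that "3D accel" broke, which is exactly the signal
the weekly cadence is meant to provide.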
> Perhaps over time we can collect a series of "desktop test cases"
> which documents which bits we really want to work, and what particular
> things to test. We once started to collect "experiences" in the wiki:
> https://wiki.ubuntu.com/DesktopTeam/Experiences , which might be a
> good basis for this.
Very much agreed! Those pages look like a good start, but I think for
testing purposes it would help to convert them into more of a
repeatable, paint-by-numbers format that walks the tester through what
they need to do.
For example, I have been writing 'test plans' for X.org along these
lines.
My hope is that the QA team will find these to be a good knowledge base
for developing their own test programs, perhaps even automating them to
some degree.
I know testing is boring, and writing documentation is boring, so
writing testing documentation must be doubly boring, but I think it
could be extremely important to us for enabling testers to help us by
identifying exactly what we'd like to see tested.
I think a lot of our testing know-how is passed around by word of mouth,
but by documenting it, it would open up testing to a lot more people.
<kirkland> sbeattie: around?
sbeattie: how do i simulate an apport crash?
<sbeattie> kirkland: sh -c 'kill -SEGV $$'
if apport is enabled, you should get a file in /var/crash
<kirkland> sbeattie: hmm
sbeattie: /var/crash is empty
sbeattie: hrm, apport not running
<sbeattie> kirkland: is apport enabled in /etc/default/apport ?
<kirkland> sbeattie: okay, was disabled, enabling running now
<sbeattie> kirkland: all the apport upstart/initscript does is twiddle a kernel setting, so there won't be a long-running process.
<kirkland> sbeattie: cool
Martin, if you've read this far, perhaps the above could be put
into an apport test plan. ;-)
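For reference, the exchange above condenses to roughly the following
steps (Ubuntu-specific; the /etc/default/apport path comes from the log
itself, and the checks around it are my own framing):

```shell
# 1. apport must be enabled: /etc/default/apport should contain enabled=1
#    (the init script only twiddles a kernel setting, so there is no
#    long-running daemon to look for).
grep '^enabled' /etc/default/apport 2>/dev/null || echo "apport config not found"

# 2. Trigger a segfault in a throwaway subshell. The exit status is
#    139 = 128 + 11 (SIGSEGV), per the usual shell convention.
sh -c 'kill -SEGV $$'
echo "exit status: $?"

# 3. If apport intercepted the crash, a .crash file appears here:
ls /var/crash 2>/dev/null || echo "/var/crash not present"
```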