Linux Desktop Testing Project

Matt Zimmerman mdz at ubuntu.com
Fri Jun 8 11:09:46 BST 2007


On Fri, Jun 08, 2007 at 12:57:53AM +0200, Henrik Nilsen Omma wrote:
> I think the main challenges at this point are:
> 
>  * Writing sensible test scripts for all the applications we care about. 
> Some exist, but are these realistic enough to do sensible testing?

I think that creating scripts based on our existing test plans would be a
good start.
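
For illustration, a minimal LDTP script derived from the gedit test plan
might look roughly like the sketch below (the window and object names,
'*-gedit', 'txt0' and the dialog names, are guesses and would need to be
checked against gedit's actual accessible hierarchy before this would run):

    #!/usr/bin/env python
    # Rough sketch of an LDTP test for gedit: start it, type some text,
    # quit, and discard the unsaved document.  Object names are placeholders.
    from ldtp import *

    launchapp('gedit')
    waittillguiexist('*-gedit')
    settextvalue('*-gedit', 'txt0', 'Hello from an automated test')
    selectmenuitem('*-gedit', 'mnuFile;mnuQuit')
    # gedit will normally ask about the unsaved document
    if guiexist('dlgQuestion'):
        click('dlgQuestion', 'btnClosewithoutSaving')
    waittillguinotexist('*-gedit')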

>  * Where to run it? Presumably an efficient test setup needs one or more 
> dedicated machines.  We would commit test scripts to a bzr repository 
> that the machine would regularly pull from.

We can build (and initially run) this in a VM, until it's ready to deploy in
production, at which time we should be able to arrange for Canonical to host
it.
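
As a very rough sketch (the checkout path, test layout and interval below
are placeholders, not a decided design), the runner on that machine could be
little more than a loop that pulls the branch and executes each script:

    #!/usr/bin/env python
    # Sketch of a minimal runner: pull the bzr branch, run every test
    # script, and report pass/fail from the exit status.
    import glob
    import subprocess
    import time

    CHECKOUT = '/srv/desktop-tests'    # local checkout of the test branch

    while True:
        subprocess.call(['bzr', 'pull'], cwd=CHECKOUT)
        for script in sorted(glob.glob(CHECKOUT + '/tests/*.py')):
            status = subprocess.call(['python', script])
            print '%s: %s' % (script, 'PASS' if status == 0 else 'FAIL')
        time.sleep(3600)               # e.g. run the whole suite hourly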

>  * Sample data - to do realistic testing we need a large collection of 
> sample data for all sorts of applications.

example-content should be a good start; perhaps a good way forward would be
to add an example-content-extra binary package which would include a lot
more material for test purposes.
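
To make that concrete, test scripts could pick their input from whatever the
package installs; something like the sketch below (the
/usr/share/example-content path and the idea of matching by file pattern are
assumptions about how the package is laid out):

    # Sketch: pick test input from the installed example content.
    import glob
    import random

    def sample_file(pattern):
        """Return one installed sample file matching pattern, e.g. '*.odt'."""
        candidates = glob.glob('/usr/share/example-content/' + pattern)
        if not candidates:
            raise RuntimeError('no sample content matches %s' % pattern)
        return random.choice(candidates)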

>  * Analysing the test output - Probably the biggest challenge. Editing a 
> gedit file and then comparing it with a known correct file is one thing, 
> but what kind of output can we expect from a media player. Should we 
> read the interface via AT-SPI at regular intervals and use that for 
> output? What happens when the interface changes and we get a false 
> failure? How quickly can we be expected to modify the script to work 
> again and how big is the burden of doing these changes?

I think that we will get a lot of benefit just by being able to detect
successful execution.  For example, if we can check that the media player
plays a file and then exits without an error or a crash, that's great.
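
A sketch of that kind of check, assuming totem as the player (the 30-second
figure and the SIGTERM handling are arbitrary choices, and a negative exit
status is taken to mean the process died from a signal, i.e. crashed):

    #!/usr/bin/env python
    # Sketch: launch the player on a sample file and make sure it neither
    # errors out nor crashes.  Assumes totem; pass the sample file as argv[1].
    import os
    import signal
    import subprocess
    import sys
    import time

    player = subprocess.Popen(['totem', sys.argv[1]])
    time.sleep(30)                       # give playback a chance to run
    if player.poll() is not None:
        # Already gone: a negative status means it was killed by a signal
        # (a crash); a positive one means it exited with an error.
        print 'FAIL: player exited early with status', player.returncode
        sys.exit(1)
    os.kill(player.pid, signal.SIGTERM)  # ask it to shut down
    status = player.wait()
    if status in (0, -signal.SIGTERM):
        print 'PASS'
    else:
        print 'FAIL: exit status', status
        sys.exit(1)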

We should run the tests frequently enough that we notice breakage as soon as
it happens and don't let it accumulate, so that fixing the tests remains a
small job.

>  * Filing bugs - when a test fails to produce the right output, how is 
> that information brought into our normal bug workflow? Does it start on a 
> webpage and/or email notifications from which we manually write bugs or 
> do we want a more automated procedure (there is a danger of getting too 
> many false bugs). What's the workload of doing it manually?

Test failures should be verified manually, and then a bug filed if the
problem can be confirmed.  We can then optimize the process of filing the
bug so that it takes very little effort for the person verifying the
failure, but I think it should remain a manual step, because false failures
will be common when testing desktop applications.

> It would be great to have a setup where anyone could watch the tests 
> taking place as a webstream. Then anyone could see the latest crack of 
> the day and see how testing is going :)

The test results should definitely be published live.

-- 
 - mdz


