Linux Desktop Testing Project

Henrik Nilsen Omma henrik at ubuntu.com
Thu Jun 7 23:57:53 BST 2007


Sivan Greenberg wrote:
> Looks like the right tool for the job indeed. Given that I'm replying 
> to this a bit late, have there been any efforts made already to combine, 
> for example, [1] with something like dogtail? I couldn't find anywhere 
> on the wiki where it was defined exactly what kind of tests each desktop 
> product should pass to certify as "TESTED", but we can most probably 
> just program upstream's test suites for each product using dogtail to 
> start an automatic desktop testing machine.
>   

I've just played around a bit with dogtail. It seems to do what we need 
for desktop testing, at least to start with. The big challenges, IMO, are 
to define the right tests, implement them and then analyse the results.
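To give a feel for it, here is a minimal sketch of what such a test 
script might look like. The widget lookup and the exact dogtail calls 
are my assumptions and may need adjusting against the current dogtail 
API:

#!/usr/bin/env python
# Minimal dogtail sketch: drive gedit through AT-SPI and check the result.
# Assumes gedit is installed and assistive technology support is enabled.
from dogtail.utils import run
from dogtail import tree

run('gedit')                              # launch gedit with a11y enabled
gedit = tree.root.application('gedit')    # locate it in the AT-SPI tree

# Find the main text area by its accessible role and type into it.
textarea = gedit.child(roleName='text')
textarea.text = 'Hello from an automated desktop test'

# Read the text back through the accessibility interface and verify it.
if textarea.text == 'Hello from an automated desktop test':
    print('PASS')
else:
    print('FAIL')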

I'm very interested in getting a pilot testing project up and running 
during the gutsy cycle. We should get a machine set up somewhere that 
would just continuously run tests: it would work through a list of test 
scripts, then run the update manager, and then start again from the top.
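As a rough illustration of that loop, the runner might look something 
like the sketch below. The directory layout is made up, and I'm using 
apt-get here in place of the update-manager UI purely for the sake of a 
non-interactive example:

#!/usr/bin/env python
# Sketch of a continuous test runner: pull the latest test scripts from
# bzr, run each one, apply pending updates, then start again from the top.
import glob
import os
import subprocess
import time

SCRIPTS_DIR = '/srv/desktop-tests'        # hypothetical bzr checkout

while True:
    # Refresh the test scripts from the shared bzr branch.
    subprocess.call(['bzr', 'pull'], cwd=SCRIPTS_DIR)

    # Run every test script and record a simple pass/fail per script.
    for script in sorted(glob.glob(os.path.join(SCRIPTS_DIR, 'test_*.py'))):
        status = subprocess.call(['python', script])
        print('%s: %s' % (os.path.basename(script),
                          'PASS' if status == 0 else 'FAIL'))

    # Apply pending updates (standing in for update-manager here).
    subprocess.call(['apt-get', 'update'])
    subprocess.call(['apt-get', '-y', 'dist-upgrade'])

    time.sleep(60)                        # short pause before the next cycle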

I think the main challenges at this point are:

 * Writing sensible test scripts for all the applications we care about. 
Some exist, but are they realistic enough for meaningful testing?

 * Where to run it? Presumably an efficient test setup needs one or more 
dedicated machines. We would commit test scripts to a bzr repository 
that the machine would regularly pull from (as in the runner sketch 
above).

 * Sample data - To do realistic testing we need a large collection of 
sample data for all sorts of applications.

 * Analysing the test output - Probably the biggest challenge. Editing a 
gedit file and then comparing it with a known correct file is one thing, 
but what kind of output can we expect from a media player? Should we 
read the interface via AT-SPI at regular intervals and use that as the 
output? What happens when the interface changes and we get a false 
failure? How quickly can we be expected to modify the script to work 
again, and how big is the burden of making those changes? (A rough 
sketch of such a snapshot check follows after this list.)

 * Filing bugs - When a test fails to produce the right output, how is 
that information brought into our normal bug workflow? Does it start as 
a web page and/or email notifications from which we manually file bugs, 
or do we want a more automated procedure (there is a danger of getting 
too many false bugs)? What's the workload of doing it manually? (A 
minimal notification sketch follows after this list.)
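To make the snapshot idea from the output-analysis point above a bit 
more concrete, here is a rough sketch that walks an application's AT-SPI 
tree with dogtail and compares it against a previously recorded 
known-good snapshot. The application name and file paths are just 
placeholders:

# Sketch: snapshot an application's AT-SPI tree via dogtail and compare
# it against a previously recorded "known good" snapshot on disk.
from dogtail import tree

def snapshot(node, depth=0, lines=None):
    """Recursively record role and name for every accessible node."""
    if lines is None:
        lines = []
    lines.append('%s%s: %s' % ('  ' * depth, node.roleName, node.name))
    for child in node.children:
        snapshot(child, depth + 1, lines)
    return lines

app = tree.root.application('totem')      # placeholder application name
current = '\n'.join(snapshot(app))

expected = open('/srv/desktop-tests/expected/totem.snapshot').read()
print('PASS' if current.strip() == expected.strip() else 'FAIL')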

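On the bug-filing point, the simplest starting point is probably a plain 
email notification that a person then turns into a bug report when it 
looks genuine. A minimal sketch, with purely illustrative addresses and 
a local mail server assumed:

# Sketch: mail a failure notice so a human can decide whether to file a
# bug. The addresses and SMTP host are illustrative, not a real workflow.
import smtplib
from email.mime.text import MIMEText

def notify_failure(script, output):
    msg = MIMEText('Test script %s failed.\n\nOutput:\n%s' % (script, output))
    msg['Subject'] = '[desktop-testing] FAIL: %s' % script
    msg['From'] = 'tester@example.com'
    msg['To'] = 'desktop-testing@example.com'

    server = smtplib.SMTP('localhost')
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()

notify_failure('test_gedit.py', 'text buffer did not match expected output')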
It would be great to have a setup where anyone could watch the tests 
taking place as a web stream. That way people could follow the latest 
crack of the day and see how the testing is going :)

If anyone in the dev community is interested in working on any of these 
aspects, please get in touch.

Henrik
