Linux Desktop Testing Project
Sivan Greenberg
sivan at ubuntu.com
Fri Jun 8 14:13:03 BST 2007
Matt Zimmerman wrote:
> I think that creating scripts based on our existing test plans would be a
> good start.
I second that; our test plans seem good enough to start with, but I'm
still curious to see what we can get from upstream. At least for the
products that have some sort of upstream QA, we are sure to benefit from
merging with their workflow. I assume doing so will allow us to test
beyond just crash/no-crash and achieve higher testing resolution.
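Just to make concrete what I mean by higher resolution, here is a rough
dogtail sketch (assuming dogtail's tree API, with gedit as a stand-in
application; the text and assertion are made up):

    #!/usr/bin/env python
    # Rough sketch: instead of only checking crash/no-crash, assert on the
    # application's state through AT-SPI. Assumes dogtail is installed and
    # accessibility is enabled in the session.
    from dogtail.utils import run
    from dogtail import tree

    run('gedit')                            # launch and wait for the app
    app = tree.root.application('gedit')
    buffer = app.child(roleName='text')
    buffer.text = 'hello from the test harness'
    # Higher resolution: verify the behaviour we expect, not just survival.
    assert 'hello' in buffer.text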
> We can build (and initially run) this in a VM, until it's ready to deploy in
> production, at which time we should be able to arrange for Canonical to host
> it.
We probably need to provide accounts on that machine (the VM), to cater
for the scenario where I have, for example, written a test plan in
dogtail and used the bzr setup to put it to work, but something in the
OS installed under the VM is not right or needs modification (such as
installing additional packages). I'd like to be able to fix that setup,
create a snapshot for further use, and carry on with my test plan.
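For instance (a rough sketch only; it assumes the VM disk is a qcow2
image that qemu-img can snapshot while the guest is shut down, and the
image name is made up):

    #!/usr/bin/env python
    # Sketch: freeze a fixed-up VM setup as a named disk snapshot so the
    # next testing cycle can start from it. Run on the host, guest off.
    import subprocess, sys, time

    image = sys.argv[1] if len(sys.argv) > 1 else 'desktop-testing.qcow2'
    name = 'testplan-ready-' + time.strftime('%Y%m%d')
    subprocess.check_call(['qemu-img', 'snapshot', '-c', name, image])
    print('created snapshot %s in %s' % (name, image))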
> example-content should be a good start; perhaps a good way forward would be
> to add an example-content-extra binary package which would include lots more
> for test purposes.
Hmm.. I'm thinking of scanning through the bug trackers of GNOME and
other upstream products, collecting content such as specific images or
media files that triggered known crash bugs (I even found one for eog
and posted it), and using this package to deliver those as a start.
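Roughly what I have in mind for collecting them (only a sketch; the
manifest format, file names and destination directory are all made up):

    #!/usr/bin/env python
    # Sketch: fetch known crash-triggering attachments listed in a manifest
    # so they can be shipped in the proposed example-content-extra package.
    # Manifest format (made up): "<bug url> <attachment url>" per line.
    import os, urllib

    destdir = 'example-content-extra/crashers'
    if not os.path.isdir(destdir):
        os.makedirs(destdir)
    for line in open('crashers.manifest'):
        bug_url, attachment_url = line.split()
        target = os.path.join(destdir, os.path.basename(attachment_url))
        urllib.urlretrieve(attachment_url, target)
        print('fetched %s (from %s)' % (target, bug_url))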
> I think that we will get a lot of benefit just by being able to detect
> successful execution. For example, if we can check that the media player
> plays a file and then exits without an error or a crash, that's great.
Also, if the issue is more complex than just a "simple" crash or exit,
it is hard to detect through an automated testing cycle, and it will
already require manual investigation and reporting. Once this
infrastructure matures we could look to refine and raise the resolution
of what we can notice in-test, and automatically test behaviour
patterns.
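As a starting point, the crash/no-crash check Matt describes could be as
simple as this sketch (totem and the file name are just examples, and
the timeout is arbitrary):

    #!/usr/bin/env python
    # Sketch: play a file, give the player some time, then inspect how the
    # process ended. Death by signal (e.g. SIGSEGV) or a non-zero exit is
    # a failure; a clean exit, or our own SIGTERM, is a pass.
    import os, signal, subprocess, time

    player = subprocess.Popen(['totem', 'sample.ogg'])
    time.sleep(30)                       # let it play for a while
    if player.poll() is None:            # still running: ask it to quit
        os.kill(player.pid, signal.SIGTERM)
        player.wait()
    code = player.returncode
    if code == 0 or code == -signal.SIGTERM:
        print('PASS: played and exited cleanly')
    elif code < 0:
        print('FAIL: player died with signal %d' % -code)
    else:
        print('FAIL: player exited with status %d' % code)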
>
> I think it should be a manual step because false failures will be common
> when testing desktop applications.
I wonder if we could identify those cases and build an "ignore list",
so that we could eventually run testing cycles without intervention.
>
>> It would be great to have a setup where anyone could watch the tests
>> taking place as a webstream. Then anyone could see the latest crack of
>> the day and see how testing is going :)
We should probably talk to the Fedora testing project folks; they either
had, or still have, something similar to this for the live release of
Fedora.
Sivan