<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Hello Sebastien. Great questions; let me give you some answers!<br>
<blockquote type="cite">- do you have any idea of what applications
you would like to see tested?<br>
</blockquote>
Yes, I am targeting the default installed desktop applications.
However, this is in no way limited to the default applications.
Ultimately it would be awesome to see test cases for all of the
"popular" applications people like to use on Ubuntu.<br>
<blockquote type="cite"> - who will run those tests and when?<br>
</blockquote>
These tests will be run during the beta1 and beta2 cycles. That's
this week, and the last week of March for beta2. The tests will be
run by "normal" users: everyone from our regular set of testers
(awesome work you all do, thank you!) who do ISO and SRU testing,
to the folks who just want to try out the new release and are
more casual testers/users.<br>
<blockquote type="cite"> - who will deal with the feedback, when and
in which way?</blockquote>
Right now I plan on gathering the feedback and publishing it for
everyone to see. All of the results will be on the results
tracker in Launchpad, so anyone can view them as they are
submitted. I plan to share aggregate details on my blog, and
hopefully on a public-facing website showing the top testers, etc.
In addition, I am happy to share additional details with anyone
upon request :-)<br>
<br>
As far as your concerns about sorting through all the testing and
getting good bug reports, etc., your concerns are valid. This
approach doesn't scale to thousands of tests and users, but for the
moment the volume will be low enough to allow manual processing.
Even if thousands of cases are submitted, we can look at the
aggregate data easily enough and get useful information out of it.
The testers themselves will be encouraged to submit bug reports,
but the aggregated data will be collected by me, and I will share
it with the upstreams involved to ensure any anomalies are
researched and potential bugs filed if found. <br>
<br>
Finally, I will add that this entire testing loop is still a work
in progress. We want to refine this loop and the tools we use as we
go into the next cycle. Expect to see some sessions at UDS around
this topic. I hope to get some feedback and ideas we can use to
help shape the Q cycle testing.<br>
<br>
Thanks!<br>
<br>
Nicholas<br>
<br>
On 02/24/2012 05:24 AM, Sebastien Bacher wrote:
<blockquote cite="mid:4F476574.7070107@ubuntu.com" type="cite">
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
On 24/02/2012 01:33, Nicholas Skaggs wrote:
<blockquote cite="mid:4F46DAE5.1050302@canonical.com" type="cite">
<meta http-equiv="content-type" content="text/html;
charset=ISO-8859-1">
As part of the Precise cycle, the Ubuntu QA team has been
looking to increase manual application testing. As part of this,
I have extended Checkbox to serve up manual tests to testers to
test Ubuntu applications post-installation. We need your help!
If you're an application developer who wants testing on your
application, I would like your test cases included in the
Checkbox application tests for beta1. <br>
</blockquote>
Hey,<br>
<br>
Thanks for that, it's always great to have people looking at
improving the desktop ;-)<br>
<br>
I have some questions though:<br>
<br>
- do you have any idea of what applications you would like to see
tested?<br>
- who will run those tests and when?<br>
- who will deal with the feedback, when and in which way?<br>
<br>
Having things tested is great, but I think we should figure out
how we will deal with the feedback before we start doing a lot of
testing this way. <br>
<br>
I've been working a bit with unity-checkbox to help Didier in the
previous Unity update round, and dealing with the information
collected is quite some work. It's useful for Unity, where we are
upstream and have resources to deal with the issues raised; I'm
less sure we can do a useful job of it on the applications with
our current structure and "workforce"... we don't have the people
to do upstream work, and to be fair, we already know through bug
reports about quite a few issues that we should fix and haven't
yet.<br>
<br>
Could you picture how you would see the feedback loop working?
Would the QA team read those reports and turn issues into bugs for
those which are not already known? Or...?<br>
<br>
Cheers,<br>
Sebastien Bacher<br>
<br>
<br>
</blockquote>
<br>
</body>
</html>