<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 20/03/14 23:38, Pasi Lallinaho
wrote:<br>
</div>
<blockquote cite="mid:532B7C09.9050207@shimmerproject.org"
type="cite">Hello, <br>
<br>
this is a reply to the QA recap/feedback thread. As the original
thread went off track, I decided to start a new one to discuss the
original question at hand. <br>
<br>
PACKAGE TESTING <br>
<br>
First of all, I think it was a good move to run the package
testing in groups and in cadence before we hit the beta
milestones. Running all those tests and gathering a (big) list of
bugs was and is important, especially now that we have entered the
"bug fixes only" stage of release preparation. Without it, I am
sure we would have been able to fix far fewer of the bugs that
are annoying and affect numerous people. <br>
<br>
That being said, I think the number of calls was just about
perfect for an LTS cycle. I personally think we should go through
all the groups during regular releases as well, but possibly
combine more groups into one call, and relax the amount of testing
"required". Optional tests could be literally that: run them if
you are comfortable, but if they are left untested, that's fine as
well. <br>
<br>
As to what (else) to test, I think we should try to focus on new
features, as we did this cycle. This can and probably should be
extended to running tests on applications that have had a major
update during the cycle. All of this in a flexible manner: the
more new things we have to test, the more relaxed we can be about
running the other tests. Except on the LTS releases... <br>
<br>
I've yet to decide whether some of the testcases are a bit too
thorough or just about right. I guess we can agree that the number
of bugs found correlates somewhat with how deep the tests are. As
I see it though, the deeper and more specific the tests are, the
more mechanical running them becomes. Which leads us to
exploratory testing... <br>
<br>
I have a few doubts about exploratory testing. How do we motivate
people to run exploratory testing on the development version while
it is not ready for production or day-to-day environments? If the
tests aren't run on/as your main system, how can the testing be
natural enough to be truly exploratory? How do we strike a good
balance between feature and exploratory testing? <br>
<br>
MILESTONE (ISO) TESTING <br>
<br>
It is hard to evaluate how well the milestone ISO testing
succeeded, because we still have one beta to go, which is also the
most important milestone. It is an area where we can improve,
though. <br>
<br>
The alpha releases could have been focused more on specific
issues; this time we kind of just ran through them without a clear
focus. Of course this means that developers need to have their
stuff together earlier in the cycle, but that is a desirable
direction generally. <br>
<br>
I would rethink the number of alpha releases we want to
participate in, especially for non-LTS releases. We can opt in to
as many as we did now if we set a clear point of focus for each.
This looks unrealistic for T+1 though, as this cycle has been
really busy for everybody and a lot of work that was prepared over
the last two years has landed in it. <br>
<br>
For the beta releases, we should get more publicity. We still have
the beta 2 release to come, so let's try to fix at least some of
that for Trusty. <br>
<br>
CONCLUSION <br>
<br>
To end the feedback on a positive note (though there weren't many
negative points in total anyway), I think we have held QA to the
highest possible standard considering the size of our team and the
amount of new work landing this cycle. <br>
<br>
Finally, a big THANK YOU, Elfy, for running the QA team, doing all
the calls, reporting back to us, making sure bugs get noticed and
features land in time, et cetera... Last but not least, thanks for
putting up with all of us who have sometimes more or less
neglected our QA duties and been unresponsive to questions and
calls. It is very much appreciated, and I truly think that 14.04
would be a lesser release without your work and persistence! <br>
<br>
Cheers, <br>
Pasi <br>
<br>
</blockquote>
Rather than post to the last mail, I'll reply to this one. <br>
<br>
Thanks to everyone for the feedback - much appreciated :)<br>
<br>
So I've taken this from the comments.<br>
<br>
<blockquote><b>Testcase grouping</b> - call for more than one at a
time; I'll likely be re-organising some of them post-14.04 as
well. <br>
<br>
<b>Optional testcases</b> - can leave these for non-LTS
testing<br>
<br>
<b>New feature testing</b> - much as we did this cycle, fit them
in when we can - existing testcases to take a back seat if new
features need testing.<br>
<br>
<b>Exploratory testing</b> - I'm not looking at this any longer -
or at least, it needs to work in conjunction with autopilot
testing. There will be a mail to the list about this in the near
future from one of the other members of the QA team.<br>
<br>
<b>Specific testing during milestones</b> - work specific package
testing into various milestones when it's appropriate for us. This
will necessarily need to be led by devs - they'll know more about
what needs to be tested. Only take part in milestones when there
is a need.<br>
<br>
<b>Testcase feedback</b> - I'll send a mail to the list regarding
this separately. Those of you who have actually taken part in
package testing - your input on this will be invaluable, so please
join in with that discussion. <br>
<br>
<b>Feedback</b> to the list does help us - but it is a whole lot
easier for us to follow the trackers: bugs reported to those end
up on our blueprints during cycles, so we can track them. Mailing
list threads are not trackable. In addition, when you report to a
tracker it will tell you which bugs others have reported against
that test, be it a package or an image.<br>
<br>
</blockquote>
Elfy<br>
<pre class="moz-signature" cols="72">--
Ubuntu Forum Council Member
Xubuntu QA Lead</pre>
</body>
</html>