Test Cases categories

Gema Gomez gema.gomez-solano at canonical.com
Thu Dec 8 22:50:44 UTC 2011


On 08/12/11 21:09, Alex Lourie wrote:
> On Thu, Dec 8, 2011 at 7:57 PM, Gema Gomez
> <gema.gomez-solano at canonical.com
> <mailto:gema.gomez-solano at canonical.com>> wrote:
> 
>     On 08/12/11 15:06, Alex Lourie wrote:
>     > Hi all
>     >
>     > Proceeding with the work we started for test case rewriting,
>     there's an
>     > issue I'd like to discuss here - categorising the test cases. How
>     would
>     > we like it to be? What categories would you think should be
>     created? How
>     > do we decide the relation of a test case to a specific category? Can
>     > any given test be part of more than one category?
>     >
>     > Please share your thoughts,
>     > Thanks.
>     >
>     > --
>     > Alex Lourie
>     >
>     >
> 
>     The categorization we have at the moment is:
> 
>     - Applications
>     - System
>     - Hardware
>     - Install
>     - Upgrade
>     - CasesMods (not sure what this even means)
> 
>     There are many ways to categorize test cases:
> 
>     - by functionality under test (like we are sort of doing, but not quite)
> 
>     - by test type
>            * positive/negative
>            * smoke (target the system horizontally and superficially) /
>     regression (target vertical slices of the system, in depth)
>            * unit testing (target an API method or a very small piece of
>     functionality) / integration testing (target the integration of two
>     or more subsystems) / system testing (target the system as a whole)
>            * functional (target functionality: the system behaves as it
>     should and fails gracefully in error situations) / non-functional
>     (performance or benchmarking, security testing, fuzz testing, load
>     or stress testing, compatibility testing, MTBF testing, etc.)
> 
>     - by test running frequency: this test case should run
>     daily/weekly/fortnightly/once per milestone
> 
> 
>     And many other ways. I am deliberately introducing a lot of jargon
>     here; for those less familiar with the QA vocabulary, please have a
>     look at the glossary or ask when in doubt. If we want to truly
>     improve the test cases we are writing, we need to start thinking
>     about all these things:
>     https://wiki.ubuntu.com/QATeam/Glossary
> 
>     Thanks,
>     Gema
> 
> 
> Hi Gema

Hi Alex,

> That's OK, we can handle the jargon.

I think this is not true for everyone on the list, so I'd like to give
everyone the opportunity not just to collaborate and add test cases but
also to learn in the process. You may be comfortable with those terms,
but some other people may not be.

> 
> I think that in our case, categories should represent our way of work.

Categories should be related to the test cases and what they are trying
to test; they are something different from runlists, i.e. sets of test
cases that we run together.

> So for community team, current categories are probably fine, but for QA
> engineering they may not be well suited (you may want an additional
> manual/automatic note). 

I think we should all be doing QA engineering. I don't see why the
community's work should be less accurate or valuable than the work done
by QA engineering (who are these people anyway? Do you mean the
Canonical QA team?)... I think we are prepared to help whoever wants to
grow as a QA engineer and learn, while also respecting those who just
want to spend some time running test cases and contributing in that way.
Both are valuable engineering tasks; they are just different.

I don't believe community work has to mean lack of accuracy/quality or
lack of engineering... apologies if I have misinterpreted your words,
but I feel quite strongly about this.

> I don't think we should stumble on this issue
> for too long, so I'd recommend going with the following scheme and
> updating it if we feel necessary. 

Agreed, we shouldn't get stuck on this; let's just keep it in the back
of our minds and categorize test cases sensibly as we go along. We may
find that for Precise our old categories are fine, but as the test
suite grows we'll need to change that.

And anybody writing test cases should feel empowered to add or remove
categories if, for whatever reason, they stop making sense or new ones
start to make more sense.

Also, we need to get people writing test cases to think about why they
are doing it. What is the aim of a test case? Is it to find problems?
Is it to demonstrate that a piece of functionality works? What are the
pass criteria? Otherwise we won't be writing good test cases that
people can follow and report a result from.
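As a concrete (and entirely hypothetical) sketch of what I mean, here is a
toy automated test case written with Python's unittest module, where the aim
and the pass criteria are spelled out explicitly; the function under test is
made up for the example:

```python
import unittest

def parse_version(text):
    """Toy function under test: split a 'major.minor' version string."""
    major, minor = text.split(".")
    return int(major), int(minor)

class TestParseVersion(unittest.TestCase):
    # Aim: demonstrate that valid input is parsed correctly (positive test).
    # Pass criteria: the returned tuple matches the expected numbers.
    def test_valid_version(self):
        self.assertEqual(parse_version("12.04"), (12, 4))

    # Aim: show the function fails gracefully on bad input (negative test).
    # Pass criteria: a ValueError is raised, not a silent wrong answer.
    def test_invalid_version(self):
        with self.assertRaises(ValueError):
            parse_version("precise")
```

A suite like this would be run with `python -m unittest`. Manual test cases
deserve the same discipline: state the aim and the pass criteria up front,
so the person running them can report an unambiguous result.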

> So it would go as this:
> 
> * *Applications* (for application related tests, such as testing
> editors, browsers, etc).
> * *System* (for testing system built ins, such as, maybe, services
> scripts, global/local settings, default system configurations, etc)
> * *Hardware* (for testing hardware components)
> * *Install* (for test cases performed during the installation process)
> * *Upgrade* (for test cases performed during the upgrade process)
> /* CasesMods (I have no idea what it is right now, so if anyone does
> please let us know)./

Agreed, this is as good a starting point as any, as long as we
realize it is only the beginning :)

> I am going to use this selection on the Test Cases Rewriting document,
> and if anything changes we'll update accordingly.

Sounds good.

Thanks for all the ideas that are bouncing about, I think this is a very
healthy discussion to have.

Have a nice weekend folks!
Gema

-- 
Gema Gomez-Solano        <gema.gomez-solano at canonical.com>
QA Team                  https://launchpad.net/~gema.gomez
Canonical Ltd.           http://www.canonical.com



