Test Cases categories

Brendan Donegan brendan.donegan at canonical.com
Fri Dec 9 08:13:29 UTC 2011


On 08/12/11 21:09, Alex Lourie wrote:
> On Thu, Dec 8, 2011 at 7:57 PM, Gema Gomez
> <gema.gomez-solano at canonical.com
> <mailto:gema.gomez-solano at canonical.com>> wrote:
>
>     On 08/12/11 15:06, Alex Lourie wrote:
>     > Hi all
>     >
>     > Proceeding with the work we started for test case rewriting,
>     > there's an issue I'd like to discuss here - categorising the
>     > test cases. How would we like it to be? What categories do you
>     > think should be created? How do we decide the relation of a test
>     > case to a specific category? Can any given test be part of more
>     > than one category?
>     >
>     > Please share your thoughts,
>     > Thanks.
>     >
>     > --
>     > Alex Lourie
>     >
>     >
>
>     The categorization we have at the moment is:
>
>     - Applications
>     - System
>     - Hardware
>     - Install
>     - Upgrade
>     - CasesMods (not sure what this even means)
>
>     There are many ways to categorize test cases:
>
>     - by functionality under test (like we are sort of doing, but not
>     quite)
>
>     - by test type
>            * positive/negative
>            * smoke: target the system horizontally and superficially /
>              regression: target vertical slices of the system, in depth
>            * Unit testing (target an API method, or a very small piece
>              of functionality) / Integration testing (target the
>              integration of two or more subsystems) / System testing
>              (target the system as a whole)
>            * Functional (target functionality: the system behaves as it
>              should and fails gracefully in error situations) /
>              Non-Functional (performance or benchmarking, security
>              testing, fuzz testing, load or stress testing,
>              compatibility testing, MTBF testing, etc.)
>
>     - by test running frequency: this test case should run
>       daily/weekly/fortnightly/once per milestone
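To make the "more than one category" question concrete: a single test case
can carry a label on each of these dimensions at once. A minimal sketch in
Python of what that metadata could look like (the field names and values
here are hypothetical, not an existing Ubuntu QA format):

    from dataclasses import dataclass, field

    # Hypothetical test case metadata: one field per categorization
    # dimension from the list above.
    @dataclass
    class TestCase:
        name: str
        functionality: str                # e.g. "Applications", "Install", "Upgrade"
        test_types: list = field(default_factory=list)  # e.g. ["positive", "smoke"]
        frequency: str = "per-milestone"  # or "daily", "weekly", "fortnightly"

    firefox_smoke = TestCase(
        name="firefox-starts-and-loads-a-page",
        functionality="Applications",
        test_types=["positive", "smoke", "functional"],
        frequency="daily",
    )

So the same test is at once an Applications test, a positive test and a
smoke test; the dimensions are orthogonal rather than exclusive.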
>
>
>     And there are many other ways. I am deliberately introducing a lot
>     of jargon here; for those less familiar with QA terminology, please
>     have a look at the glossary or ask when in doubt. If we want to
>     truly improve the test cases we are writing, we need to start
>     thinking about all these things:
>     https://wiki.ubuntu.com/QATeam/Glossary
>
>     Thanks,
>     Gema
>
>
> Hi Gema
> That's OK, we can handle the jargon.
>
> I think that in our case, categories should represent our way of
> working. So for the community team, the current categories are probably
> fine, but for QA engineering they may not be well suited (you may want
> an additional manual/automatic note). I don't think we should get stuck
> on this issue for too long, so I'd recommend going with the following
> scheme and updating it if we feel it's necessary. It would go like this:
>
> * *Applications* (for application-related tests, such as testing
> editors, browsers, etc.)
> * *System* (for testing system built-ins, such as, maybe, service
> scripts, global/local settings, default system configurations, etc.)
> * *Hardware* (for testing hardware components)
> * *Install* (for test cases performed during the installation process)
> * *Upgrade* (for test cases performed during the upgrade process)
> /* CasesMods (I have no idea what it is right now, so if anyone does
> please let us know)./
Looking at the wiki, this is not in fact a separate category but rather
a page that holds information on cases which need to be updated,
suggestions for new cases, etc.
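If we go with this scheme plus the manual/automatic note, it would also be
easy to sanity-check test case metadata against it. A rough sketch in
Python, purely illustrative (the test case name is made up, and none of
this exists in any tracker today):

    # Hypothetical validator for the proposed scheme; the category names
    # come from the list above, and the execution flag is the extra
    # manual/automatic note mentioned for QA engineering.
    CATEGORIES = {"Applications", "System", "Hardware", "Install", "Upgrade"}
    EXECUTION = {"manual", "automatic"}

    def validate(case: dict) -> None:
        """Raise if a test case's metadata does not fit the agreed scheme."""
        if case["category"] not in CATEGORIES:
            raise ValueError("unknown category: %r" % case["category"])
        if case["execution"] not in EXECUTION:
            raise ValueError("execution must be one of %s" % sorted(EXECUTION))

    validate({"name": "resize-partition-and-install",
              "category": "Install",
              "execution": "manual"})

Whether a test can sit in more than one top-level category then just comes
down to whether "category" holds a single value or a list.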
>
>
> I am going to use this selection in the Test Cases Rewriting document,
> and if anything changes, we'll update it accordingly.
>  
> -- 
> Alex Lourie
>
>
