[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

Clint Byrum clint at fewbar.com
Thu Jan 19 19:16:28 UTC 2012


Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work. Items blocked on spec.
  
  Work Items:
- [niemeyer] write spec for charm testing facility: TODO
- implement specified testing framework: BLOCKED
- deploy testing framework for use with local provider: BLOCKED
- deploy testing framework for use against ec2: BLOCKED
- deploy testing framework for use against canonistack: BLOCKED
- deploy testing framework for use against orchestra (managing VMs instead of machines): BLOCKED
- write charm tests for mysql: BLOCKED
- [clint-fewbar] write charm tests for haproxy: BLOCKED
- [clint-fewbar] write charm tests for wordpress: BLOCKED
- [mark-mims] write charm tests for hadoop: BLOCKED
- [james-page] add openstack tests: BLOCKED
+ write spec for charm testing facility: INPROGRESS
+ implement specified testing framework: TODO
+ deploy testing framework for use with local provider: TODO
+ deploy testing framework for use against ec2: TODO
+ deploy testing framework for use against canonistack: TODO
+ deploy testing framework for use against orchestra (managing VMs instead of machines): TODO
+ write charm tests for mysql: TODO
+ [clint-fewbar] write charm tests for haproxy: TODO
+ [clint-fewbar] write charm tests for wordpress: TODO
+ [mark-mims] write charm tests for hadoop: TODO
+ [james-page] add openstack tests: TODO
  [mark-mims] jenkins charm to spawn basic charm tests: DONE
  [mark-mims] basic charm tests... just test install hooks for now: INPROGRESS
  
  Session notes:
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of the individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are compliant
  * Test dependent charms when a provider charm changes
  * Run NxN tests of providers and requirers so all permutations are sane (_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when a charm breaks, rather than relying on them to poll for results
  * Verify idempotency of hooks (see the sketch after this list)
      * Tricky to _verify_, and not an enforced convention at the moment, so this is uncertain
  * be able to specify multiple scenarios
  * Functional tests in fact exercise multiple charms. Should those sit
    within the charms, or outside, since they exercise the whole graph?
    * The natural place for these composed tests seems to be the stack
  * As much data as possible should be collected about the running tests so that a broken
    charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in "lock step" mode, so that breaking charms can be individually
    identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any charms that
    implement such interfaces. In addition to working as tests, this is also a pragmatic
    way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that he can take
    that into account for the OpenStack testing effort.
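  
  A rough sketch of the idempotency check mentioned above (the unit name,
  hook path, and the use of "juju ssh" to re-run a hook in place are all
  illustrative assumptions, not anything agreed in the session):
  
  #!/bin/sh
  # Sketch: re-run the install hook on an already-deployed unit and require
  # a clean exit both times; a non-idempotent hook will typically fail or
  # change the unit's state on the second run.
  set -e
  unit="mysql/0"                                          # placeholder unit
  hook="/var/lib/juju/units/mysql-0/charm/hooks/install"  # placeholder path
  juju ssh "$unit" "sudo $hook"   # first re-run after the initial install
  juju ssh "$unit" "sudo $hook"   # second re-run must also exit 0
  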
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
  
- 
  Proposal below was judged too complicated and rejected (kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  Each of these directories has a subdirectory for every interface
  provided/required; those subdirectories contain the executables to run.
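  
  As a purely illustrative sketch of that layout (the "http" and "mysql"
  interface names and test file names below are made-up examples, not part
  of the proposal):
  
  tests/
      __install__            # run after install, with no relations
      provides/
          http/
              check-responds # run against each deployed requiring service
      requires/
          mysql/
              check-queries  # run against each deployed providing service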
  
  The test runner proceeds as follows:
  
  deploy charm
  wait for "installed" status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
      calculate graph of all charms in store which require interface and all of its dependency combinations
      deploy requiring charm w/ dependencies and providing service
      add-relation between requiring/providing
      for test in provides/interface ; do
          run test with name of deployed requiring service
      done
  done
  for interface in requires ; do
      repeat process above with provides/requires transposed
  done
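  
  As a sketch of what one of those executables might look like (assuming,
  purely for illustration, that the runner passes the name of the deployed
  requiring service as $1 and exports the providing unit's public address
  as PROVIDER_ADDRESS; neither convention is specified anywhere):
  
  #!/bin/sh
  # Hypothetical tests/provides/http/check-responds executable.
  set -e
  requirer="$1"
  # Fail (non-zero exit) if the provider does not answer HTTP requests.
  curl -sf "http://${PROVIDER_ADDRESS}/" > /dev/null
  echo "http interface responded; related requirer was ${requirer}"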
  
  Each commit to any branch in the charm store will queue up a run with only
  that change applied (and none that were made after it), and record
  pass/fail.

-- 
Juju: automated testing of charms
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-charm-testing


