Charm Testing Spec Proposal

Kapil Thangavelu kapil.thangavelu at canonical.com
Wed Feb 1 20:03:13 UTC 2012


Excerpts from Clint Byrum's message of Wed Jan 25 20:34:31 -0500 2012:
> I don't know how to use the other tool that juju has been using for
> reviews, so if somebody wants to train me on that, that's fine. Otherwise,
> here is a plain old launchpad merge proposal:
> 
> https://code.launchpad.net/~clint-fewbar/charm-tools/charm-tests-spec/+merge/90232
> 
> It's also being generated into HTML here:
> 
> http://people.canonical.com/~clint/charm-tests.html
> 
> Comments, suggestions, and criticisms are more than welcome.
> 

Hi Clint,

Thanks to both you and Mark for taking up charm testing.

The current ftests (lp:juju/ftests) are basically shell scripts that correspond  
to what's being specified here for a test case, although they don't bother with 
teardown or charm-retrieval abstractions.

http://bazaar.launchpad.net/~juju/juju/ftests/view/head:/tests/ec2-wordpress/run.sh

Given this spec, I think we could incorporate those tests directly into the 
example charms for functional test runs.
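
For flavor, the shape of such a test ported into a charm's tests/ directory 
might look roughly like this (Python rather than the ftests' shell, purely 
illustrative; the service names, timeout, and success check are assumptions, 
only the juju subcommands are real):

    #!/usr/bin/env python
    import subprocess
    import time

    def juju(*args):
        # Thin wrapper over the juju CLI; raises on non-zero exit.
        subprocess.check_call(("juju",) + args)

    juju("deploy", "mysql")
    juju("deploy", "wordpress")
    juju("add-relation", "wordpress", "mysql")
    juju("expose", "wordpress")

    # Poll status until units report started, instead of sleeping blindly.
    deadline = time.time() + 15 * 60
    while True:
        status = subprocess.check_output(("juju", "status")).decode()
        if status.count("started") >= 2:  # crude: both units came up
            break
        if time.time() > deadline:
            raise SystemExit("timed out waiting for units to start")
        time.sleep(30)

    # A real test would now fetch the exposed wordpress URL and assert
    # on the response; like the current ftests, no teardown is attempted.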

Comments on the spec:

Re Generic Tests

There's an additional class of automated tests that could verify charm 
functionality generically. If we take any given charm and establish its 
dependencies and its clients (reverse dependencies), we can assemble a series 
of environments in which the charm, its minimal dependencies, and their 
relations are established, iterating the tests across its possible clients and 
their relations. The same logic of watching the state for an inactive time 
period (aka steady state) would allow for some basic sanity verification.
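
Concretely, I'd expect the watching logic to look something like this, 
treating "juju status output unchanged for a quiet period" as a proxy for an 
inactive environment (the intervals are arbitrary):

    import subprocess
    import time

    def wait_for_steady_state(quiet_period=120, timeout=1800, poll=15):
        """Return the status snapshot once `juju status` output has been
        unchanged for quiet_period seconds; raise if timeout elapses."""
        deadline = time.time() + timeout
        last_snapshot, unchanged_since = None, time.time()
        while time.time() < deadline:
            snapshot = subprocess.check_output(("juju", "status")).decode()
            if snapshot != last_snapshot:
                last_snapshot, unchanged_since = snapshot, time.time()
            elif time.time() - unchanged_since >= quiet_period:
                return snapshot
            time.sleep(poll)
        raise RuntimeError("environment never reached a steady state")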

Just to get a better sense of what the graph might look like, I tossed together 
this dot rendering of the entire charm universe.

http://kapilt.com/files/charm-graph.png
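
Roughly, such a graph can be assembled by walking a local checkout of the 
charms, reading each metadata.yaml, and joining required interfaces to their 
providers; a sketch (the directory layout and PyYAML usage here are 
illustrative):

    import glob
    import os

    import yaml  # PyYAML

    def charm_graph(charm_dirs):
        """Emit dot source linking each charm's required interfaces to
        the charms that provide them, based on metadata.yaml contents."""
        provides, requires = {}, {}
        for path in charm_dirs:
            with open(os.path.join(path, "metadata.yaml")) as f:
                meta = yaml.safe_load(f)
            for rel in (meta.get("provides") or {}).values():
                provides.setdefault(rel["interface"], []).append(meta["name"])
            for rel in (meta.get("requires") or {}).values():
                requires.setdefault(rel["interface"], []).append(meta["name"])
        lines = ["digraph charms {"]
        for iface, clients in requires.items():
            for client in clients:
                for server in provides.get(iface, []):
                    lines.append('  "%s" -> "%s" [label="%s"];'
                                 % (client, server, iface))
        lines.append("}")
        return "\n".join(lines)

    print(charm_graph(glob.glob("charms/*")))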

Re Charm-Specific Tests

This looks good to me. I think the teardown, at least with respect to the 
environment, can be automated; the charm tests just need to clean out any local 
state. A useful automation for the tests would be running a verification script 
directly on a given unit, rather than remotely poking it from the test runner.

Is it intended that the tests run as a non-root user inside of a container, or 
directly on the host?
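
Something like the following would do for on-unit verification, assuming 
juju ssh passes a trailing command through to the unit the way plain ssh does; 
the staging path, use of sudo, and script name are placeholders pending that 
root/container question:

    import subprocess

    def verify_on_unit(unit, script):
        """Copy a verification script onto a unit and execute it there,
        instead of poking the service remotely from the test runner."""
        target = "/tmp/charm-verify.sh"  # hypothetical staging path
        subprocess.check_call(("juju", "scp", script,
                               "%s:%s" % (unit, target)))
        subprocess.check_call(("juju", "ssh", unit, "sudo", "sh", target))

    verify_on_unit("wordpress/0", "tests/verify-wordpress.sh")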

Re Output

It might be outside the scope, but capturing the unit log files on failure 
would be helpful when debugging automated test runs. 
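
Something like this, as a best-effort step in the runner's failure path; the 
remote log location here is a guess and would need to match wherever the unit 
agent actually writes:

    import subprocess

    def collect_unit_logs(units, dest="."):
        """Pull each unit's juju logs back to the runner after a failed
        run, for post-mortem debugging."""
        for unit in units:
            # subprocess.call (not check_call): log capture must not
            # mask the original test failure.
            subprocess.call(
                ("juju", "scp", "%s:/var/log/juju/*.log" % unit, dest))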



One additional concern is that this piece:

"""
There's a special sub-command of juju, ``deploy-previous``, which will
deploy the last successfully tested charm instead of the one from the
current delta. This will allow testing upgrade-charm.
"""

implies some additional infrastructure, at a minimum a test database recording 
test runs against charm versions.

It exposes a larger question that's largely unanswered here, namely that charms 
are typically deployed and tested against a charm graph, with each charm 
versioned independently. Successful runs are against a versioned graph. 
Maintaining the goal of identifying which new charm revision breaks a subset of 
the graph requires gating/ordering the processing of charm changes by the time 
of the changes across charms; otherwise things like deploy-previous may not 
work because of other changes in the charm graph.
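
For illustration, the minimal recording side might look like this (SQLite 
purely as an example; the schema is a sketch). Note it versions individual 
charms only, not the surrounding graph, which is exactly the gap:

    import sqlite3

    db = sqlite3.connect("charm-tests.db")
    db.execute("""CREATE TABLE IF NOT EXISTS runs (
        charm TEXT NOT NULL,
        revision INTEGER NOT NULL,
        passed INTEGER NOT NULL,
        run_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")

    def record_run(charm, revision, passed):
        """Record one test run against a specific charm revision."""
        db.execute(
            "INSERT INTO runs (charm, revision, passed) VALUES (?, ?, ?)",
            (charm, revision, int(passed)))
        db.commit()

    def last_good_revision(charm):
        """What deploy-previous would consult: newest passing revision."""
        row = db.execute(
            "SELECT revision FROM runs WHERE charm = ? AND passed = 1"
            " ORDER BY run_at DESC LIMIT 1", (charm,)).fetchone()
        return row[0] if row else None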

cheers,

Kapil


