Charm Testing Spec Proposal

Gustavo Niemeyer gustavo.niemeyer at canonical.com
Wed Feb 1 21:57:06 UTC 2012


On Wed, Feb 1, 2012 at 19:11, Clint Byrum <clint at ubuntu.com> wrote:
> However, if I understand the problem correctly, each arrow is basically
> another exponential jump for each isolated graph.  So, testing haproxy
> would mean deploying with every app that provides an http interface,
> which is a lot.

Right. We can do a lot in the future, but we have to start somewhere.
Let's please keep that first iteration simple so that we can get
something we are all comfortable with in place before we move forward
to more advanced logic.
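
As a concrete (and entirely hypothetical) illustration of that first
iteration, the runner could pair each charm under test with a single
representative provider per interface instead of the full cross-product.
The provider table and names below are placeholders, not anything the
spec defines:

    # One representative provider per interface keeps the test matrix
    # linear in the number of interfaces, rather than exploding with
    # every charm that happens to provide "http". Names are illustrative.
    REPRESENTATIVE_PROVIDERS = {
        "http": "wordpress",
        "mysql": "mysql",
    }

    def deployments_for(charm, required_interfaces):
        """Yield (charm, provider) pairs for a single, bounded test run."""
        for interface in required_interfaces:
            provider = REPRESENTATIVE_PROVIDERS.get(interface)
            if provider is not None:
                yield (charm, provider)

    # Testing haproxy then means one deployment with one http provider:
    print(list(deployments_for("haproxy", ["http"])))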

>> This looks good to me. I think the teardown, at least wrt the environment, can
>> be automated; the charm tests just need to clean out any local state. A
>> useful automation for the tests would be running a verification script directly
>> on a given unit, rather than remotely poking it from the test runner.
>
> I want to be able to use a single environment and not destroy it with
> every test run. I'd also like to be able to re-use machines, though

The cleanup can still be automatic even then.
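
On the "verification script directly on a given unit" idea quoted above,
a rough sketch of how a runner might do that, assuming only the `juju
ssh <unit>` subcommand; the unit name and script path are placeholders:

    import subprocess

    def verify_on_unit(unit, script_path):
        """Stream a local verification script to the unit and run it there."""
        with open(script_path, "rb") as script:
            result = subprocess.run(["juju", "ssh", unit, "sh", "-s"],
                                    stdin=script)
        return result.returncode == 0

    # e.g. verify_on_unit("wordpress/0", "tests/verify.sh")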

> without any chroot/lxc support that seems like folly right now. The
> test runner is still going to be responsible for cleaning up after any
> charms that leave services lying around.

Right, so why do we need such a teardown in the test?

> I've added a blurb that says that the test runner may clean up services
> left behind and that tests *should* clean them up by themselves and
> extract any needed artifacts from the units before the test exits.

Why? We have to automate it anyway, so let's automate it and consistently
spare people the trouble.
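
A minimal sketch of what that automated, runner-side cleanup could look
like while still reusing the environment: snapshot the deployed services
before the test and destroy whatever is new afterwards. Treat the
`juju status --format json` output shape and `juju destroy-service`
here as assumptions about the CLI, not settled spec:

    import json
    import subprocess

    def deployed_services():
        out = subprocess.check_output(["juju", "status", "--format", "json"])
        return set(json.loads(out).get("services", {}))

    def run_with_cleanup(run_test):
        before = deployed_services()
        try:
            run_test()
        finally:
            # Tear down only what the test itself deployed.
            for service in deployed_services() - before:
                subprocess.check_call(["juju", "destroy-service", service])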

> The user they're running as is undefined, and no assumption is
> allowed there. Only that juju is in the path and that there may be
> some restrictions.

Yep, sounds good.

> Great idea. I think I'll leave it up to the test runner implementation,
> but just running debug-log into a file during the test seems like a
> simple way to achieve this.

+1
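
For the record, the "debug-log into a file" approach could be as small
as keeping the stream open for the duration of the run; `test-run.log`
is an arbitrary name:

    import subprocess

    def capture_debug_log(run_test, log_path="test-run.log"):
        """Keep `juju debug-log` streaming to a file while the test runs."""
        with open(log_path, "wb") as log:
            tail = subprocess.Popen(["juju", "debug-log"],
                                    stdout=log, stderr=log)
            try:
                run_test()
            finally:
                tail.terminate()
                tail.wait()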

> That's really an implementation detail, and I have a fairly simple idea
> on how to do that.

Can we please leave that for a follow up change in the spec so that we
can agree on the basics first?

> I think we should implement this basic, simple algorithm (test everything
> daily with the full delta set), and then iterate on it as we learn what
> breaks charms and how the debugging process goes. If we try to think
> through all the possibilities before we start, this is just never going
> to happen.

Agreed.
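
Sketched naively, that starting point is just a daily loop over the
delta; `changed_charms_since` and `run_charm_tests` are hypothetical
stand-ins for the store and runner plumbing:

    import datetime

    def daily_run(changed_charms_since, run_charm_tests):
        """Test every charm touched in the last day; iterate from here."""
        since = datetime.datetime.utcnow() - datetime.timedelta(days=1)
        return {charm: run_charm_tests(charm)
                for charm in changed_charms_since(since)}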

-- 
Gustavo Niemeyer
http://niemeyer.net
http://niemeyer.net/plus
http://niemeyer.net/twitter
http://niemeyer.net/blog

-- I'm not absolutely sure of anything.


