[merge] doc how to use new test features

Martin Pool mbp at sourcefrog.net
Wed May 2 12:43:47 BST 2007


On 4/27/07, Robert Collins <robertc at robertcollins.net> wrote:
> On Thu, 2007-04-26 at 14:53 +1000, Martin Pool wrote:
> > === modified file 'HACKING'
> > --- HACKING   2007-04-24 14:19:24 +0000
> > +++ HACKING   2007-04-26 04:50:34 +0000
> > @@ -434,6 +434,50 @@
> >    __ http://docs.python.org/lib/module-doctest.html
> >
> >
> > +
> > +Skipping tests and test requirements
> > +------------------------------------
> > +
> > +In our enhancements to unittest we allow for some additional results
> > +beyond just success or failure.
> > +
> > +If a test can't be run, it can say that it's skipped.  This is typically
> > +used in parameterized tests - for example if a transport doesn't support
> > +setting permissions, we'll skip the tests relating to that.  Skipped
> > +tests are appropriate when there's just no possibility that the test will
> > +ever run in this situation, and nothing either developers or users can do
> > +about it.  ::
>
> Actually, I disagree with this. Skipping is for things that *can* be
> fixed. Things that can't be fixed should not show any output at all,
> as it's just noise.

Well, I'm glad I posted it then, because that's not how it's used at
present.  Many tests skip in just the way I described.  I'm happy if
we settle on what the policy is and just note "but some existing code
does xyz."

> > +    try:
> > +        return self.branch_format.initialize(repo.bzrdir)
> > +    except errors.UninitializableFormat:
> > +        raise tests.TestSkipped('Uninitializable branch format')
>
> For example, the above snippet is clearly fixable: sit down and write
> the code to fix it.

Well, in a pedantic sense almost any skip is fixable, since it runs on
a general-purpose programmable machine.  I'm more interested in whether
we expect people to actually fix it, or whether they must fix it before
merging.  In this case it could be an old, deprecated format, and it
would probably not be a good use of time to write the code.

> > +Known failures are when a test exists but we know it currently doesn't
> > +work, allowing the test suite to still pass.  These should be used with
> > +care; we don't want a proliferation of quietly broken tests.  It might be
> > +appropriate to use them if you've committed a test for a bug but not the
> > +fix for it, or if something works on Unix but not on Windows.
>
> +1 with the skipping thing adjusted - Rob

So if I understand correctly (rough sketch of each below):

 silently return - this test just doesn't make sense in this case
 skip - can't run this test yet because of a Bazaar limitation; plus,
   for historical reasons, this is used for other cases
 known failure - can run this test, but it fails because of a Bazaar
   bug/limitation
 missing feature - can't run this test because of an environmental
   limit; it could run and pass on another machine, or if more software
   were installed, or something similar
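
To make that concrete, here's roughly how I'd expect each outcome to
look in a test method.  TestSkipped is what the suite already raises;
KnownFailure and UnavailableFeature are only the names I'm assuming
for the other two, and the test bodies are made up, so treat this as a
sketch of the idea rather than settled API:

    from bzrlib import tests

    class TestOutcomeExamples(tests.TestCase):

        def test_silently_return(self):
            # The scenario doesn't make sense for this class, so just
            # return quietly - it counts as an ordinary pass and
            # produces no output.
            return

        def test_skip(self):
            # Can't run yet because of a Bazaar limitation.
            raise tests.TestSkipped('transport does not support'
                                    ' setting permissions')

        def test_known_failure(self):
            # The test runs, but trips over a known Bazaar bug;
            # KnownFailure is the exception name I'm assuming here.
            try:
                self.assertEqual('expected', 'what the bug gives us')
            except AssertionError:
                raise tests.KnownFailure('fails until the bug is fixed')

        def test_missing_feature(self):
            # Environmental limit: this could pass on another machine
            # with the extra software installed; UnavailableFeature is
            # again an assumed name.
            raise tests.UnavailableFeature('paramiko not installed')

The exact spellings aren't the point; the point is that each of the
four outcomes reports itself to the test runner in a distinct way.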

This raises the question of when it's acceptable to have tests in any
of these cases.  Will we merge code with known failures?  Are we going
to try to drive the skip count to 0?

-- 
Martin


