[rfc] add more possible test results

Vincent Ladeuil v.ladeuil+lp at free.fr
Fri Jul 13 10:34:49 BST 2007


>>>>> "mbp" == Martin Pool <mbp at sourcefrog.net> writes:

<snip/>

    mbp> +
    mbp> +TestSkipped
    mbp> +        Generic skip; the only type that was present up to bzr 0.18.
    mbp> +

If I understand correctly, this will become a base class that we
don't want to use directly anymore, and we will progressively
replace all existing occurrences in the actual code.
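
If so, I picture something like this (just a sketch to check my
understanding; the exact hierarchy is assumed, not taken from your
patch):

    # Sketch only, to check my reading of the proposal; hierarchy assumed.
    class TestSkipped(Exception):
        """Generic skip; kept as a base class rather than raised directly."""

    class TestNotApplicable(TestSkipped):
        """The test does not apply to the parameters it was run with."""

    class TestPlatformLimit(TestSkipped):
        """The environment inherently lacks something (symlinks, unicode...)."""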

    mbp> +TestNotApplicable
    mbp> +        The test doesn't apply to the parameters with which it was run.
    mbp> +        This is typically used when the test is being applied to all
    mbp> +        implementations of an interface, but some aspects of the interface
    mbp> +        are optional and not present in particular concrete
    mbp> +        implementations.  (Some tests that should raise this currently
    mbp> +        either silently return or raise TestSkipped.)  Another option is
    mbp> +        to use more precise parameterization to avoid generating the test
    mbp> +        at all.
    mbp> +

For example, all transport implementation tests that need a
writable transport will now (or will in the future) raise
TestNotApplicable instead of silently returning, won't they?
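
Something along these lines is what I mean (an illustrative sketch
only; the import path and helper names are assumed, not taken from
the patch):

    # Illustrative sketch; import path and helper names are assumed.
    from bzrlib.tests import TestCaseWithTransport, TestNotApplicable

    class TestTransportPut(TestCaseWithTransport):

        def test_put_bytes(self):
            t = self.get_transport()
            if t.is_readonly():
                # Today some of these tests silently return instead.
                raise TestNotApplicable('transport %r is not writable' % t)
            t.put_bytes('foo', 'some content\n')
            self.assertEqual('some content\n', t.get_bytes('foo'))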

    mbp> +TestPlatformLimit
    mbp> +        The test can't be run because of an inherent limitation of the
    mbp> +        environment, such as not having symlinks or not supporting
    mbp> +        unicode.
    mbp> +

OK.

    mbp> +TestDependencyMissing
    mbp> +        The test can't be run because a dependency (typically a Python
    mbp> +        library) is not available in the test environment.  These
    mbp> +        are in general things that the person running the test could fix 
    mbp> +        by installing the library.  It's OK if some of these occur when 
    mbp> +        an end user runs the tests or if we're specifically testing in a
    mbp> +        limited environment, but a full test should never see them.
    mbp> +

Hmm. I have a problem with this one. What guarantee do we have
that the code is tested both with and *without* the dependency?
The point of a full test run is to test... as much as possible;
I'm afraid that treating TestDependencyMissing as a failure will
tend to hide the bugs triggered in the absence of the dependency.
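
To make the worry concrete, take a made-up example (ParamikoFeature
is hypothetical; only the declaration style comes from the
proposal):

    from bzrlib.tests import TestCaseWithTransport

    ParamikoFeature = object()   # hypothetical; stands in for a real feature

    class TestSFTPTransport(TestCaseWithTransport):

        _test_needs_features = [ParamikoFeature]

        def test_connect(self):
            pass   # only ever runs when paramiko is installed

With paramiko missing, every such test reports
TestDependencyMissing, so whatever is supposed to degrade
gracefully *without* paramiko is only covered if separate tests
exercise that case explicitly.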

    mbp> +KnownFailure
    mbp> +        The test exists but is known to fail, for example because the 
    mbp> +        code to fix it hasn't been written yet.  Raising this allows 
    mbp> +        you to distinguish these failures from the ones that are not 
    mbp> +        expected to fail.  This could be conditionally raised if something
    mbp> +        is broken on some platforms but not on others.
    mbp> +

At first read I thought KnownFailure might be used when:

- I wrote a test exhibiting a bug,

- I wrote some tests specifying a desired behavior,

and in both cases I don't have time to fix or implement the
corresponding code.

Or should I just keep those in private branches as failing tests?

Otherwise, I'd like to see an example of a test that is known to
fail on one platform but not on another, and how you would use
KnownFailure there.
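
To be clearer about what I'm asking, is the intended usage
something like this? It's only my guess; the import path is assumed
and the test itself is contrived:

    import os
    import sys
    from bzrlib.tests import TestCaseInTempDir, KnownFailure   # path assumed

    class TestCaseOnlyRename(TestCaseInTempDir):

        def test_rename_changes_case(self):
            f = open('file', 'w')
            f.write('x')
            f.close()
            try:
                os.rename('file', 'FILE')
                self.failUnless('FILE' in os.listdir('.'))
                self.failIf('file' in os.listdir('.'))
            except (OSError, AssertionError):
                if sys.platform == 'win32':
                    # Expected to misbehave here; on other platforms
                    # this stays a real failure.
                    raise KnownFailure('case-only rename broken on win32')
                raise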

    mbp> +We plan to support three modes for running the test suite to control the
    mbp> +interpretation of these results.  Strict mode is for use in situations
    mbp> +like merges to the mainline and releases where we want to make sure that
    mbp> +everything that can be tested has been tested.  Lax mode is for use by
    mbp> +developers who want to temporarily tolerate some known failures.  The
    mbp> +default behaviour is obtained by ``bzr selftest`` with no options, and
    mbp> +also (if possible) by running under another unittest harness.
    mbp> +
    mbp> +======================= ======= ======= ========
    mbp> +result                  strict  default lax
    mbp> +======================= ======= ======= ========
    mbp> +TestSkipped             pass    pass    pass
    mbp> +TestNotApplicable       pass    pass    pass
    mbp> +TestPlatformLimit       pass    pass    pass
    mbp> +TestDependencyMissing   fail    pass    pass
    mbp> +KnownFailure            fail    fail    pass
    mbp> +======================= ======= ======= ========
    mbp> +     
    mbp> +
    mbp> +Test feature dependencies
    mbp> +-------------------------
    mbp> +
    mbp> +Rather than manually checking the environment in each test, a test class
    mbp> +can declare its dependence on some test features.  The feature objects are
    mbp> +checked only once for each run of the whole test suite.
    mbp> +
    mbp> +(For historical reasons, as of May 2007 many cases that should depend
    mbp> +on features currently raise TestSkipped.)
    mbp> +
    mbp> +::
 
    mbp>      class TestStrace(TestCaseWithTransport):
 
    mbp>          _test_needs_features = [StraceFeature]
 
    mbp> -which means all tests in this class need the feature.  The feature itself
    mbp> +This means all tests in this class need the feature.  The feature itself
    mbp>  should provide a ``_probe`` method which is called once to determine if
    mbp>  it's available.
 
I have mixed feelings about specifying properties at the class
level versus at the test level. Maybe this distinction (and the
rationale for each) should be explained better?

Sorry to be so vague; I can't put my finger on why I'm
uncomfortable, but I prefer speaking about it rather than staying
silent since you want to implement things.
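
For concreteness, here is my mental model of the feature side (a
rough sketch: only _probe, StraceFeature and the class-level
declaration come from your text, everything else is assumed):

    import os

    class Feature(object):

        _availability = None

        def available(self):
            # Probe once per test-suite run, then cache the answer.
            if self._availability is None:
                self._availability = self._probe()
            return self._availability

        def _probe(self):
            raise NotImplementedError(self._probe)

    class _StraceFeature(Feature):

        def _probe(self):
            # Assumed probe: is a strace executable on the PATH at all?
            for d in os.environ.get('PATH', '').split(os.pathsep):
                if os.path.exists(os.path.join(d, 'strace')):
                    return True
            return False

    StraceFeature = _StraceFeature()

A per-test equivalent would presumably call available() from inside
the test and raise the right exception itself, which is exactly the
duplication the class-level declaration avoids; maybe stating that
trade-off explicitly would address my discomfort.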

On the other hand, I'm all in favor of qualifying tests as much as
possible, both for possible failures as you plan to do here and for
linking tests to the features they verify. So don't let my
hesitations stop you from going ahead :)

      Vincent


