Rev 2758: (mbp) add TestNotApplicable and multiply_tests_from_modules in file:///home/pqm/archives/thelove/bzr/%2Btrunk/

Canonical.com Patch Queue Manager pqm at pqm.ubuntu.com
Tue Aug 28 08:52:03 BST 2007


At file:///home/pqm/archives/thelove/bzr/%2Btrunk/

------------------------------------------------------------
revno: 2758
revision-id: pqm at pqm.ubuntu.com-20070828075200-y6ww43ym8xt0fv21
parent: pqm at pqm.ubuntu.com-20070828072028-0fvw3cajmigl3h2j
parent: mbp at sourcefrog.net-20070828065920-gex43lnkotnlusue
committer: Canonical.com Patch Queue Manager <pqm at pqm.ubuntu.com>
branch nick: +trunk
timestamp: Tue 2007-08-28 08:52:00 +0100
message:
  (mbp) add TestNotApplicable and multiply_tests_from_modules
modified:
  NEWS                           NEWS-20050323055033-4e00b5db738777ff
  bzrlib/tests/__init__.py       selftest.py-20050531073622-8d0e3c8845c97a64
  bzrlib/tests/test_selftest.py  test_selftest.py-20051202044319-c110a115d8c0456a
  doc/developers/HACKING.txt     HACKING-20050805200004-2a5dc975d870f78c
    ------------------------------------------------------------
    revno: 2729.1.6
    merged: mbp at sourcefrog.net-20070828065920-gex43lnkotnlusue
    parent: mbp at sourcefrog.net-20070828065640-2mozn99tzfg2zcwp
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: test-scenarios
    timestamp: Tue 2007-08-28 16:59:20 +1000
    message:
      Update docs to say xfail does not cause overall failure in default test runs, which is true at the moment
    ------------------------------------------------------------
    revno: 2729.1.5
    merged: mbp at sourcefrog.net-20070828065640-2mozn99tzfg2zcwp
    parent: mbp at sourcefrog.net-20070828020209-gbhb0onl14e1fjty
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: test-scenarios
    timestamp: Tue 2007-08-28 16:56:40 +1000
    message:
      Update report_not_applicable for _testTimeString fix
    ------------------------------------------------------------
    revno: 2729.1.4
    merged: mbp at sourcefrog.net-20070828020209-gbhb0onl14e1fjty
    parent: mbp at sourcefrog.net-20070821050305-64tehj3dk3d4y1px
    parent: pqm at pqm.ubuntu.com-20070828012914-ghechpk19ejwk5um
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: test-scenarios
    timestamp: Tue 2007-08-28 12:02:09 +1000
    message:
      merge trunk
    ------------------------------------------------------------
    revno: 2729.1.3
    merged: mbp at sourcefrog.net-20070821050305-64tehj3dk3d4y1px
    parent: mbp at sourcefrog.net-20070821030216-2bfghdwpyk2se1of
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: test-scenarios
    timestamp: Tue 2007-08-21 15:03:05 +1000
    message:
      TestScenarioAdapter must be a list, not an iter
    ------------------------------------------------------------
    revno: 2729.1.2
    merged: mbp at sourcefrog.net-20070821030216-2bfghdwpyk2se1of
    parent: mbp at sourcefrog.net-20070820091750-n8fz1x3bml1r95oq
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: test-scenarios
    timestamp: Tue 2007-08-21 13:02:16 +1000
    message:
      Add new multiply_tests_from_modules to give a simpler interface to test scenarios
    ------------------------------------------------------------
    revno: 2729.1.1
    merged: mbp at sourcefrog.net-20070820091750-n8fz1x3bml1r95oq
    parent: pqm at pqm.ubuntu.com-20070820064929-dee3a9wfukd1qwfb
    committer: Martin Pool <mbp at sourcefrog.net>
    branch nick: test-scenarios
    timestamp: Mon 2007-08-20 19:17:50 +1000
    message:
      Add TestNotApplicable exception and handling of it; document test parameterization
=== modified file 'NEWS'
--- a/NEWS	2007-08-28 07:20:28 +0000
+++ b/NEWS	2007-08-28 07:52:00 +0000
@@ -70,6 +70,14 @@
    * ``Transport.should_cache`` has been removed.  It was not called in the
      previous release.  (Martin Pool)
 
+  TESTING:
+
+   * Tests may now raise TestNotApplicable to indicate they shouldn't be 
+     run in a particular scenario.  (Martin Pool)
+
+   * New function multiply_tests_from_modules to give a simpler interface
+     to test parameterization.  (Martin Pool, Robert Collins)
+
    * NULL_REVISION is returned to indicate the null revision, not None.
      (Aaron Bentley)
 

=== modified file 'bzrlib/tests/__init__.py'
--- a/bzrlib/tests/__init__.py	2007-08-28 05:18:17 +0000
+++ b/bzrlib/tests/__init__.py	2007-08-28 07:52:00 +0000
@@ -107,16 +107,18 @@
 
 MODULES_TO_TEST = []
 MODULES_TO_DOCTEST = [
-                      bzrlib.timestamp,
-                      bzrlib.errors,
-                      bzrlib.export,
-                      bzrlib.inventory,
-                      bzrlib.iterablefile,
-                      bzrlib.lockdir,
-                      bzrlib.merge3,
-                      bzrlib.option,
-                      bzrlib.store,
-                      ]
+        bzrlib.timestamp,
+        bzrlib.errors,
+        bzrlib.export,
+        bzrlib.inventory,
+        bzrlib.iterablefile,
+        bzrlib.lockdir,
+        bzrlib.merge3,
+        bzrlib.option,
+        bzrlib.store,
+        # quoted to avoid module-loading circularity
+        'bzrlib.tests',
+        ]
 
 
 def packages_to_test():
@@ -204,6 +206,7 @@
         self.failure_count = 0
         self.known_failure_count = 0
         self.skip_count = 0
+        self.not_applicable_count = 0
         self.unsupported = {}
         self.count = 0
         self._overall_start_time = time.time()
@@ -327,9 +330,12 @@
         self.report_unsupported(test, feature)
 
     def _addSkipped(self, test, skip_excinfo):
-        self.report_skip(test, skip_excinfo)
-        # seems best to treat this as success from point-of-view of
-        # unittest -- it actually does nothing so it barely matters :)
+        if isinstance(skip_excinfo[1], TestNotApplicable):
+            self.not_applicable_count += 1
+            self.report_not_applicable(test, skip_excinfo)
+        else:
+            self.skip_count += 1
+            self.report_skip(test, skip_excinfo)
         try:
             test.tearDown()
         except KeyboardInterrupt:
@@ -337,6 +343,8 @@
         except:
             self.addError(test, test.__exc_info())
         else:
+            # seems best to treat this as success from point-of-view of unittest
+            # -- it actually does nothing so it barely matters :)
             unittest.TestResult.addSuccess(self, test)
 
     def printErrorList(self, flavour, errors):
@@ -370,7 +378,6 @@
         return self.wasSuccessful()
 
 
-
 class TextTestResult(ExtendedTestResult):
     """Displays progress and results of tests in text form"""
 
@@ -441,20 +448,10 @@
             self._test_description(test), err[1])
 
     def report_skip(self, test, skip_excinfo):
-        self.skip_count += 1
-        if False:
-            # at the moment these are mostly not things we can fix
-            # and so they just produce stipple; use the verbose reporter
-            # to see them.
-            if False:
-                # show test and reason for skip
-                self.pb.note('SKIP: %s\n    %s\n', 
-                    self._shortened_test_description(test),
-                    skip_excinfo[1])
-            else:
-                # since the class name was left behind in the still-visible
-                # progress bar...
-                self.pb.note('SKIP: %s', skip_excinfo[1])
+        pass
+
+    def report_not_applicable(self, test, skip_excinfo):
+        pass
 
     def report_unsupported(self, test, feature):
         """test cannot be run because feature is missing."""
@@ -520,11 +517,15 @@
         self.stream.flush()
 
     def report_skip(self, test, skip_excinfo):
-        self.skip_count += 1
         self.stream.writeln(' SKIP %s\n%s'
                 % (self._testTimeString(test),
                    self._error_summary(skip_excinfo)))
 
+    def report_not_applicable(self, test, skip_excinfo):
+        self.stream.writeln('  N/A %s\n%s'
+                % (self._testTimeString(test),
+                   self._error_summary(skip_excinfo)))
+
     def report_unsupported(self, test, feature):
         """test cannot be run because feature is missing."""
         self.stream.writeln("NODEP %s\n    The feature '%s' is not available."
@@ -629,6 +630,15 @@
     """Indicates that a test was intentionally skipped, rather than failing."""
 
 
+class TestNotApplicable(TestSkipped):
+    """A test is not applicable to the situation where it was run.
+
+    This is normally raised only by parameterized tests, if they find that 
+    the instance they're constructed upon does not support one aspect 
+    of its interface.
+    """
+
+
 class KnownFailure(AssertionError):
     """Indicates that a test failed in a precisely expected manner.
 
@@ -2505,6 +2515,47 @@
     return suite
 
 
+def multiply_tests_from_modules(module_name_list, scenario_iter):
+    """Adapt all tests in some given modules to given scenarios.
+
+    This is the recommended public interface for test parameterization.
+    Typically the test_suite() method for a per-implementation test
+    suite will call multiply_tests_from_modules and return the 
+    result.
+
+    :param module_name_list: List of fully-qualified names of test
+        modules.
+    :param scenario_iter: Iterable of pairs of (scenario_name, 
+        scenario_param_dict).
+
+    This returns a new TestSuite containing the cross product of
+    all the tests in all the modules, each repeated for each scenario.
+    Each test is adapted by adding the scenario name at the end 
+    of its name, and updating the test object's __dict__ with the
+    scenario_param_dict.
+
+    >>> r = multiply_tests_from_modules(
+    ...     ['bzrlib.tests.test_sampler'],
+    ...     [('one', dict(param=1)), 
+    ...      ('two', dict(param=2))])
+    >>> tests = list(iter_suite_tests(r))
+    >>> len(tests)
+    2
+    >>> tests[0].id()
+    'bzrlib.tests.test_sampler.DemoTest.test_nothing(one)'
+    >>> tests[0].param
+    1
+    >>> tests[1].param
+    2
+    """
+    loader = TestLoader()
+    suite = TestSuite()
+    adapter = TestScenarioApplier()
+    adapter.scenarios = list(scenario_iter)
+    adapt_modules(module_name_list, adapter, loader, suite)
+    return suite
+
+
 def adapt_modules(mods_list, adapter, loader, suite):
     """Adapt the modules in mods_list using adapter and add to suite."""
     for test in iter_suite_tests(loader.loadTestsFromModuleNames(mods_list)):

=== modified file 'bzrlib/tests/test_selftest.py'
--- a/bzrlib/tests/test_selftest.py	2007-08-21 03:53:07 +0000
+++ b/bzrlib/tests/test_selftest.py	2007-08-28 02:02:09 +0000
@@ -48,6 +48,7 @@
                           TestCaseInTempDir,
                           TestCaseWithMemoryTransport,
                           TestCaseWithTransport,
+                          TestNotApplicable,
                           TestSkipped,
                           TestSuite,
                           TestUtil,
@@ -1135,6 +1136,27 @@
         # Check if cleanup was called the right number of times.
         self.assertEqual(0, test.counter)
 
+    def test_not_applicable(self):
+        # run a test that is skipped because it's not applicable
+        def not_applicable_test():
+            from bzrlib.tests import TestNotApplicable
+            raise TestNotApplicable('this test never runs')
+        out = StringIO()
+        runner = TextTestRunner(stream=out, verbosity=2)
+        test = unittest.FunctionTestCase(not_applicable_test)
+        result = self.run_test_runner(runner, test)
+        self._log_file.write(out.getvalue())
+        self.assertTrue(result.wasSuccessful())
+        self.assertTrue(result.wasStrictlySuccessful())
+        self.assertContainsRe(out.getvalue(),
+                r'(?m)not_applicable_test   * N/A')
+        self.assertContainsRe(out.getvalue(),
+                r'(?m)^    this test never runs')
+
+    def test_not_applicable_demo(self):
+        # just so you can see it in the test output
+        raise TestNotApplicable('this test is just a demonstration')
+
     def test_unsupported_features_listed(self):
         """When unsupported features are encountered they are detailed."""
         class Feature1(Feature):

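The ``_addSkipped`` change in the __init__.py hunk above dispatches on the exception type to keep separate counters. Stripped of the unittest plumbing, that dispatch can be sketched standalone; the class names here mirror bzrlib's, but this is a simplified stand-in, not the real ExtendedTestResult:

```python
class TestSkipped(Exception):
    """Indicates that a test was intentionally skipped."""


class TestNotApplicable(TestSkipped):
    """The test does not apply to the scenario in which it was run."""


class SkipCounter:
    """Minimal stand-in for the counting done by ExtendedTestResult."""

    def __init__(self):
        self.skip_count = 0
        self.not_applicable_count = 0

    def add_skipped(self, exc):
        # Check the more specific subclass first: TestNotApplicable is a
        # TestSkipped, so the isinstance order matters here.
        if isinstance(exc, TestNotApplicable):
            self.not_applicable_count += 1
        else:
            self.skip_count += 1


counter = SkipCounter()
counter.add_skipped(TestSkipped('no reason to run'))
counter.add_skipped(TestNotApplicable('interface aspect unsupported'))
```

Because TestNotApplicable subclasses TestSkipped, any result object that does not know about the new exception still treats it as an ordinary skip, which is what makes the change backward compatible.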
=== modified file 'doc/developers/HACKING.txt'
--- a/doc/developers/HACKING.txt	2007-08-09 05:56:20 +0000
+++ b/doc/developers/HACKING.txt	2007-08-28 06:59:20 +0000
@@ -349,24 +349,84 @@
 test was not run, rather than just returning which makes it look as if it
 was run and passed.
 
-A subtly different case is a test that should run, but can't run in the
-current environment.  This covers tests that can only run in particular
-operating systems or locales, or that depend on external libraries.  Here
-we want to inform the user that they didn't get full test coverage, but
-they possibly could if they installed more libraries.  These are expressed
-as a dependency on a feature so we can summarise them, and so that the
-test for the feature is done only once.  (For historical reasons, as of
-May 2007 many cases that should depend on features currently raise
-TestSkipped.)  The typical use is::
+Several different cases are distinguished:
+
+TestSkipped
+        Generic skip; the only type that was present up to bzr 0.18.
+
+TestNotApplicable
+        The test doesn't apply to the parameters with which it was run.
+        This is typically used when the test is being applied to all
+        implementations of an interface, but some aspects of the interface
+        are optional and not present in particular concrete
+        implementations.  (Some tests that should raise this currently
+        either silently return or raise TestSkipped.)  Another option is
+        to use more precise parameterization to avoid generating the test
+        at all.
+
+TestPlatformLimit
+        **(Not implemented yet)**
+        The test can't be run because of an inherent limitation of the
+        environment, such as not having symlinks or not supporting
+        unicode.
+
+UnavailableFeature
+        The test can't be run because a dependency (typically a Python
+        library) is not available in the test environment.  These
+        are in general things that the person running the test could fix 
+        by installing the library.  It's OK if some of these occur when 
+        an end user runs the tests or if we're specifically testing in a
+limited environment, but a full test run should never see them.
+
+KnownFailure
+        The test exists but is known to fail, for example because the 
+        code to fix it hasn't been run yet.  Raising this allows 
+        you to distinguish these failures from the ones that are not 
+        expected to fail.  This could be conditionally raised if something
+        is broken on some platforms but not on others.
+
+We plan to support three modes for running the test suite to control the
+interpretation of these results.  Strict mode is for use in situations
+like merges to the mainline and releases where we want to make sure that
+everything that can be tested has been tested.  Lax mode is for use by
+developers who want to temporarily tolerate some known failures.  The
+default behaviour is obtained by ``bzr selftest`` with no options, and
+also (if possible) by running under another unittest harness.
+
+======================= ======= ======= ========
+result                  strict  default lax
+======================= ======= ======= ========
+TestSkipped             pass    pass    pass
+TestNotApplicable       pass    pass    pass
+TestPlatformLimit       pass    pass    pass
+TestDependencyMissing   fail    pass    pass
+KnownFailure            fail    pass    pass
+======================= ======= ======= ========
+     
+
+Test feature dependencies
+-------------------------
+
+Rather than manually checking the environment in each test, a test class
+can declare its dependence on some test features.  The feature objects are
+checked only once for each run of the whole test suite.
+
+For historical reasons, as of May 2007 many cases that should depend on
+features currently raise TestSkipped.
+
+::
 
     class TestStrace(TestCaseWithTransport):
 
         _test_needs_features = [StraceFeature]
 
-which means all tests in this class need the feature.  The feature itself
+This means all tests in this class need the feature.  The feature itself
 should provide a ``_probe`` method which is called once to determine if
 it's available.
 
+These should generally be equivalent to TestDependencyMissing or, in
+some cases, TestPlatformLimit.
+
 
 Known failures
 --------------
@@ -410,6 +470,58 @@
 they're displayed or handled.
 
 
+Interface implementation testing and test scenarios
+---------------------------------------------------
+
+There are several cases in Bazaar of multiple implementations of a common 
+conceptual interface.  ("Conceptual" because 
+it's not necessary for all the implementations to share a base class,
+though they often do.)  Examples include transports and the working tree,
+branch and repository classes. 
+
+In these cases we want to make sure that every implementation correctly
+fulfils the interface requirements.  For example, every Transport should
+support the ``has()`` and ``get()`` and ``clone()`` methods.  We have a
+sub-suite of tests in ``test_transport_implementations``.  (Most
+per-implementation tests are in submodules of ``bzrlib.tests``, but not
+the transport tests at the moment.)  
+
+These tests are repeated for each registered Transport, by generating a
+new TestCase instance for the cross product of test methods and transport
+implementations.  As each test runs, it has ``transport_class`` and
+``transport_server`` set to the class it should test.  Most tests don't
+access these directly, but rather use ``self.get_transport`` which returns
+a transport of the appropriate type.
+
+The goal is to run per-implementation only tests that relate to that
+particular interface.  Sometimes we discover a bug elsewhere that happens
+with only one particular transport.  Once it's isolated, we can consider 
+whether a test should be added for that particular implementation,
+or for all implementations of the interface.
+
+The multiplication of tests for different implementations is normally 
+accomplished by overriding the ``test_suite`` function used to load 
+tests from a module.  This function typically loads all the tests,
+then applies a TestProviderAdapter to them, which generates a longer 
+suite containing all the test variations.
+
+
+Test scenarios
+--------------
+
+Some utilities are provided for generating variations of tests.  This can
+be used for per-implementation tests, or other cases where the same test
+code needs to run several times on different scenarios.
+
+The general approach is to define a class that provides test methods,
+which depend on attributes of the test object being pre-set with the
+values to which the test should be applied.  The test suite should then
+also provide a list of scenarios in which to run the tests.
+
+Typically ``multiply_tests_from_modules`` should be called from the test
+module's ``test_suite`` function.
+
+
 Essential Domain Classes
 ########################
 

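As a concrete illustration of the TestNotApplicable case the HACKING.txt text describes, here is a sketch using only the standard library. Python's ``unittest.SkipTest`` stands in for bzrlib's TestSkipped, and ``supports_symlinks`` is a hypothetical scenario attribute invented for this example, not a real bzrlib parameter:

```python
import unittest


class TestNotApplicable(unittest.SkipTest):
    """Stand-in for bzrlib.tests.TestNotApplicable."""


class TestOptionalAspect(unittest.TestCase):
    # In a real parameterized run this attribute would be injected per
    # scenario; the default here just makes the example self-contained.
    supports_symlinks = False

    def test_symlink_round_trip(self):
        if not self.supports_symlinks:
            raise TestNotApplicable('this implementation has no symlinks')
        # Symlink-specific assertions would go here for scenarios that
        # do support them.


result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TestOptionalAspect).run(result)
```

The run records one skipped test and still counts as successful, matching the table above where TestNotApplicable passes in strict, default, and lax modes alike.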


