Interpreting test results across test runs

Marc Tardif marc at
Tue Nov 16 20:01:23 UTC 2010

* Zygmunt Krynicki <zygmunt.krynicki at> [2010-11-12 00:02 +0100]:
> W dniu 03.11.2010 17:22, Marc Tardif pisze:
> > My question is: can we make any reasonable assumptions about the tests
> > that were not run? This can be a matter of opinion where one extreme
> > might not make any assumptions at all, whereas another extreme might
> > assume that test results remain the same until proven otherwise. So,
> > I'm calling for your opinions on what you consider is reasonable.
> Unless you have reliable information on how to handle such a condition in
> the test case meta-data *and* can sufficiently guarantee that the
> meta-data is accurate and up-to-date, then you should do very little more
> than notify the user that the particular test was not run (or not
> present in the test result data, I don't know how you handle that part).

I'm not so much concerned about implementation details, so please disregard
any "handling" part. My question is purely conceptual when interpreting
test results across multiple test runs. I'm simply wondering what kind of
assumptions are reasonable in order to represent this information in a way
that matches user expectations.

Chris Gregan and his team, Massimo in particular, often hit this use case,
where running only a single test should essentially inherit the results
from the previous test run. This is probably a side effect of having to run
manual tests, because it's very resource intensive to run all the tests
again.

In addition to this use case, I believe that this is actually quite common
when running automated unit tests. For example, if I just run the tests for
a particular module, I think it's perfectly reasonable to assume that all
the other tests still have the same results until proven otherwise, i.e.
until running the whole test suite again.
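To make the idea concrete, here is a minimal sketch of that assumption in
code. The data model is entirely hypothetical (test name mapped to an
outcome and a timestamp); the point is only that a partial run overrides
the previous results for the tests it actually ran, while everything else
carries forward:

```python
from datetime import datetime

def merge_results(runs):
    """Merge test runs into a single view, assuming each test's last
    known result holds until a newer run reports otherwise.

    Hypothetical data model: each run is a dict mapping a test name
    to an (outcome, timestamp) pair.
    """
    latest = {}
    for run in runs:
        for test, (outcome, when) in run.items():
            # Keep the most recent result we have seen for each test.
            if test not in latest or when > latest[test][1]:
                latest[test] = (outcome, when)
    return latest

# A full run, then a partial re-run covering only one module's test.
full_run = {
    "module_a.test_parse": ("pass", datetime(2010, 11, 1)),
    "module_b.test_save": ("fail", datetime(2010, 11, 1)),
}
partial_run = {
    "module_b.test_save": ("pass", datetime(2010, 11, 15)),
}

merged = merge_results([full_run, partial_run])
# module_a.test_parse keeps its old result; module_b.test_save is updated.
```

Under this interpretation, the merged view always shows a result for every
known test, with the partial run only refreshing what it touched.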

Furthermore, I also think it's helpful to make this assumption, that test
results remain the same until proven otherwise, when reporting test results.
When I look at a project, I want to see all the latest test results even
though they might not have all been run at the same time.

Marc Tardif <marc.tardif at>
Freenode: cr3, Jabber: cr3 at
1024D/72679CAD 09A9 D871 F7C4 A18F AC08 674D 2B73 740C 7267 9CAD

More information about the Ubuntu-qa mailing list