selftest performance: landing our code faster and developing quicker.
mbp at canonical.com
Mon Aug 31 05:59:29 BST 2009
2009/8/28 Vincent Ladeuil <v.ladeuil+lp at free.fr>:
> martin> If we could, for example, let people write blackbox
> martin> tests in something that looks like shell doctest, but
> martin> that's actually abstracted to be much faster than
> martin> running from the whole command line down, that would
> martin> be very cool.
> But I consider such a tool to be targeted at people who don't
> have the time to learn more about our test infrastructure, or a
> way to introduce them to more focused tests.
In part, but it's also something I think core team members would use -
in fact it has to be something that's useful to us, otherwise it will
always be a bit second class.
Many of the blackbox or UI oriented tests are hard to read or maintain,
and would be better if they could be written in a doctest-like style,
for example the interactive merge tests.
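To make the idea concrete, here is a minimal sketch of what such a
shell-doctest runner could look like. This is purely illustrative: the
`run_script` function, the transcript format, and the in-process
`fake_bzr` command are all assumptions for the sketch, not an existing
bzrlib API.

```python
import shlex

def run_script(script, run_command):
    """Run a shell-transcript-style test script in-process.

    Lines starting with '$ ' are commands; the non-'$' lines that
    follow each command are its expected stdout.  run_command takes
    an argv list and returns the stdout text, so no real shell or
    subprocess is ever spawned.
    """
    lines = script.strip().splitlines()
    i = 0
    while i < len(lines):
        line = lines[i].strip()
        assert line.startswith("$ "), "expected a command line: %r" % line
        argv = shlex.split(line[2:])
        expected = []
        i += 1
        # Collect expected output until the next '$ ' command line.
        while i < len(lines) and not lines[i].strip().startswith("$ "):
            expected.append(lines[i].strip())
            i += 1
        actual = [l.strip() for l in run_command(argv).splitlines()]
        if actual != expected:
            raise AssertionError(
                "command %r: expected %r, got %r" % (argv, expected, actual))

# A hypothetical in-process command, standing in for the real cmdline
# dispatch so the test runs far faster than forking 'bzr'.
def fake_bzr(argv):
    if argv[:2] == ["bzr", "st"]:
        return "unknown:\n  hello.txt\n"
    raise ValueError("unhandled command: %r" % argv)

run_script("""
$ bzr st
unknown:
  hello.txt
""", fake_bzr)
```

The point is that the test *reads* like a shell transcript, which is
what makes blackbox tests easy to review, while execution stays
in-process and fast.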
> martin> Python gives fairly weak assurance that interfaces
> martin> actually match up, so I think it's relatively more
> martin> important that we do test things integrated together
> martin> rather than in isolation.
> Can we stop that war even before it starts, please?
> It's not one against the other, both are valuable and needed. If
> we can't write both, then, well too bad, but don't let it be an
> excuse for writing fewer tests.
> One of the most important properties of a test is: Defect Localization.
> Do as you feel, but keep that one in mind: we don't want hundreds
> of failures when a bug is introduced. We want a single failure
> telling us "sorry, you broke this assumption, go back to the
> drawing board", or several failures telling us "this change broke
> these cases".
> We don't want dozens or hundreds of tests all failing for the
> same reason, and we don't want tests failing repeatedly because
> they are too eager and need several fixes to pass (I hate those ones).
I'm not suggesting having only massive slow blackbox tests, as you
seem to think.
What I am suggesting is that we generally avoid writing tests that use
dummy implementations of real classes (mock objects, etc.), and only
do that when the real implementation would be unreasonably expensive
or difficult to use.
I don't know if we can take this thread much further in this direction
because it depends a lot on the particular case. But I do have a
general impression that when we test against dummy objects, we
generally do not get great tests (wrt sensitivity, maintainability),
and when we arrange that the real objects are more testable things
seem to work better.
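As one way of picturing that, here is a hypothetical sketch, not real
bzrlib code: instead of handing the code under test a mock, we give it
a real but cheap in-memory implementation. The `MemoryTransport` and
`backup_file` names are invented for the illustration.

```python
class MemoryTransport:
    """A real, but cheap, in-memory transport implementation.

    Because it is a genuine implementation of the interface rather
    than a per-test mock, every test that uses it also exercises the
    real calling conventions.
    """
    def __init__(self):
        self._files = {}

    def put_bytes(self, path, data):
        self._files[path] = data

    def get_bytes(self, path):
        return self._files[path]

def backup_file(transport, path):
    """Code under test: copy path to path + '.bak' via the transport."""
    transport.put_bytes(path + '.bak', transport.get_bytes(path))

# Testing against the real (in-memory) object: if get_bytes ever
# changes its signature or semantics, this test fails immediately,
# whereas a hand-written mock would keep encoding stale assumptions.
t = MemoryTransport()
t.put_bytes('a.txt', b'hello')
backup_file(t, 'a.txt')
assert t.get_bytes('a.txt.bak') == b'hello'
```

The design choice is to spend effort making the real object testable
(fast construction, no external state) rather than maintaining a
parallel family of dummies.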