Feedback on a base "fake" type in the testing repo
Gustavo Niemeyer
gustavo at niemeyer.net
Fri Feb 13 18:36:10 UTC 2015
On Fri, Feb 13, 2015 at 3:25 PM, Eric Snow <eric.snow at canonical.com> wrote:
>> This is a "mock object" under some well known people's terminology [1].
>
> With all due respect to Fowler, the terminology in this space is
> fairly muddled still. :)
Sure, I'm happy to use any terminology, but I'd prefer to not make one
up just now.
>> The most problematic aspect of this approach is that tests are pretty
>> much always very closely tied to the implementation, in a way that you
>> suddenly cannot touch the implementation anymore without also fixing a
>> vast number of tests to comply.
>
> Let's look at this from the context of "unit" (i.e. function
> signature) testing. By "implementation" do you mean the
> function you are testing, or the low-level API the function is using,
> or both? If you mean the low-level API, then it seems like the "real
> fake object" you describe further on would help by moving at least
> part of the test setup out of the test and down into the fake. However
> aren't you then just as susceptible to changes in the fake with the
> same maintenance consequences?
No, because the fake should behave as a normal type would, instead of
expecting a very precisely constrained orchestration of calls into its
interface. If we hand the implementation a fake value, it should be
able to call that value as many times as it wants, with whatever
parameters it wants, in whatever order it wants, and its behavior
should be consistent with a realistic implementation. Again, see the
dummy provider for a convenient example of that in practice.
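As a purely illustrative sketch (the interface and names below are made
up, not anything from our tree), a fake along those lines might look
like this in Go:

    package store

    import "fmt"

    // MachineStore is a hypothetical interface used by the code under test.
    type MachineStore interface {
            AddMachine(id string) error
            HasMachine(id string) (bool, error)
    }

    // fakeStore behaves like a realistic implementation: it keeps its own
    // state and answers consistently no matter how often, in what order,
    // or with which parameters it is called.
    type fakeStore struct {
            machines map[string]bool
    }

    func newFakeStore() *fakeStore {
            return &fakeStore{machines: make(map[string]bool)}
    }

    func (s *fakeStore) AddMachine(id string) error {
            if s.machines[id] {
                    return fmt.Errorf("machine %q already exists", id)
            }
            s.machines[id] = true
            return nil
    }

    func (s *fakeStore) HasMachine(id string) (bool, error) {
            return s.machines[id], nil
    }

There are no expectations about which calls will arrive or in which
order; the fake simply honors the contract of the interface.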
> Ultimately I just don't see how you can avoid depending on low-level
> details ("closely tied to the implementation") in your tests and still
> have confidence that you are testing things rigorously. I think the
I could perceive that in your original email, and it's precisely why
I'm worried and responding to this thread.
If that logic held any water, we'd never be able to have
organizations that certify the quality and conformance of devices
based on the device itself. Instead, they'd have to go into the
factories to see how the device was manufactured. But that's not what
happens: these organizations get the outcome of the production line,
no matter how it worked, because that's the most
relevant thing to test. You can change the production line, you can
optimize it away, and you can even replace entire components, and it
doesn't matter as long as you preserve the quality of the outcome. Of
course, on the way to producing a device you'll generally make use of
smaller devices, which have their own production lines, and which
ensure that the outcome of their own production lines is of high
quality.
The same thing is true in code. If you spend a lot of time writing
tests for your production line, you are optimizing for the wrong goal.
You are spending a lot of time, the outcome can still be of poor
quality, and you are making it hard to optimize your production line
and potentially replace its components and methods with something
completely different. Of course, as in actual devices, code is
layered, so sub-components can be tested on their own to ensure their
promised interfaces hold water, but even there what matters is
ensuring that what they promise is being satisfied, rather than how
they are doing it.
> Also, the testing world puts a lot of emphasis on branch coverage in
> tests. It almost sounds like you are suggesting that is not such an
> important goal. Could you clarify? Perhaps I'm inferring too much
> from what you've said. :)
I'd be happy to dive into that, but it's a distraction in this
conversation. You can use or not use your coverage tool irrespective
of your testing approach.
>> As a recommendation to avoid digging a hole -- one that is pretty
>> difficult to climb out of once you're in -- instead of testing method
>> calls and cooking fake return values in your own test, build a real
>> fake object: one that pretends to be a real implementation of that
>> interface, and understands the business logic of it. Then, have
>> methods on it that allow tailoring its behavior, but in a high-level
>> way, closer to the problem than to the code.
>
> Ah, I like that! So to rephrase, instead of a type where you just
> track calls and explicitly control return values, it is better to use
> a type that implements your expectations about the low-level system,
> exposed via the same API as the actual one? This would likely still
> involve both implementing the same interface, right? The thing I like
That's right.
> about that approach is that it forces you to "document" your
> expectations (i.e. dependencies) as code. The problem is that you pay
> (in development time and in complexity) for an extra layer to engineer
That cost is irrelevant once you take into account the monumental future cost
of mocking everything up.
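To make that concrete, and again with made-up names rather than
anything that actually exists, the tailoring methods on such a fake
speak the language of the problem rather than of the code. Building on
the sketch above:

    // breakableStore wraps the fake with high-level tailoring: a test
    // states the situation it wants ("adding machines fails now")
    // instead of scripting exact calls and return values.
    type breakableStore struct {
            *fakeStore
            broken map[string]error
    }

    func newBreakableStore() *breakableStore {
            return &breakableStore{
                    fakeStore: newFakeStore(),
                    broken:    make(map[string]error),
            }
    }

    // Break makes the named high-level operation fail from now on.
    func (s *breakableStore) Break(op string, err error) {
            s.broken[op] = err
    }

    func (s *breakableStore) AddMachine(id string) error {
            if err := s.broken["add-machine"]; err != nil {
                    return err
            }
            return s.fakeStore.AddMachine(id)
    }

A test then says store.Break("add-machine", errors.New("out of
capacity")) and exercises the implementation normally, rather than
pinning down every call it will make.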
> Regardless, as I noted in an earlier message, I think testing needs to involve:
>
> 1. a mix of high branch coverage through isolated unit tests,
I'd be very careful not to overdo this. Covering a line just for the
sake of seeing the CPU pass through it is pointless. If you fake every
single thing around it with no care, you'll have an instruction pointer
jumping in and out of it without any relevant achievement. I have seen,
over and over, "isolated unit tests" that blew up when put in context,
after monumental work was wasted. The criterion for faking and
isolating things should be timing and feasibility, not a pretentious
"unit purity" perfectionism. If it's fast and doesn't involve remote
systems, use the real thing.
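For example (hypothetical code, only to illustrate the point), there is
no reason to fake the filesystem away from a function that writes a
file; a real temporary directory is fast and local:

    package config

    import (
            "io/ioutil"
            "os"
            "path/filepath"
            "testing"
    )

    // WriteConfig stands in for whatever code would otherwise be handed
    // a mocked-out filesystem.
    func WriteConfig(path, content string) error {
            return ioutil.WriteFile(path, []byte(content), 0644)
    }

    func TestWriteConfig(t *testing.T) {
            dir, err := ioutil.TempDir("", "config-test")
            if err != nil {
                    t.Fatal(err)
            }
            defer os.RemoveAll(dir)

            path := filepath.Join(dir, "agent.conf")
            if err := WriteConfig(path, "hello"); err != nil {
                    t.Fatal(err)
            }
            data, err := ioutil.ReadFile(path)
            if err != nil {
                    t.Fatal(err)
            }
            if string(data) != "hello" {
                    t.Fatalf("unexpected content: %q", data)
            }
    }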
> 2. "enough" testing to ensure your expectations for the low-level API are met,
> 3. "enough" coverage of the full stack (at least common-path) via
> integration tests.
>
> Your recommendation on a low-level implementation to use for testing
> is a good one (and one I'll make use of), but it's only one piece of
> the testing puzzle. That said, I don't think your point is that it's
> the only testing approach one should use. :) I appreciate you
It's indeed not. But mocking is pretty much always a testing approach
one should not use.
gustavo @ http://niemeyer.net