Static Analysis tests
Andrew Wilkins
andrew.wilkins at canonical.com
Thu Apr 28 03:39:24 UTC 2016
On Thu, Apr 28, 2016 at 11:14 AM Nate Finch <nate.finch at canonical.com>
wrote:
> From the other thread:
>
> I wrote a test that parses the entire codebase under github.com/juju/juju to
> look for places where we're creating a new value of crypto/tls.Config
> instead of using the new helper function I wrote, which creates one with
> more secure defaults. It takes 16.5 seconds to run on my machine. There's
> not really any getting around the fact that parsing the whole tree takes a
> long time.
>
> What I *don't* want is to put these tests somewhere else that requires
> more thought or setup to run. So, no separate long-tests directory or
> anything. Keep the tests close to the code and run them the same way we run
> unit tests.
>
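[Editor's note: for readers of the archive, here is a minimal sketch of the kind of check described in the quote above. It walks a source tree with go/parser and flags composite literals of crypto/tls.Config. The test name, the directory walked, and the error message are illustrative assumptions, not the actual Juju code.]

package policy_test

import (
	"go/ast"
	"go/parser"
	"go/token"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

// TestNoRawTLSConfig flags every tls.Config composite literal it finds,
// on the assumption that callers should use the secure-defaults helper.
func TestNoRawTLSConfig(t *testing.T) {
	fset := token.NewFileSet()
	walk := func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}
		file, err := parser.ParseFile(fset, path, nil, 0)
		if err != nil {
			return err
		}
		ast.Inspect(file, func(n ast.Node) bool {
			lit, ok := n.(*ast.CompositeLit)
			if !ok {
				return true
			}
			// Match "tls.Config{...}" by selector name; a real check
			// would resolve imports to cope with aliased packages.
			if sel, ok := lit.Type.(*ast.SelectorExpr); ok {
				if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "tls" && sel.Sel.Name == "Config" {
					t.Errorf("%v: use the secure-defaults helper instead of tls.Config{}", fset.Position(lit.Pos()))
				}
			}
			return true
		})
		return nil
	}
	if err := filepath.Walk(".", walk); err != nil {
		t.Fatal(err)
	}
}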
The general answer to this belongs back in the other thread, but I agree
that long-running *unit* tests (if there should ever be such a thing)
should not be shunted off to another location. Keep the unit tests with the
unit. Integration tests are a different matter, because they cross multiple
units. Likewise, tests for project policies.
Andrew's response:
>
>
> *The nature of the test is important here: it's not a test of Juju
> functionality, but a test to ensure that we don't accidentally use a TLS
> configuration that doesn't match our project-wide constraints. It's static
> analysis, using the test framework; and FWIW, the sort of thing that Lingo
> would be a good fit for.*
>
> *I'd suggest that we do organise things like this separately, and run them
> as part of the "scripts/verify.sh" script. This is the sort of test that
> you shouldn't need to run often, but that I'd like us to gate merges on.*
>
> So, I don't really think the method of testing should determine where a
> test lives or how it is run. I could test the exact same things with a
> more common unit test - check that the TLS config we use when dialing the
> API uses TLS 1.2, that it only uses these specific cipher suites, etc. In
> fact, we have some unit tests that do just that, to verify that SSL is
> disabled. However, then we'd need to remember to write those same tests
> for every place we make a tls.Config.
>
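[Editor's note: the "more common unit test" mentioned above might look like the sketch below. SecureTLSConfig is a hypothetical stand-in for the secure-defaults helper discussed in the thread, not Juju's actual API.]

package tlsconfig_test

import (
	"crypto/tls"
	"testing"
)

// SecureTLSConfig is a hypothetical stand-in for the helper with more
// secure defaults that the thread discusses.
func SecureTLSConfig() *tls.Config {
	return &tls.Config{
		MinVersion:   tls.VersionTLS12,
		CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384},
	}
}

func TestTLSConfigDefaults(t *testing.T) {
	cfg := SecureTLSConfig()
	if cfg.MinVersion != tls.VersionTLS12 {
		t.Errorf("MinVersion = %#x, want TLS 1.2 (SSL and older TLS disabled)", cfg.MinVersion)
	}
	want := []uint16{tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384}
	if len(cfg.CipherSuites) != len(want) || cfg.CipherSuites[0] != want[0] {
		t.Errorf("CipherSuites = %#x, want %#x", cfg.CipherSuites, want)
	}
}

[The drawback, as the quote notes, is that a test like this has to be repeated wherever a tls.Config is constructed, which is exactly what the tree-wide check avoids.]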
The method of testing is not particularly relevant; it's the *purpose* that
matters. You could probably use static analysis for a lot of our units; it
would be inappropriate, but they'd still be testing units, and so should
live with them.
The point I was trying to make is that this is not a test of one unit, but of
a policy that covers the entire codebase. You say that you don't want to put
these tests "somewhere else", but it's not at all clear to me where you think
we *should* have them.
> The thing I like about having this as part of the unit tests is that it's
> zero friction. They already gate landings. We can write and run them
> just like we write and run go tests 1000 times a day. They're not
> special. There are no other commands I need to remember to run, no scripts I
> need to remember to set up. It's go test, end of story.
>
Using the Go testing framework is fine. I only want to make sure we're not
slowing down the edit/test cycle by frequently testing things that are
infrequently going to change. It's the same deal as with integration tests;
there's a trade-off between the time spent and confidence level.
> The comment about Lingo is valid, though I think we have room for both in
> our processes. Lingo, in my mind, is more appropriate at review time,
> which allows us to write Lingo rules that may not have 100% confidence.
> They can be strong suggestions rather than gating rules. The type of test
> I wrote should be a gating rule - there are no false positives.
>
> To give a little more context, I wrote the test as a suite, where you can
> add tests that hook into the code parsing, so we can trivially add more tests
> that use the fully parsed code, while only incurring the 16.5-second parsing
> hit once for the entire suite. That doesn't really affect this discussion
> at all, but I figured people might appreciate that this could be extended
> for more than my one specific test. I certainly wouldn't advocate people
> writing new 17-second tests all over the place.
>
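[Editor's note: to make the suite idea in the quote above concrete, here is a minimal sketch, not the actual code, of parsing once in TestMain and sharing the result across several policy checks; the package layout and test names are assumptions.]

package policy_test

import (
	"go/ast"
	"go/parser"
	"go/token"
	"os"
	"testing"
)

var (
	fset  = token.NewFileSet()
	files []*ast.File // parsed once, shared by every policy check below
)

func TestMain(m *testing.M) {
	// Parse once; a real suite would walk the whole tree rather than a
	// single directory, paying the parsing cost only one time.
	pkgs, err := parser.ParseDir(fset, ".", nil, 0)
	if err != nil {
		panic(err)
	}
	for _, pkg := range pkgs {
		for _, f := range pkg.Files {
			files = append(files, f)
		}
	}
	os.Exit(m.Run())
}

func TestPolicies(t *testing.T) {
	t.Run("NoRawTLSConfig", func(t *testing.T) {
		// inspect `files` as in the earlier sketch
	})
	t.Run("SomeOtherProjectPolicy", func(t *testing.T) {
		// further checks reuse the same parsed files for free
	})
}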
That sounds lovely, thank you.
Cheers,
Andrew