Static Analysis tests
Nate Finch
nate.finch at canonical.com
Thu Apr 28 03:14:41 UTC 2016
From the other thread:
I wrote a test that parses the entire codebase under github.com/juju/juju
to look for places where we're creating a new value of crypto/tls.Config
instead of using the new helper function I wrote, which creates one with
more secure defaults. It takes 16.5 seconds to run on my machine. There's
not really any getting around the fact that parsing the whole tree takes a
long time.
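To make that concrete, here's a stripped-down sketch of the approach (not
the actual juju test - the check is simplified and assumes tls is imported
under its default name, and "." stands in for the real tree root):

    package statictest

    import (
        "go/ast"
        "go/parser"
        "go/token"
        "os"
        "path/filepath"
        "strings"
        "testing"
    )

    // TestNoRawTLSConfig walks the tree, parses each Go file, and flags
    // any composite literal of crypto/tls.Config, on the theory that
    // everyone should go through the secure-defaults helper instead.
    func TestNoRawTLSConfig(t *testing.T) {
        fset := token.NewFileSet()
        err := filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if info.IsDir() || !strings.HasSuffix(path, ".go") {
                return nil
            }
            f, err := parser.ParseFile(fset, path, nil, 0)
            if err != nil {
                return err
            }
            ast.Inspect(f, func(n ast.Node) bool {
                lit, ok := n.(*ast.CompositeLit)
                if !ok {
                    return true
                }
                sel, ok := lit.Type.(*ast.SelectorExpr)
                if !ok {
                    return true
                }
                if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "tls" && sel.Sel.Name == "Config" {
                    t.Errorf("%s: raw tls.Config literal; use the secure-defaults helper",
                        fset.Position(lit.Pos()))
                }
                return true
            })
            return nil
        })
        if err != nil {
            t.Fatal(err)
        }
    }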
What I *don't* want is to put these tests somewhere else which requires
more thought/setup to run. So, no separate long-tests directory or
anything. Keep the tests close to the code and run in the same way we run
unit tests.
Andrew's response:
The nature of the test is important here: it's not a test of Juju
functionality, but a test to ensure that we don't accidentally use a TLS
configuration that doesn't match our project-wide constraints. It's static
analysis, using the test framework; and FWIW, the sort of thing that Lingo
would be a good fit for.

I'd suggest that we do organise things like this separately, and run them
as part of the "scripts/verify.sh" script. This is the sort of test that
you shouldn't need to run often, but I'd like us to gate merges on.
So, I don't really think the method of testing should determine where a
test lives or how it is run. I could test the exact same things with a
more common unit test - check that the TLS config we use when dialing the
API uses TLS 1.2, that it only uses specific cipher suites, etc. In fact,
we have some unit tests that do just that, to verify that SSL is disabled.
However, then we'd need to remember to write those same tests for every
place we make a tls.Config.
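For comparison, the conventional version looks something like this
(SecureTLSConfig is a stand-in name, not necessarily the real juju helper)
- and you'd need a copy of it for every call site:

    package statictest

    import (
        "crypto/tls"
        "testing"
    )

    // SecureTLSConfig is a stand-in for the real secure-defaults helper.
    func SecureTLSConfig() *tls.Config {
        return &tls.Config{MinVersion: tls.VersionTLS12}
    }

    // TestAPIDialerTLSConfig checks one specific config the ordinary way:
    // assert the minimum version and reject a known-weak cipher suite.
    func TestAPIDialerTLSConfig(t *testing.T) {
        cfg := SecureTLSConfig()
        if cfg.MinVersion != tls.VersionTLS12 {
            t.Errorf("MinVersion = %#x, want TLS 1.2", cfg.MinVersion)
        }
        for _, cs := range cfg.CipherSuites {
            if cs == tls.TLS_RSA_WITH_RC4_128_SHA {
                t.Error("weak RC4 cipher suite enabled")
            }
        }
    }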
The thing I like about having this as part of the unit tests is that it's
zero friction. They already gate landings. We can write them and run them
just like we write and run go tests 1000 times a day. They're not special.
There are no other commands I need to remember to run, no scripts I need
to remember to set up. It's go test, end of story.
The comment about Lingo is valid, though I think we have room for both in
our processes. Lingo, in my mind, is more appropriate at review-time,
which allows us to write Lingo rules that may not have 100% confidence.
They can be strong suggestions rather than gating rules. The type of test
I wrote should be a gating rule - there are no false positives.
To give a little more context, I wrote the test as a suite, where you can
add tests to hook into the code parsing, so we can trivially add more tests
that use the full parsed code, while only incurring the 16.5 second parsing
hit once for the entire suite. That doesn't really affect this discussion
at all, but I figured people might appreciate that this could be extended
for more than my one specific test. I certainly wouldn't advocate people
writing new 17-second tests all over the place.
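One way to picture that parse-once behaviour in plain go test terms (a
sketch with assumed names, not my actual suite):

    package statictest

    import (
        "go/ast"
        "go/parser"
        "go/token"
        "sync"
        "testing"
    )

    var (
        parseOnce  sync.Once
        fset       = token.NewFileSet()
        parsedPkgs map[string]*ast.Package
        parseErr   error
    )

    // parsedTree pays the expensive parse at most once per test binary;
    // every test in the package shares the result. (ParseDir only reads
    // one directory; the real thing would walk the whole tree.)
    func parsedTree(t *testing.T) map[string]*ast.Package {
        parseOnce.Do(func() {
            parsedPkgs, parseErr = parser.ParseDir(fset, ".", nil, 0)
        })
        if parseErr != nil {
            t.Fatal(parseErr)
        }
        return parsedPkgs
    }

    // Any number of analysis tests can now hook in without re-parsing.
    func TestTreeParses(t *testing.T) {
        files := 0
        for _, pkg := range parsedTree(t) {
            files += len(pkg.Files)
        }
        if files == 0 {
            t.Fatal("no Go files parsed")
        }
    }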
-Nate