Reviewing - moving to review mentors, and guidelines

Wayne Witzel wayne.witzel at canonical.com
Fri Jul 18 11:17:50 UTC 2014


+1

This looks really great and well thought out. I appreciate the effort that
went into outlining this. I think this will be extremely helpful.

Thanks,

Wayne
On Jul 17, 2014 10:28 PM, "Tim Penhey" <tim.penhey at canonical.com> wrote:

> Hi all,
>
> Last night in the team meeting we discussed the review process,
> prompted by a number of things sneaking through reviews that we would
> ideally like to catch earlier.
>
> In order to help the newer folks on the team learn how we review, it has
> been suggested that we formalise the mentored reviewers process.
>
> Mentored Reviewers
> ==================
>
> All juju core members are expected to review code.  It is not reasonable
> to expect the newer members of the team to feel comfortable being the
> sole reviewer on a piece of work.
>
> I have created a new sheet for people to register themselves on if they
> feel they would like to have a review mentor [1].  You should discuss
> this with your team lead if you are unsure.  A mentor will be allocated
> to you, or if you have already discussed this with someone in
> particular, you can have a say in who your mentor is.
>
> Reviews done by a mentored developer are not considered sufficient to
> land code until their mentor has approved them.
>
> It is up to the mentor and mentee how they choose to do the reviews.
> They could do this over a hangout together, or the mentor could review
> the review after the mentee has done the first pass.
>
> Once the mentor feels that the mentee is consistently doing good
> reviews, they will announce the graduation, and that developer's
> reviews will no longer need a second pass.
>
>
> Guidelines
> ==========
>
> Prior to starting any significant piece of work or refactoring, the
> general approach should first be checked with the team lead and
> technical architect.  Smaller, more obvious pieces of work don't
> necessarily need this, but it is often good to talk to someone about the
> problem and the proposed fix as an initial sanity check.
>
> The aim here is to avoid getting into design discussions at review time
> as this is clearly the wrong time to do it.
>
> As the developer, it is strongly suggested that you work to the boy
> scout rule - "leave the code cleaner than you found it".  We don't
> expect perfection (or we shouldn't), but don't leave a mess.  It is ok
> to push back on a reviewer if you feel that they are being unreasonable
> in their request for perfection.  If you feel that this is the case,
> talk to your team lead.
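>
> As a hypothetical sketch of the boy scout rule in Go (the names here
> are illustrative, not from our tree), a cleanup can be as small as a
> doc comment and a better name on a function you are already touching:
>
>     // Before: undocumented, with a name that says nothing.
>     func calc(s []int) int {
>         t := 0
>         for _, v := range s {
>             t += v
>         }
>         return t
>     }
>
>     // After: named and documented for what it does.
>     // Sum returns the total of the values in vals.
>     func Sum(vals []int) int {
>         total := 0
>         for _, v := range vals {
>             total += v
>         }
>         return total
>     }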
>
> As a reviewer, if you don't understand the code or what is being done
> there are a number of options:
>  * it could be that the code is confusing, getting a second opinion can
> help here
>  * it could be that you don't understand the area of code being changed,
> and it isn't always feasible to grok everything completely. If the
> change isn't obviously correct (to you), get a second opinion
>
> A review that says
>
>     The code looks good, test coverage seems to be complete, but I can't
> tell if this will actually fix problem X
>
> is still a useful review, and worth doing.  The developer should seek
> someone who understands that area, or work with the reviewer to help
> them understand.
>
> In any situation where you need to stop and explain, it can seem like
> the process is too slow, but sharing and explaining is a short-term
> delay for a long-term gain, as the knowledge is spread through the team.
>
>
> Checklist
> =========
>
> I think it makes sense to have a checklist to cover the basics of what
> we look for in reviews.  Probably even get this into the docs directory
> in trunk.
>
> There is a document already which covers some basics of writing good
> tests: doc/how-to-write-tests.txt
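>
> As one sketch of the sort of thing that document is after (this uses
> only the standard library testing package and a made-up LoadConfig
> function, so treat it as an illustration rather than our house style),
> a test can isolate itself from the machine it runs on by building its
> own temporary state:
>
>     package config_test
>
>     import (
>         "io/ioutil"
>         "os"
>         "path/filepath"
>         "testing"
>     )
>
>     func TestLoadConfig(t *testing.T) {
>         // Work in a temporary directory instead of reading whatever
>         // happens to be in the real home directory.
>         dir, err := ioutil.TempDir("", "config-test")
>         if err != nil {
>             t.Fatal(err)
>         }
>         defer os.RemoveAll(dir)
>
>         path := filepath.Join(dir, "config.yaml")
>         err = ioutil.WriteFile(path, []byte("name: test\n"), 0644)
>         if err != nil {
>             t.Fatal(err)
>         }
>
>         // LoadConfig is hypothetical; substitute the code under test.
>         cfg, err := LoadConfig(path)
>         if err != nil {
>             t.Fatal(err)
>         }
>         if cfg.Name != "test" {
>             t.Errorf("cfg.Name = %q, want %q", cfg.Name, "test")
>         }
>     }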
>
>
> Initial things that I look for (off the top of my head):
>  * firstly work out what the code is trying to fix (feature or bug)
>  * is it obvious that the code does this?
>  * for new functions, do the names make sense? do they say what the
> function does?
>  * are all the public constants, variables, functions and methods
> documented?
>  * how are the new functions tested?
>  * can they be tested in isolation?
>  * are all the new code paths tested?
>  * do the names of the tests make sense?
>  * could similar tests be combined into a table-based test? (see the
> sketch after this list)
>  * should a table-based test be broken out into more obvious tests?
>   (this sometimes happens when there is too much logic or too many
> conditionals in the actual test block of the table-based test)
>  * is this code valid for all operating systems?
>  * are there any race conditions?
>  * are the tests isolated from the environment?
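>
> As promised above, a sketch of a table-based test (plain standard
> library style; ValidateName is a made-up function standing in for the
> code under test):
>
>     package example_test
>
>     import "testing"
>
>     func TestValidateName(t *testing.T) {
>         // One row per case; the loop body stays free of logic so
>         // each failure points straight at the offending row.
>         tests := []struct {
>             about string
>             name  string
>             valid bool
>         }{
>             {"simple name is valid", "wordpress", true},
>             {"empty name is invalid", "", false},
>             {"uppercase is invalid", "WordPress", false},
>         }
>         for _, test := range tests {
>             if got := ValidateName(test.name); got != test.valid {
>                 t.Errorf("%s: ValidateName(%q) = %v, want %v",
>                     test.about, test.name, got, test.valid)
>             }
>         }
>     }
>
> On the race condition point, running the tests under the race
> detector (go test -race) is a cheap first check, though it only
> catches races that the tests actually exercise.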
>
>
>
> Cheers,
> Tim
>
>
> [1]
>
> https://docs.google.com/a/canonical.com/spreadsheets/d/1v9KB6Y4r1bMLOyB1JEs-wj_jBAvsq6EglWssa4cOx9c/edit#gid=0
>

