Proposal for new charm policies.

Clint Byrum clint at ubuntu.com
Fri May 10 03:29:27 UTC 2013


On 2013-05-09 18:14, Marco Ceppi wrote:
> On Thu, May 9, 2013 at 5:52 PM, Clint Byrum <clint at ubuntu.com> wrote:
> 
>> Excerpts from Marco Ceppi's message of 2013-05-09 16:57:11 -0700:
>> 
>>> On Thu, May 9, 2013 at 1:51 PM, Jorge O. Castro <jorge at ubuntu.com> 
>>> wrote:
>>> 
>>>> Specifically, I'd like to propose these 5 things to move
>>>> into policy:
>>>> 
>>>> - 3 acks for charm reviews before a charm is considered
>>>> "reviewed" in the store.
>>>> 
>>>> Now that we're getting more mature, we need to keep charm
>>>> quality high. So, along with landing charm testing, we'd
>>>> like to start doing more peer review on incoming charms.
>>>> 
>>>> 
>>> This concerns me given the number of reviewing charmers we
>>> currently have and the size of our queue at any given time.
>>> Currently charms are ACK'd by just one charmer, and even that
>>> can create review backlogs. Moving to three reviewers could
>>> create quite a large barrier to reviews landing at all. I
>>> completely agree that gating on reviews, with more eyes on
>>> each charm, is a great way to help curb our growing quality
>>> concerns. If this were to land as policy, I'd want to find a
>>> way both to motivate current charmers and to treat
>>> charm-contributors more or less as "Jr Charmers" and have
>>> them provide reviews; then we might have enough manpower to
>>> keep the queue quieter. I'd love to talk about these and
>>> other suggestions during UDS.
>> 
>> What's missing here is automated testing chiming in on merge
>> proposals. Having seen the way the OpenStack CI infrastructure
>> helps submitters avoid obvious mistakes, it makes the
>> reviewers' job much more enjoyable. You're no longer going
>> over things with a fine-toothed comb, but instead reading
>> something that appears to work.
>> 
>> That said, 3 is way too many. 2 reviewers should be enough
>> with automated testing gating things.
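
To make the quoted idea concrete, here is a rough sketch of the
kind of pre-review check a bot could run against each merge
proposal. Only 'charm proof' is a real charm-tools command; the
tests/run entry point and the notion of posting the result as a
vote on the proposal are assumptions for illustration:

    #!/usr/bin/env python
    # Hypothetical pre-review check for a charm merge proposal.
    # A sketch of the idea, not an existing tool.
    import subprocess
    import sys

    def run(cmd):
        """Run a command; return (exit code, combined output)."""
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        out, _ = proc.communicate()
        return proc.returncode, out

    def main(charm_dir):
        checks = [
            # 'charm proof' is charm-tools' static lint check; the
            # tests/run entry point is an assumed convention.
            ['charm', 'proof', charm_dir],
            ['sh', '-c', 'cd "%s" && ./tests/run' % charm_dir],
        ]
        failed = False
        for cmd in checks:
            code, out = run(cmd)
            print('%s -> exit %d' % (' '.join(cmd), code))
            if code != 0:
                print(out)
                failed = True
        # A real CI driver would post this result as a vote on the
        # merge proposal instead of just exiting.
        sys.exit(1 if failed else 0)

    if __name__ == '__main__':
        main(sys.argv[1])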
> 
> I completely agree. I didn't want to bring charm testing into
> this discussion as it's separate from policy. At UDS we have a
> charm testing session where we'll discuss things like how we
> can adopt a similar model to cut down on the manual review of
> each charm. That, however, really can't happen until we
> actually have a testing story for charms. Until then I
> understand the reason for wanting more reviewers (as a measure
> to ensure quality while automation and testing are pursued); I
> just want to make sure we weigh practicality against this
> demand for quality. I'd truly love to discuss how we can adopt
> a structure like OpenStack's CI (many other projects use
> similar patterns) to meet our demands for quality.

Why would testing be separate from policy?

The point of OpenStack's CI is that it is strongly tied to policy. No 
commit hits master until it has passed all gating tests.
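
The rule itself is mechanically simple: the CI system is the only
thing that merges, and it merges only when every gating job has
passed. A minimal sketch of that rule, with made-up job names
(OpenStack's actual tooling, Zuul and Jenkins, does far more):

    # Sketch of a commit gate: nothing lands on master until every
    # gating job passes. The job names and the two callables are
    # placeholders, not real tooling.
    GATING_JOBS = ['lint', 'unit-tests', 'deploy-test']

    def gate(change, run_job, merge_into_master):
        """Run all gating jobs for a change; merge only on success."""
        results = [run_job(job, change) for job in GATING_JOBS]
        if all(results):
            merge_into_master(change)  # the only path into master
            return True
        # On any failure the submitter reworks and resubmits; a
        # reviewer never has to catch it by hand.
        return False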


