Developer Application Criteria - Was Re: New Application processes

Jordan Mantha laserjock at
Thu Jan 8 22:49:54 GMT 2009

On Thu, Jan 8, 2009 at 1:14 PM, Bryce Harrington <bryce at> wrote:
> On Thu, Jan 08, 2009 at 11:04:35AM -0800, Jordan Mantha wrote:
>> On Thu, Jan 8, 2009 at 10:25 AM, Dustin Kirkland <kirkland at> wrote:
>> > On Thu, Jan 8, 2009 at 11:11 AM, Jordan Mantha <jordan.mantha at> wrote:
>> >> I don't think that's necessarily a logical conclusion. You're saying
>> >> that if the +1/-1 of a MOTU Council member is based on a subjective
>> >> decision that they can't use objective data in making that decision.
> I didn't read Dustin's comments that way.  I took it to mean, if the
> council members are making decisions based on objective data, they
> should be utilizing that data more consistently and predictably.

OK, I can understand that desire.

> If I understand his point correctly, it's that if I tell person A, "You
> need to hit the ball into the outfield 10 times before you can be on the
> team", and person B, "Nevermind hitting the ball, I like you, you're on
> the team," that's inconsistent.  It could also undermine the
> establishment of team mentality - A will always wonder why they had to
> meet this arbitrary criteria when B didn't.  Even B may feel some
> inferiority because they got in through favoritism rather than proving
> themselves like A.  It could affect the other team members too - if you
> know all your teammates have proven they have the same level of
> confidence, you'll have more trust in them than you would otherwise.

OK, right. I think that is a very valid concern and something I feel
is an existing issue. However, I don't feel like the proposal in the
thread is the way to fix that. The issue is not making things more
objective but making the decisions more transparent. I would
completely support a push to make the MOTU Council more verbose when
issuing a -1 (and even +1 votes). Perhaps the MOTU Council could use a
voting template where they comment on specific areas of the
application. My objection is to trying to objectify and/or quantify an
inherently subjective process.

> The point (as I see it) is not so much whether you have to hit a ball so
> many times to get on the team, but rather if such criteria are being
> applied, that they be done consistently and predictably.  If I want to
> become a MOTU and I saw that person A had to do 10 ball hits, and I
> practice up to 12, then if I apply and they surprisingly ask me to do
> *20*, I'm obviously going to be upset.  And on the other hand, if they
> ask me to only do 5, I might not be upset but might wonder WTF is up.

In this case it may depend on whether the balls were thrown by a
Little League pitcher, a pitching machine, or an MLB pitcher. People
shouldn't be comparing numbers unless they have the whole picture.
However, if you had to hit 20 balls from the same pitcher that
somebody else only had to hit 10 from, then yeah, that's an issue.
That's the sort of thing we need to address, rather than arguing over
how many balls to hit.

> Where I think Dustin and I differ slightly, is he'd like to see the
> number more explicitly stated, whereas I'd favor being more hand-wavy
> and say, "Demonstrate that you know how to play ball", and provide some
> generic tips on how one would do that.
>> My issue is that threshold may be different on a case-by-case basis
>> and from MC member to MC member. For instance a lack of experience in
>> packaging from scratch could be compensated by a wealth of merge/sync
>> experience and vice versa.
> This is my thinking exactly.  If someone is a great pitcher, it may be
> okay if they can't hit the ball as well as others.  They know how to
> play ball.
> Or, someone may not have the greatest game play skill, but they have
> great team spirit and their presence on a team just makes that team pull
> together and work that much better.  That individual may not be able to
> hit the ball well, nor pitch, but when they're playing, the team wins much
> more often than not.  Even in this case you'd still expect them to
> demonstrate that they know how to play.

The main thing is being trustworthy with the archive (knowing how to
play). We don't expect people to be packaging geniuses or uber
programmers, but we do expect them to know the basics and to know when
to get help or ask questions. Of course, it's not easy to quantify
these social/subjective things.

>> > I have suggested that "a minority component" of MOTU/CoreDev
>> > applications be based on some objective criteria.  In place of such a
>> > process, I also believe that Bryce's suggestion of a "workbook" would
>> > mostly serve the same purpose.
>> My problem is that this "minority component" will become the majority
>> component because it will be the only objective criteria. Bryce's
>> suggestion is probably helpful but we need to be careful about how we
>> word/suggest it.
> This would also be my concern.  When you have to make a decision on both
> factual and emotional data, and the facts are clearly stated, it can be
> easy to be mentally lazy and make a snap decision on just that set of
> data.  But I think this is not an argument against having clear facts,
> but rather an argument against being mentally lazy.  ;-)

Right, but IMO having facts is different from having criteria.

> But I definitely agree that care in phrasing is important.  "You're not
> going to be tested on any of this, and we certainly don't expect you to
> know *everything*, but we think the more familiar you are with the
> following, the better a MOTU/CoreDev you'll make..."

Yeah, that kind of thing is helpful for everybody.

>> A final point that I'm wondering is how often are people rejected
>> because of purely objective criteria? It seems to me that most people
>> are rejected based on things more like:
>>   * immature understanding of Ubuntu
>>   * doesn't play well with others
>>   * lacks overall packaging experience
> As someone who has not really been involved with the MOTU decisions, I'd
> love to see some data on this.  Not names or specific instances, just a
> summary of like, # times someone was explicitly critiqued/judged on
> {time involved, amount of uploads, packaging tasks done, etc.},
> regardless of whether they ended up being accepted or not.

Yeah, although I'm not sure how informative that's going to be. We
have a pretty small sample size.

> If it turns out that Dustin is correct, that a significant number of
> judgements are being made with objective data, I think it strengthens
> his argument that care be taken to do this in a consistent, uniform, and
> predictable fashion.  Or if it turns out to truly be a rarity then maybe
> it indicates that those few instances should just be investigated and/or
> corrected if possible, and sponsor docs be updated to clarify this point.

That makes sense.

>> Only the last one would have any chance of being objective but I don't
>> think that's even possible. You have things like # of packages
>> uploaded, difficulty of packaging tasks, # of mistakes, etc. What I
>> worry about is people picking the easiest things they can find in
>> order to meet some "standard" and then using that to leverage
>> themselves into MOTUship.
> Heh.  "Your karma must be >this high< to ride this attraction."
> But really, it should be pretty transparent when people do that.
> If someone gets in this way, again it's more an argument against mental
> laziness than against having clear facts.

Right. I think what we need is more transparency rather than a change
in process.


More information about the ubuntu-devel mailing list