No subject

Fri Dec 18 02:33:18 GMT 2009

benefit from such an examination.

Currently, sponsoring an arbitrarily selected bug could require a wide
variety of skills.  There are various patching systems, programming
languages, upstream processes, testing methodologies, and so on.
Earlier in Ubuntu's history, we could count on most Ubuntu developers
being generalists (they simply had to be), but these days many of us are
specialists.  It may be that we have simply outgrown a general-purpose
sponsoring system and need something a bit more refined.

The steps in a typical patch's lifecycle:

 1. Propose a patch for a given bug
 2. Test that the patch applies to the package and builds
 3. Test that the patch fixes the issue
 4. Test that the patch doesn't cause other regressions
 5. Go to 1 if any test failed
 6. Package the patch into a debdiff
 7. Review the patch for acceptability
 8. Upload patch to Ubuntu
 9. Send the patch to the appropriate upstream
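To make the control flow concrete, the lifecycle above can be sketched
as a driver loop.  All the callables here are hypothetical placeholders,
not an existing tool; this is just the shape of the process:

```python
def process_patch(propose, tests, package, review, upload, send_upstream):
    """Drive one bug's patch through the lifecycle sketched above."""
    while True:
        patch = propose()                        # step 1: propose a patch
        if all(test(patch) for test in tests):   # steps 2-4: apply/fix/regression tests
            break                                # step 5: otherwise loop back to 1
    debdiff = package(patch)                     # step 6: wrap the patch in a debdiff
    if not review(debdiff):                      # step 7: human review for acceptability
        return None
    upload(debdiff)                              # step 8: upload to Ubuntu
    send_upstream(patch)                         # step 9: forward to upstream
    return debdiff
```

The point of writing it out this way is that steps 2-6 are the
mechanical middle of the loop, which is exactly where automation can
help.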

Currently, each step in this process needs a human with fairly strong
technical skills, because there can be many odd corner cases.  In most
cases, though, the work is rote and paint-by-numbers.

For instance, most of the time the patch will apply to the current
version of Ubuntu just fine, and this is trivial to check.  So step #2
could be done more or less automatically, kicking out the 10% or
whatever fraction of patches that fail for manual review.  With that
done, a test package could be automatically built and posted into a PPA.

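As a toy sketch of that dry-run check: a unified-diff hunk applies
cleanly when its context and removed lines match the target file.  The
function below is a hypothetical illustration of the idea only; in
practice one would just shell out to `patch --dry-run`:

```python
import re

def hunk_applies(target_lines, hunk_header, hunk_lines):
    """Dry-run check that one unified-diff hunk matches the target file."""
    m = re.match(r"@@ -(\d+)(?:,(\d+))? \+", hunk_header)
    if not m:
        return False
    start = int(m.group(1)) - 1  # unified diffs count lines from 1
    # Context (" ") and removed ("-") lines must match the file as it is now.
    expected = [line[1:] for line in hunk_lines if line[:1] in (" ", "-")]
    return target_lines[start:start + len(expected)] == expected
```

A real checker would also handle fuzz, offsets, and multiple hunks,
which is exactly why the 10% of failures still need manual review.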
Steps #3 and #4 generally do require human judgment, but usually the
only skill required is knowing how to patch and install the package.
But if step #2 is done automatically and leaves a package in a PPA
somewhere, then the barrier of entry for #3 and #4 drops considerably.

Step #4 could be done more broadly and systematically: instead of
having to rope your friends into validating a patch, or putting out
calls-for-testing, we could have an established testing team whose duty
is simply to install PPAs and watch for regressions.  Much like what we
do with SRUs and the ${release}-proposed queue, but for development.

Hooking automated test frameworks in at this point also makes sense,
although I don't know that we'd want to rely on them exclusively.

With steps 2-4 done, step #6 becomes just grunt work of putting the
patch into the proper patching system, committing to VCS, writing a
changelog entry, and so on.  Quite likely, a lot of this could be
scripted as well.
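As an illustration of how scriptable that grunt work is, here is a
minimal sketch that renders a debian/changelog stanza.  The function
name and arguments are hypothetical; in practice `dch` from devscripts
already automates this:

```python
from datetime import datetime, timezone

def changelog_entry(source, version, series, changes, author, email):
    """Render one debian/changelog stanza in the standard format."""
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S +0000")
    bullets = "\n".join(f"  * {change}" for change in changes)
    return (f"{source} ({version}) {series}; urgency=low\n\n"
            f"{bullets}\n\n"
            f" -- {author} <{email}>  {date}\n")
```

Picking the right patch system, VCS commands, and version number for a
given package is the harder part to script, but it too follows
conventions that a tool could learn per-package.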

Step #7 definitely needs skilled human judgment.  But if steps #2-6 are
always done consistently, this step should be greatly simplified.  Ditto
step #8.

Step #9 is needed only for cases where the patch is not already
upstream.  It can be divided into cases where a) we want upstream's
feedback on the validity of the patch, and b) we want to contribute the
patch to upstream to simplify our packaging work.  #9a is really a
sub-task of step #7 and needs a skilled person to do it.

#9b is best done by the original patch submitter, although there are
cases where it is difficult or complicated to get upstream to take the
patch, or where the contributor simply does not have interest in seeing
it go upstream.  In these cases, maybe it would be nice to bubble the
upstreaming work up to a special team focused on getting package patches
taken upstream.


