Bryce Harrington bryce at canonical.com
Thu Oct 23 18:11:19 BST 2008

On Thu, Oct 23, 2008 at 08:47:58AM -0700, Jordan Mantha wrote:
> On Thu, Oct 23, 2008 at 07:22:41AM -0700, Bryce Harrington wrote:
> > On Thu, Oct 23, 2008 at 12:12:18PM +0100, James Westby wrote:
> > Basically, the idea would be to have a build farm routinely scan
> > launchpad for patches and attempt to apply / build / test them.  It
> > would then mark patches PASS/FAIL appropriately for each build or test
> > run done.
> This is an interesting idea, but seems like it would take quite a lot of
> resources and would be kind of fragile. The kind of issues that come to mind are:
>   * Somebody puts up an OO.o or kernel patch, and we're using a lot of CPU
>     resources for a relatively low rate of return. We'd perhaps have to work
>     out a good queue structure.
>   * Often people upload patches based on the current stable version of the
>     package, which may not apply cleanly.
>   * Often people upload patches that are not against the root of the source
>     tree, we might have to play around with patch -p to get it to apply cleanly.

When I was at OSDL we built a test/build system that did
multi-architecture builds for each commit to the kernel, across several
different branches, with several module build strategies.  The rate of
patches it handled far exceeds what we're talking about here, and this
was with hardware from about 5 years ago.  So not to minimize the
resource requirements of this idea too much, but I don't think it's the
hard part.

Regarding patches that don't apply cleanly or that aren't made against the
root of the source tree, the system would obviously catch those cases and
mark them FAIL; indeed, being able to quickly flag patches with these kinds
of issues is the primary value of this tool - it gives rapid feedback to the
patch submitter and thus encourages them to fix and resubmit the patch while
it's still fresh in their mind.

The hard part in such systems is generally the development work and solving
all the corner cases.  For example, across all of Ubuntu there are probably
at least half a dozen different patching systems, and the tool would need to
be smart enough to know how to apply patches in each case.
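By way of illustration, the detection step might start with something like the following file-layout heuristics (these are my guesses, not a complete catalogue - cdbs simple-patchsys, for instance, would also need debian/rules inspected):

```python
# Sketch: very rough detection of which patch system a source package
# uses, based only on file layout in the unpacked source tree.
def detect_patch_system(files):
    """files: iterable of paths relative to the unpacked source root."""
    files = set(files)
    if "debian/patches/series" in files:
        return "quilt"
    if any(f.startswith("debian/patches/") and f.endswith(".dpatch")
           for f in files):
        return "dpatch"
    if not any(f.startswith("debian/patches/") for f in files):
        return "none"
    return "unknown"
```

Each detected system would then map to the appropriate apply command (quilt push -a, dpatch apply-all, and so on) before the build step runs.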

> The nice thing about your proposal is that it would allow us to fairly easily
> identify "good" patches. There are currently 1817 patches in Ubuntu bugs
> according to Launchpad. I doubt all that many of them are real patches that
> apply and build cleanly, but it sure would be nice to know which ones do
> :-)
> I think as a first step we need an automated way to identify true patches
> though. It would really be a big help no matter what we do. The current patch
> flag system is really not working well (and hasn't for some time, if ever).

How would you define "true patch" in this case?
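One naive mechanical definition - purely a sketch of a starting point, not a proposal for the real classifier - would be "the attachment contains unified-diff file headers and at least one hunk":

```python
# Sketch: crude check for whether a bug attachment is actually a patch.
# Deliberately naive: misses context diffs, git binary patches, etc.
import re

def looks_like_patch(text):
    """True if text has unified-diff file headers plus a hunk marker."""
    has_old = re.search(r"^--- ", text, re.MULTILINE)
    has_new = re.search(r"^\+\+\+ ", text, re.MULTILINE)
    has_hunk = re.search(r"^@@ -", text, re.MULTILINE)
    return bool(has_old and has_new and has_hunk)
```

Something this simple would already weed out screenshots, logs, and plain-text suggestions that get flagged as patches today.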

