Whole tree up to date before committing
Óscar Fuentes
ofv at wanadoo.es
Fri Oct 23 00:21:40 BST 2009
John Arbash Meinel <john at arbash-meinel.com> writes:
> Óscar Fuentes wrote:
>
>> The model you propose is more of a bottleneck because changes must be
>> checked sequentially by a single PQM (unless you implement something
>> like "speculative testing", where a machine assumes that patch N
>> succeeds, starts testing patch N+1 and commits after the machine that
>> was testing patch N finishes). A PQM that tests one patch at a time is
>> not fast enough for that project, even if it is composed of several
>> machines each testing one platform in parallel.
>
> You can still go by N via a tree pattern. You just create separate
> 'integration' branches, where each integration branch tests and merges a
> patch, and then those integration branches get merged into the 'trunk'
> branch. Think of it as 'worker-queues' for integration.
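If I understand the proposal, it amounts to something like the toy
model below (a rough Python sketch; the names, the queue discipline
and the thread pool are my guesses, not part of your proposal):

    # Toy model of the proposed "worker-queue" integration pattern.
    # Each worker owns an integration branch: it takes a patch, runs
    # the test suite against (trunk + patch) and, on success, merges
    # into trunk. Illustrative only; this is not actual PQM code.
    import queue
    import threading

    patch_queue = queue.Queue()
    trunk_lock = threading.Lock()  # the final merge is still serial
    trunk = []                     # stands in for the real trunk branch

    def run_test_suite(candidate):
        """Placeholder for the full test run (the expensive part)."""
        return True

    def integration_worker():
        while True:
            patch = patch_queue.get()
            if patch is None:                # shutdown sentinel
                break
            with trunk_lock:
                candidate = trunk + [patch]  # snapshot of trunk + patch
            if run_test_suite(candidate):
                with trunk_lock:
                    trunk.append(patch)      # land the validated patch
            patch_queue.task_done()

    workers = [threading.Thread(target=integration_worker)
               for _ in range(4)]
    for w in workers:
        w.start()

Note that even here only the test runs are parallel: the merges are
serialized, and a patch validated against a snapshot of trunk may land
on a trunk that has moved underneath it.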
This adds quite a bit of complexity. Suppose that after merging two
branches with N revisions each, the build fails. Which specific patch
is wrong? Who is responsible for fixing the problem? And what does the
PQM do next? Reshuffle the patches? Do some kind of triage (sounds
complex)? Stop everything until someone pinpoints the problematic
patch(es)?
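The obvious automated answer is to bisect the failed batch, but look
at what that costs: every probe is a full test-suite run. A minimal
sketch, assuming trunk alone passes, at least one patch is bad, and
any prefix of the batch can be applied and tested (all generous
assumptions):

    def find_first_bad(patches, test_passes):
        """Binary-search for the first patch whose cumulative
        application breaks the build. Each probe is a *full*
        test-suite run, so log2(N) probes is still hours of
        machine time, and each pass finds only one culprit."""
        lo, hi = 0, len(patches) - 1  # first bad patch is in [lo, hi]
        while lo < hi:
            mid = (lo + hi) // 2
            if test_passes(patches[:mid + 1]):
                lo = mid + 1          # prefix is clean; culprit is later
            else:
                hi = mid              # prefix already fails
        return patches[lo]

And when two patches only fail in combination, the bisection points at
whichever lands later, not at the interaction: the "culprit" it names
may be perfectly fine on its own.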
> It does add extra 'merges' into the system. However, it does let you
> scale into many PQM machines. Also, by offloading the 'run the full test
> suite' into the automated machines, it reduces the *developer's* need to
> run the full test suite before committing.
The PQM can be implemented for Subversion too, with the same effect of
offloading work from the developer.
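For example, here is a hedged sketch of a Subversion post-commit hook
that queues new revisions for a PQM-style service (the repository
layout, the staging/ convention and the spool path are hypothetical,
invented for the example):

    #!/usr/bin/env python3
    # Hypothetical post-commit hook: instead of gating on the
    # developer's machine, hand the revision to a queue that a
    # PQM-style service drains, running the full test suite before
    # merging into the blessed trunk.
    import subprocess
    import sys

    def main():
        repo, rev = sys.argv[1], sys.argv[2]  # svn passes REPOS REV
        changed = subprocess.run(
            ["svnlook", "dirs-changed", "-r", rev, repo],
            capture_output=True, text=True, check=True).stdout
        # Only queue commits made to the staging area, not to trunk.
        if any(line.startswith("staging/")
               for line in changed.splitlines()):
            with open("/var/spool/pqm/queue", "a") as q:
                q.write("%s %s\n" % (repo, rev))

    if __name__ == "__main__":
        main()

The point is only that nothing here depends on the VCS being
distributed; the queueing and the offloaded test run work the same.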
Problems due to mid-air collisions among revisions are not frequent.
Problems while running the pre-commit build&test are very common.
Delegating this to a remote machine that does the work at some
indeterminate time is definitely wrong, IMO.
--
Óscar