continuous integration/testing for python packages [Was: Is it worth back porting PEP 3147...]

Nicolas Chauvat nicolas.chauvat at
Tue Apr 27 09:01:28 BST 2010


[discussion started at
should we continue or trim some of the cc'ed lists?]

On Mon, Apr 26, 2010 at 06:41:16PM -0400, Barry Warsaw wrote:
> On Apr 26, 2010, at 06:35 PM, Nicolas Chauvat wrote:
> >On Thu, Apr 22, 2010 at 01:52:11PM -0400, Barry Warsaw wrote:
> >> How much of the transition testing is automated?  It would be very interesting
> >> for example, to have a test framework that could run any combination of Python
> >> packages against various versions of Python, and get a report on the success
> >> or failure of it.  This may not be a project for the distros of course - I
> >> think upstream Python would be very interested in something like this.  For
> >> example, a tool that grabbed packages from the Cheeseshop and tested them
> >> against different versions would be cool.  If ever gets off the
> >> ground, that might be the best place to put something like this together
> >> (though we'd care less about OSes that aren't Debian and Ubuntu).
> >
> >Unfortunately, Logilab does not have much man-power to offer to set
> >this up at the moment, but would something like
> > fit your description of a test framework ?
> That's for continuous integration of Mercurial, right?


> >We also have it running at and of course:
> >
> >
> >
> >As you can see from the second and third links, the tests include
> >lintian and piuparts checks.
> >
> >Is it something like this that you had in mind?
> Yes.  What are you using to drive this?  I'm not really up on CI tools, but
> Hudson has been getting a lot of buzz.

We are using apycot, which is GPL software mainly developed and
maintained by Logilab, but slowly reaching a wider audience.

It uses a web framework to store the information in a database and
provide a web user interface, plus slave testing bots running on one
or more hosts that fetch the next task from the queue, execute it,
and store the results in the database.
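The architecture above can be sketched roughly as follows. This is an illustrative toy, not apycot's actual API: `TaskQueue`, `run_bot` and the task strings are all hypothetical names standing in for the database-backed queue and the testing bots described above.

```python
# Hypothetical sketch of the queue-and-bots architecture described above:
# a central store holds pending test tasks; each bot polls it, executes
# the next task, and writes the result back. Names are illustrative.

class TaskQueue:
    """Stand-in for the web framework's database of tasks and results."""
    def __init__(self, tasks):
        self.pending = list(tasks)
        self.results = []

    def next_task(self):
        """Hand out the next pending task, or None when the queue is empty."""
        return self.pending.pop(0) if self.pending else None

    def store_result(self, task, status):
        self.results.append((task, status))

def run_bot(queue, execute):
    """One testing bot: fetch, execute, report, until the queue is drained."""
    while (task := queue.next_task()) is not None:
        status = execute(task)          # e.g. run the package's test suite
        queue.store_result(task, status)

queue = TaskQueue(["pkg-a unit tests", "pkg-b lintian check"])
run_bot(queue, execute=lambda task: "success")
print(queue.results)
```

Because the bots only pull from the queue, adding capacity is just a matter of starting more bot processes on more hosts.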

> What I like about your display is that a failure in one area does not
> necessary mean a failure elsewhere.  That way you can better see the overall
> health of the package.

You may find the following blog posts about apycot, and the ways its
information can be displayed, interesting:

> as nearly automatic and effortless packaging in Debian and Ubuntu.

We tried fully automatic packaging of Python programs years (8?) ago
and did not succeed, because distutils and setuptools were too far
removed from Debian packaging concerns.

Introducing in mypackage/ and mypackage/ all the
information needed to generate the debian/* files without the need to
modify them eventually meant more or less copying their whole content,
for there is actually not much to generate. It also meant using a less
efficient toolchain because of the added conversion step.

We moved to having tools that check the consistency of the information
provided by __pkginfo__ and debian/* files and make it easier to build
the Debian packages. These tools are
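The kind of consistency check mentioned above could look like the following sketch. This is not Logilab's actual tool; the file contents, regexes and function names are illustrative assumptions about comparing the version declared in __pkginfo__ with the latest debian/changelog entry.

```python
import re

# Illustrative consistency check (not Logilab's actual tool): verify
# that the version in mypackage's __pkginfo__ matches the upstream part
# of the most recent debian/changelog entry.

PKGINFO = 'version = "1.2.0"\n'                              # sample contents
CHANGELOG = "mypackage (1.2.0-1) unstable; urgency=low\n"    # sample contents

def pkginfo_version(text):
    """Extract the version string declared in a __pkginfo__-style file."""
    return re.search(r'version\s*=\s*"([^"]+)"', text).group(1)

def changelog_version(text):
    """Extract the upstream part of a Debian version like 1.2.0-1."""
    return re.search(r'\(([^)-]+)', text).group(1)

if pkginfo_version(PKGINFO) == changelog_version(CHANGELOG):
    print("versions consistent")
else:
    print("version mismatch")
```

Checks like this catch the common mistake of bumping one file and forgetting the other, without trying to generate debian/* automatically.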

Packaging a piece of Python software now requires a bit of (easy) work
at first, but subsequent releases only need one or two commands. And
the dh_python* helper scripts have reduced that work even further.

> What I have in mind is defining a set of best practices, embodied as much as
> possible in tools and libraries, that provide carrots to Python developers, so
> that if they adhere to these best practices, they can get lots of benefits such
> ...
> It's things like 'python test' just working, and it has an
> impact on PyPI, documentation, release management, etc.  These best
> practices can be opinionated and simple.  If they cover only 80% of
> Python packages, that's fine.  Developers would never be forced to
> adhere to them, but it would be to their advantage to do so.

Sounds good to me :)

Nicolas Chauvat - scientific computing and knowledge management services

Full link to blog posts:

More information about the ubuntu-devel mailing list