First kernel upload for gutsy...priceless

Ben Collins ben.collins at
Sun May 6 07:19:30 UTC 2007

On Fri, 2007-05-04 at 15:01 +0100, Matt Zimmerman wrote:
> On Thu, May 03, 2007 at 07:52:48AM -0700, Ben Collins wrote:
> > On Thu, 2007-05-03 at 12:33 +0100, Matt Zimmerman wrote:
> > > First, what's the rationale for splitting linux-ubuntu-modules into a
> > > separate source package?  This requires an additional upload for every
> > > kernel ABI change, including some security updates, which is something we
> > > originally tried to minimize by bringing drivers in-tree.  I think we're now
> > > up to four (kernel, ubuntu, restricted, meta).
> > 
> > The rationale is that it gives us clear separation of our stock kernel
> > code and our third party drivers (both in maintenance and in bug
> > tracking).
> How does it help for maintenance?  Is it because you then use separate git
> trees?
> > We had talked about this quite awhile back, with Kees and Martin, and
> > they both agreed that one more package would not be a problem at all.
> I worry that they are too polite to complain. ;-)

I worry too :)

But, since we only get an ABI bump, and thus a full source+lrm+lum+lbm
+meta upload, on average once every six months or more, it shouldn't be
too much overhead. Currently we have 3 dependent packages (lrm, lbm and
meta), so one more really isn't adding much.

> > > I'm also concerned about the possiblity for users to end up without
> > > linux-ubuntu-modules, losing functionality.  The ipw2200 firmware, for
> > > example, is in this package, and without it the driver fails in mysterious
> > > (to a desktop user) ways.  The metapackages are convenient, but they have
> > > not been a panacea, and we already have occasional problems of this type
> > > with l-r-m.
> > 
> > 
> > The linux-image-generic package will depend on the matching kernel image
> > and ubuntu modules.
> > 
> > This means that it is impossible for someone to perform an upgrade from
> > feisty and get linux-image-2.6.22-1-generic, but not get
> > linux-ubuntu-modules-2.6.22-1-generic. There is nothing to cause the
> > kernel image to get updated without also bringing in the modules.
> > 
> > For them to get linux-image-2.6.22-1-generic without
> > linux-ubuntu-modules-2.6.22-1-generic would mean they did not have
> > linux-image-generic installed, and manually installed the kernel image.
> As I noted above, we have already seen problems like this in the field.
> Users do end up without the metapackages installed, and with unexpected
> combinations of packages.  Users can and do manually install packages for
> one reason or another.  I did it with 2.6.22-1-generic, and if I hadn't
> happened to know first-hand what you were up to, I would have been very
> confused indeed.
> A user who has linux-image-2.6.20-15-generic won't realize that
> linux-image-2.6.22-1-generic is something quite different from a later
> version of the same tree.  We should always be very careful when changing
> the semantics of a package without changing its name, because it will clash
> with user expectations.  In this case, the price of unmet expectations can
> be, at worst, a system which doesn't boot.
> The only way to reliably avoid inconsistency is with strict dependency
> relationships.  If foo 1.0 is equivalent to foo 1.1 + bar 1.1, it's
> appropriate for foo 1.1 to depend on bar.  Consider dpkg and dselect for an
> example of a similar situation, and the care with which it was handled.

We get this from release to release even without creating the problems
ourselves :) I was looking at adding a Recommends for
linux-ubuntu-modules to the linux-image packages to help alleviate this.
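As a rough sketch of what those relationships could look like (the stanzas below are illustrative, not the actual debian/control entries):

```
Package: linux-image-generic
Depends: linux-image-2.6.22-1-generic, linux-ubuntu-modules-2.6.22-1-generic

Package: linux-image-2.6.22-1-generic
Recommends: linux-ubuntu-modules-2.6.22-1-generic
```

With the hard Depends on the metapackage plus a Recommends on the image itself, even someone who manually installs just the kernel image would get the modules pulled in on systems configured to install Recommends by default.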

> > > What happens if the patch fails to apply, or the patched source fails to
> > > build?  How does it affect the build time for kernel uploads?
> > 
> > In order to get these patches in, I required that they be backed by our
> > community, both in some measurable manpower and in infrastructure. So
> > bugs on these packages, and patching issues that arise in them, will be
> > handled by these teams (Ubuntu Studio and ubuntu-xen, for example). If
> > issues can't be fixed, then the flavour will get disabled. Once we get
> > near release, we will evaluate each flavour for whether or not it will
> > remain for release.
> This is disconcerting to me.  Once these patches are incorporated into a
> stable release, Canonical is committed to supporting them even if the teams
> behind them wane and disappear.  If a security patch conflicts with one of
> them, this could seriously delay the testing and release of critical
> updates.
> Wouldn't it be wiser to keep these separate?  If these secondary flavours
> were provided by another source package in universe, this would avoid
> blocking core development.

If we kept them separate, it would increase the amount of effort to do
security uploads, since _every_ upload of linux-source (with or without
an ABI bump) would require us to rebuild linux-xen, linux-openvz, linux-rt,
and that's much more overhead than maintaining the diff in our tree. Not
to mention, if these packages move to main, we are committing ourselves
just the same. Keeping them in the main tree actually helps to maintain
them.

> > Since the new build setup will decrease the overall time for the
> > package build (the last upload did not include concurrency support to
> > take advantage of the SMP buildds, but the next one will), I think the
> > time we save there makes up for the extra time to build the extra kernels.
> If we've gained extra time in the build, shouldn't we take advantage of it
> to make development less strenuous, with faster builds for kernel
> developers, rather than spending it on more kernel flavours in the main
> tree?

Kernel developers are doing just "fakeroot debian/rules binary-generic",
so they won't have to go through this entire process. I personally do
full builds on my systems to ensure that all flavours build, so the rest
of the developers don't need to do that.

So we are getting benefits either way.


More information about the kernel-team mailing list