First kernel upload for gutsy...priceless

Matt Zimmerman mdz at ubuntu.com
Fri May 4 14:01:07 UTC 2007


On Thu, May 03, 2007 at 07:52:48AM -0700, Ben Collins wrote:
> On Thu, 2007-05-03 at 12:33 +0100, Matt Zimmerman wrote:
> > First, what's the rationale for splitting linux-ubuntu-modules into a
> > separate source package?  This requires an additional upload for every
> > kernel ABI change, including some security updates, which is something we
> > originally tried to minimize by bringing drivers in-tree.  I think we're now
> > up to four (kernel, ubuntu, restricted, meta).
> 
> The rationale is that it gives us clear separation of our stock kernel
> code and our third party drivers (both in maintenance and in bug
> tracking).

How does it help for maintenance?  Is it because you then use separate git
trees?

> We had talked about this quite a while back, with Kees and Martin, and
> they both agreed that one more package would not be a problem at all.

I worry that they are too polite to complain. ;-)

> > I'm also concerned about the possibility for users to end up without
> > linux-ubuntu-modules, losing functionality.  The ipw2200 firmware, for
> > example, is in this package, and without it the driver fails in mysterious
> > (to a desktop user) ways.  The metapackages are convenient, but they have
> > not been a panacea, and we already have occasional problems of this type
> > with l-r-m.
> 
> The linux-image-generic package will depend on the matching kernel image
> and ubuntu modules.
> 
> This means that it is impossible for someone to perform an upgrade from
> feisty and get linux-image-2.6.22-1-generic, but not get
> linux-ubuntu-modules-2.6.22-1-generic. There is nothing to cause the
> kernel image to get updated without also bringing in the modules.
> 
> For them to get linux-image-2.6.22-1-generic without
> linux-ubuntu-modules-2.6.22-1-generic, would mean they did not have
> linux-image-generic installed, and manually installed the kernel image.

As I noted above, we have already seen problems like this in the field.
Users do end up without the metapackages installed, and with unexpected
combinations of packages.  Users can and do manually install packages for
one reason or another.  I did it with 2.6.22-1-generic, and if I hadn't
happened to know first-hand what you were up to, I would have been very
confused indeed.

A user who has linux-image-2.6.20-15-generic won't realize that
linux-image-2.6.22-1-generic is something quite different from a later
version of the same tree.  We should always be very careful when changing
the semantics of a package without changing its name, because it will clash
with user expectations.  In this case, the price of unmet expectations can
be, at worst, a system which doesn't boot.

The only way to reliably avoid inconsistency is with strict dependency
relationships.  If foo 1.0 is equivalent to foo 1.1 + bar 1.1, it's
appropriate for foo 1.1 to depend on bar.  Consider dpkg and dselect for an
example of a similar situation, and the care with which it was handled.
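To make that concrete, the kind of strict relationship I mean would look
roughly like this in debian/control (a hypothetical sketch -- the stanza
below is illustrative, not the actual archive metadata):

    Package: linux-image-2.6.22-1-generic
    Depends: linux-ubuntu-modules-2.6.22-1-generic
    Description: Linux kernel image for version 2.6.22 on generic hardware

With a hard Depends like that, apt and dpkg will refuse to leave the image
configured without the matching modules package, whether or not the user
has linux-image-generic installed.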

> > What happens if the patch fails to apply, or the patched source fails to
> > build?  How does it affect the build time for kernel uploads?
> 
> In order to get these patches in, I required that they be backed by our
> community both in some measurable manpower and in infrastructure. So
> bugs on these packages, and patching issues that arise in them will be
> handled by these teams (Ubuntu Studio and ubuntu-xen for example). If
> issues can't be fixed, then the flavour will get disabled. Once we get
> near release, we will evaluate the flavour for whether or not it will
> remain for release.

This is disconcerting to me.  Once these patches are incorporated into a
stable release, Canonical is committed to supporting them even if the teams
behind them wane and disappear.  If a security patch conflicts with one of
them, this could seriously delay the testing and release of critical
updates.

Wouldn't it be wiser to keep these separate?  If these secondary flavours
were provided by another source package in universe, this would avoid
blocking core development.

> Since the new build setup will decrease the overall time for the
> package build (last upload did not include concurrency support to take
> advantage of the SMP buildds, but next one will), I think the time we
> save there makes up for the extra time to build the extra kernels.

If we've gained extra time in the build, shouldn't we take advantage of it
to make development less strenuous, with faster builds for kernel
developers, rather than spending it on more kernel flavours in the main
tree?

-- 
 - mdz




More information about the kernel-team mailing list