Feedback on the Ubuntu kernel for server and cloud
Scott Moser
smoser at ubuntu.com
Tue Mar 29 14:24:44 UTC 2011
On Tue, 29 Mar 2011, Loïc Minier wrote:
> On Tue, Mar 29, 2011, John Johansen wrote:
> > What are we doing right?
> > What are we doing wrong?
> > What configs should we change?
> > What new features should we include?
>
> Very EC2 centric, and most are already known, but I'll repeat my pet
> issues here for completeness:
> * some modules are purposely not shipped in the virtual kernel to save
> space; I'd personally prefer if Ubuntu kernel features were identical
> inside and outside the cloud, but I wouldn't mind if I had to install
> an extra package to get modules which would have been stripped out;
> see LP #732046
The -virtual kernel was originally created as a "sub-flavour"
explicitly to address the size issue. My natty laptop here shows:
$ du -hs /lib/modules/$(uname -r)
133M /lib/modules/2.6.38-7-generic
whereas an EC2 instance of natty shows:
$ du -hs /lib/modules/$(uname -r)
22M /lib/modules/2.6.38-7-virtual
It's actually not a big deal for EC2 images; the default root filesystem
is either 8G or 10G depending on instance-store or EBS. But that 110M
makes a real difference if you're trying to make smaller images, and
in the end it affects network usage wherever the images are transferred.
Several of the modules are almost completely worthless to install in a
"virtual" instance; 802.11 drivers, SCSI adapter drivers, and the like
just waste space.
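To see for yourself which driver classes dominate the module tree, a small shell sketch (the helper name and the example paths are mine, not part of any Ubuntu tooling) can sum the on-disk size of .ko files per directory:

```shell
# Hypothetical helper: total bytes of kernel modules (*.ko) under a
# directory, e.g. /lib/modules/$(uname -r)/kernel/drivers/net.
# Uses GNU find's -printf, so this assumes a GNU userland.
module_bytes() {
    find "$1" -type f -name '*.ko' -printf '%s\n' 2>/dev/null |
        awk '{ s += $1 } END { print s + 0 }'
}

# Example: rank top-level driver directories by total module size.
# for d in /lib/modules/$(uname -r)/kernel/drivers/*/; do
#     printf '%s\t%s\n' "$(module_bytes "$d")" "$d"
# done | sort -rn | head
```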
I also would prefer that the '-extra' modules be available for -virtual.
> * would be nice if we weren't using GRUB 1 anymore; it's a bit
> confusing to end up with both in instances
We really don't have a choice here. 'pv-grub' is what runs in EC2, and
it reads /boot/grub/menu.lst; that is really the only option for a sane
boot loader in Xen guests running without HVM. Someone could dedicate
resources to getting GRUB 2 running inside paravirtualized Xen, and I'm
sure the world would be happy for it, but at this point grub-0.97-like
behavior is all we have.
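For reference, pv-grub boots whatever the instance's /boot/grub/menu.lst points at; a minimal entry looks roughly like this (kernel version, root device, and console arguments are illustrative, not what the Ubuntu images necessarily ship):

```
default 0
timeout 0

title Ubuntu 11.04, kernel 2.6.38-7-virtual
    root (hd0)
    kernel /boot/vmlinuz-2.6.38-7-virtual root=/dev/xvda1 ro console=hvc0
    initrd /boot/initrd.img-2.6.38-7-virtual
```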
> * I didn't test on natty, only on maverick, but it was quite manual to
> install a different kernel flavor in an EC2 instance and get it
> picked up by pv-grub
This is entirely an EC2 issue, not relevant in other "virtual"
scenarios. It is due to decisions I made in grub-legacy-ec2.
We wanted the images to work both in
UEC and other places (where grub2 is used) and in EC2, so I added this
package, which does not conflict with grub2 and manages /boot/grub/menu.lst.
In lucid, the images had two kernels (-ec2 and -virtual), and the
-virtual kernel would *never* boot on EC2. I wanted to support
upgrading from lucid to maverick, so I added a whitelist in
/usr/sbin/update-grub-legacy-ec2 (look for 'is_xen_kernel' if you're
interested). There are definitely improvements that could be made to what
is there, but figuring out exactly what is going to work on EC2 is not the
easiest thing to do. Thus, the whitelist.
The one safeguard I added for my mistakes is that if a program
'is_xen_kernel' already exists in the PATH, it will be used rather
than the built-in one. If you write a script that takes a path to a
kernel and exits 0 or 1 based on whether it is bootable in EC2, then
'update-grub-legacy-ec2' will take advantage of it.
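As a sketch, such an override could be as simple as a filename check. The flavor whitelist below is my own guess at a sensible policy, not the logic update-grub-legacy-ec2 actually ships:

```shell
#!/bin/sh
# Hypothetical is_xen_kernel replacement: given a kernel image path,
# return 0 if it should be offered to pv-grub, 1 otherwise.
# Accepting only the -virtual and -ec2 flavors is an assumed policy.
is_xen_kernel() {
    case "${1##*/}" in
        vmlinuz-*-virtual|vmlinuz-*-ec2) return 0 ;;
        *) return 1 ;;
    esac
}

# When installed as a standalone executable on PATH, the script body
# would simply be:
# is_xen_kernel "$1"; exit $?
```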
I just now had the thought that I could probably lay some scripts down
in /etc/grub.d to manage /boot/grub/menu.lst and ditch the
update-grub-legacy-ec2 package. I don't know that that would be any less
confusing, though.
Basically, there's not a lot we can do about needing menu.lst on EC2.
> * the server kernel in maverick (not the virtual one) was lacking some
> features to work as a complete drop in replacement for the virtual
> one and provide Xen console, reboot etc. -- I didn't test in natty
>
> I'm confident that some of these have been resolved in natty and I
> apologize for not having tested the above with natty.
>
> On a happy note, I found it a real pleasure to use EC2 images; good
> work!
More information about the ubuntu-server mailing list