[Oneiric-Topic] Server Boot
clint at ubuntu.com
Wed Mar 30 18:18:26 UTC 2011
On Wed, 2011-03-30 at 17:21 +0200, Alvin wrote:
> Yes, they are certainly still issues (and the primary reason the company I
> work for is abandoning Ubuntu.)
> I agree that a lot of servers are not often rebooted, but not every server is
> a webserver. Some are used only during certain hours and can be booted
> automatically (BIOS or WOL) when needed in order to keep the electricity bill
> down. Booting should be a reliable and automated process. Accurate logging is
> important in order to know what went wrong in case the unthinkable happens.
> The current boot.log looks like:
> > mount.nfs: DNS resolution failed for 192.168.xxx.3: Name or service not known
> > mount.nfs4: Failed to resolve server exampleserver: Name or service not known
> > mountall: mount /srv/example  terminated with status 32
> > mount error(101): Network is unreachable
> while in reality filesystems are mounted. Now, when something goes wrong, the
> log is identical. Conclusion: boot.log is useless. (Actually, the log is
> probably correct; it can't resolve server names at that specific time.)
> Proper boot logging would be popular.
Agreed. This was on the list of bugs in upstart that were targeted to
be fixed in natty, but it looks like it will not get done. Logging the
output of daemons is *critical* to debugging boot issues.
On another note, I believe Colin Watson added some support to the
plymouth details plugin (which is, IIRC, the default on the server) to
show upstart's starting/started events along with other console-bound
messages. Not sure if that has to be enabled manually or what, but it's
worth adding to the discussion whether we want to see these by default
(IMO, we do).
> Take the following example of a server boot. Let's also assume that nothing
> goes wrong that could lead to a busybox console. (It certainly can!)
> So, you're now sitting in front of a nice prompt. Everything looks ok, but is
> it? The server mounts NFS shares from another server, it runs KVM/libvirt with
> a netfs storage pool for its virtual machines and a quasselcore for IRC that
> stores its data in a PostgreSQL database on another server. The local filesystem
> uses mdadm for RAID1 and LVM on top of that. Very server-like. (I once made this
> setup to test some things.) In order to keep things under control, there are
> /no/ LVM snapshots. That is another ugly story.
Pretty much all of this is solved with better logging/display of the
success/failure of items during boot, since you will have some better
idea of what happened.
> So, what happens now:
> - The RAID will be broken! 
Re: mdadm.. I find our software RAID support to be quite unsatisfactory.
I think it's worth focusing just on this for a session to prioritize
which bugs will be fixed for Oneiric and even suggest further bugs that
need to be fixed before the P release. I know Surbhi, on the Foundations
team, has spent some time improving mdadm quite a bit, but the bug list
is long and she hasn't gotten to everything yet.
> - The NFS shares in /etc/fstab might not be mounted, 
> even when you told the system to wait with _netdev. 
> - Your virtual machines on netfs will not be running. 
> - The quasselcore with external db will not be started. 
> The array can be assembled by running a command and all of the above daemons
> can be started manually.
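For reference, the manual recovery being described usually amounts to
something like the following (the service names are assumptions based on
the setup above, so check what your system actually calls them):

```
# Re-assemble any arrays listed in /etc/mdadm/mdadm.conf
sudo mdadm --assemble --scan

# Retry the NFS mounts from /etc/fstab, now that name resolution works
sudo mount -a -t nfs,nfs4

# Start the daemons that failed at boot (job/script names assumed)
sudo service libvirt-bin start
sudo service quasselcore start
```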
> I talked about some of those topics on IRC, and the following workarounds came
> up. There are also some workarounds in the bug reports.
> - Put NFS shares in /etc/fstab, and don't configure them as netfs storage
> - Put the IP addresses of your NFS servers in /etc/hosts.
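Concretely, those two workarounds might look like this (the hostname and
masked address are made up for illustration):

```
# /etc/hosts -- pin the NFS server's address so mounting does not
# depend on DNS being up yet
192.168.xxx.3   exampleserver

# /etc/fstab -- mount the share directly (not as a libvirt netfs pool);
# _netdev asks mountall to wait for the network before trying
exampleserver:/srv/example  /srv/example  nfs  _netdev,rw  0  0
```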
> For most servers, speeding up the boot process is less important than
> reliability. Why not take a look at how Debian does it? You can disable
> running the boot scripts in parallel with 'CONCURRENCY=none' in
I think we can achieve a reliable boot sequence with upstart without
giving up on parallelism. A critical piece of this is the logging bit
that you mentioned earlier, so that we can tell what was actually
happening when things went wrong. Please see my previous message about
fences too. I really think that most of the issues people have with the
boot stem from reaching runlevel 2 a bit too early.
Also, it's important to mention another project that James Hunt has been
working on: enabling an 'interactive boot' in upstart.
Basically he has a job that, when enabled, will ask you to confirm each
starting event on the system while plymouth is running (which is right
up until the gettys start). In this way, you can walk through the boot,
seeing things succeed and noting when it locks up or fails.
> Also, think about daemons of commercial software without upstart scripts. You
> never know whether they will start at boot or not.
Sure you do: they will start after the runlevel 2 event is emitted. If
further ordering is needed, this can still be done without a full
conversion to upstart, either by specifying new update-rc.d parameters
to start earlier/later in runlevel 2, or by adding an upstart job that
starts their service before anything needs it. This is a perfectly valid
solution for inserting your sysvinit job into the upstart-only sequence
when a full conversion is too difficult:
start on starting cron
stop on stopped cron
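Fleshed out, such a wrapper job might look like the following sketch
(the job file and init script names are hypothetical):

```
# /etc/init/vendor-daemon.conf -- wrap a sysvinit-only daemon so it
# runs at a defined point in the upstart sequence
start on starting cron
stop on stopped cron

pre-start exec /etc/init.d/vendor-daemon start
post-stop exec /etc/init.d/vendor-daemon stop
```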