L-o-n-g delay for rc.local in systemd on Ubuntu.

Ralf Mardorf silver.bullet at zoho.com
Wed Aug 9 14:05:23 UTC 2017


On Wed, 09 Aug 2017 20:39:08 +1000, Karl Auer wrote:
>On Wed, 2017-08-09 at 12:21 +0200, Ralf Mardorf wrote:
>> On Wed, 09 Aug 2017 10:22:06 +0200, Xen wrote:  
>> > I don't know why something would take forever just because it is
>> > in rc.local.  
>> There are two more likely possibilities.
>> 
>> 1. A race condition related to the startup process
>> 2. An issue that isn't related to the startup process at all  
>
>In my experience (unscientific) loooong delays in startup are almost
>always caused by one of two things - a dying disk, or a network
>problem.

I was thinking about a network issue, too, but OTOH startup can finish
even while the network isn't ready; it depends on the kind and usage of
the network. My Linux installs are on SSDs and start just a few
services, so startup likely takes less than 5 seconds, maybe <= 3
seconds, while my router might take a minute if it was turned off and
needs to establish everything. Startup finishes and I can start a user
session, but then I need to wait for the router, assuming I need
Internet access. If the OP needs some kind of network access to finish
the startup, that is something the OP should mention.

However, I doubt that rc.local would be the appropriate place to
establish such a critical network connection, even when using upstart
instead of systemd, since rc.local is aimed at stuff that doesn't fit
into runlevels, and the usual recommendation there for avoiding race
conditions is to use the sleep command with values of several seconds.
That adds artificial delays much longer than an "extremely lightweight"
Linux nowadays needs to boot when installed on an SSD.

A dying disk usually leaves traces in the startup messages. Even if
nothing points directly to a broken HDD, I doubt it would affect only
the services started by rc.local; it more likely would affect other
services, too.
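FWIW, under systemd the race can be avoided without any sleep, by
ordering a unit after the network instead of putting the job into
rc.local. A minimal sketch (the unit and script names are made up for
illustration; it also assumes a *-wait-online service such as
systemd-networkd-wait-online or NetworkManager-wait-online is enabled,
otherwise network-online.target doesn't actually wait for anything):

```
# /etc/systemd/system/needs-network.service  -- hypothetical name
[Unit]
Description=Job that must wait for a working network
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# Placeholder path for the job that used to live in rc.local:
ExecStart=/usr/local/sbin/needs-network.sh

[Install]
WantedBy=multi-user.target
```

That way the unit is simply delayed until the network is up, instead of
delaying the whole boot by a fixed number of seconds.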

We should keep in mind that the OP uses a VM, but we don't know which
one. VirtualBox with a qcow image instead of a VDI, stored on a SATA 2
HDD, at least did cause issues for me. They went away when I migrated
to a SATA 3 SSD.
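As for spotting a dying disk in the startup messages, a crude filter
over the kernel log is often enough. A sketch (the grep pattern is my
own heuristic, not exhaustive):

```shell
#!/bin/sh
# Heuristic filter for disk-trouble lines in kernel output.
# Feed it `journalctl -b -k` (systemd) or `dmesg`.
disk_errors() {
    grep -iE 'I/O error|ata[0-9]+.*(error|failed)'
}

# Usage on a live system:
#   journalctl -b -k | disk_errors
```

If that prints ATA resets or I/O errors, look at the disk before
blaming rc.local.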
