[Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems
Eric Desrochers
eric.desrochers at canonical.com
Tue Mar 20 14:33:59 UTC 2018
** Changed in: lxd (Ubuntu Trusty)
Status: New => Invalid
** Changed in: lxd (Ubuntu Xenial)
Status: New => Invalid
** Changed in: lxd (Ubuntu Artful)
Status: New => Invalid
** Changed in: avahi (Ubuntu Trusty)
Status: New => In Progress
** Changed in: avahi (Ubuntu Xenial)
Status: New => In Progress
** Changed in: avahi (Ubuntu Artful)
Status: New => In Progress
--
You received this bug notification because you are a member of Ubuntu
Sponsors Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1661869
Title:
maas install fails inside of a 16.04 lxd container due to avahi
problems
Status in MAAS:
Invalid
Status in avahi package in Ubuntu:
In Progress
Status in lxd package in Ubuntu:
Invalid
Status in avahi source package in Trusty:
In Progress
Status in lxd source package in Trusty:
Invalid
Status in avahi source package in Xenial:
In Progress
Status in lxd source package in Xenial:
Invalid
Status in avahi source package in Artful:
In Progress
Status in lxd source package in Artful:
Invalid
Bug description:
[Original Description]
The bug, and workaround, are clearly described in this mailing list thread:
https://lists.linuxcontainers.org/pipermail/lxc-users/2016-January/010791.html
I'm trying to install MAAS in an LXD container, but the installation
fails due to avahi package install problems. I'm tagging all packages
here.
[Issue]
Avahi sets a number of rlimits on startup, including a maximum number
of processes (nproc=2) and limits on memory usage. These limits are hit
in a number of cases: the process limit is exceeded when LXD containers
run in 'privileged' mode, so that avahi has the same uid in multiple
containers, and large networks can trigger the memory limit.
The fix is to remove these default rlimits completely from the
configuration file.
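The rlimits in question live in the [rlimits] section of
/etc/avahi/avahi-daemon.conf. As a rough sketch, using the values
quoted in this report (the exact defaults shipped may vary by
release), the fix deletes entries of this shape:

```ini
[rlimits]
# maximum number of avahi-daemon processes; too low for multiple
# containers sharing one uid namespace
rlimit-nproc=2
# ~4MB data segment cap; too small on networks with many services
rlimit-data=4194304
```

With these keys removed, avahi-daemon should simply inherit the
normal system limits instead of enforcing its own.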
[Impact]
* Avahi is unable to start inside containers without UID namespace isolation because an rlimit on the maximum number of processes is set to 2 by default. When a container launches Avahi, the total number of Avahi processes across all containers exceeds this limit and Avahi is killed. Because the service fails to start, the package also fails at install time rather than only at runtime.
* Some users also see Avahi exit on networks with a large number of services, because the default memory limit was quite small (4MB). See LP: #1638345.
[Test Case]
* set up lxd (apt install lxd, lxd init, get working networking)
* lxc launch ubuntu:16.04 avahi-test --config security.privileged=true
* lxc exec avahi-test sudo apt install avahi-daemon
This will fail if the parent host has avahi-daemon installed. If it
does not, set up a second container (avahi-test2) and install
avahi-daemon there; that install should then fail, as the issue
requires two copies of avahi-daemon in the same uid namespace.
[Regression Potential]
* The fix removes all rlimits that avahi configures on startup, an
extra step most programs do not take (limiting memory usage, running
process count, etc.). It is possible that an unknown bug could then
consume significant system resources once these limits are gone,
where previously it was hidden by Avahi crashing instead. However,
this risk is significantly reduced because the change has been
shipping upstream for many months with no reports of new problems,
while fixing a number of existing crashes.
* The main case in which this may not fix the issue is when users
have modified their avahi-daemon.conf file. It will fix new installs
and most existing installs, as most users do not modify the file, and
users who have modified it may be prompted on upgrade to replace it.
[Other Info]
* This change already exists upstream in 0.7, which is in bionic. An
SRU is required for artful, xenial and trusty.
To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions
More information about the Ubuntu-sponsors
mailing list