AppArmor and Upstart

John Johansen john.johansen at
Thu Dec 22 00:12:08 UTC 2011


On 12/21/2011 11:52 AM, Jamie Strandboge wrote:
> (Please keep me in CC, I am not subscribed to the list. CC'ing Kees as
> well, since he did most of the existing work on this.)
> Hi,
> James asked me to send something to the list on why AppArmor is not
> currently converted to upstart in Ubuntu.
> == Background ==
> AppArmor[1] is a Mandatory Access Control (MAC) system for Linux, and is
> used extensively in Ubuntu. It consists of an administrator or
> distribution defined policy for specified binaries. The policy is
> defined in human-readable text files in /etc/apparmor.d. This
> human-readable policy is compiled and loaded into the kernel via the
> apparmor_parser command. For policy to be in effect for a binary, the
> policy for that binary must be loaded into the kernel prior to the
> binary's execution. Policy can be reloaded with the new policy put into
> effect during runtime if the binary already has policy attached to it
> (ie, it started after policy was loaded for it). Depending on the policy
> being compiled, it can take up to several seconds to compile the policy
> then load it into the kernel. Policy must be compiled anew when the
> policy or the kernel changes. To improve boot speed, binary caches of
> the compiled policy are used in Ubuntu (/etc/apparmor.d/cache) and the
> binary policy is loaded into the kernel as fast as the cache can be read
> from disk. Updating of the cache happens automatically after
> compilation.
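The compile-and-cache flow described above can be sketched as follows. This is a minimal sketch: the profile name is an example, the `-nt` mtime check only models the freshness decision the parser makes internally, and only `-r`, `--write-cache` and `--cache-loc` are real apparmor_parser flags.

```shell
# Illustrative sketch of the compile-vs-cache decision.  The real check is
# internal to apparmor_parser; this models it with a file-mtime comparison.

profile=/etc/apparmor.d/usr.sbin.cupsd          # human-readable policy
cache=/etc/apparmor.d/cache/usr.sbin.cupsd      # compiled binary policy

cache_is_fresh() {
    # Cache is usable when it exists and is newer than the profile source.
    [ -f "$2" ] && [ "$2" -nt "$1" ]
}

if cache_is_fresh "$profile" "$cache"; then
    echo "load binary cache (fast path)"
else
    echo "recompile, update cache, then load (slow path)"
fi

# The real load (requires root and an AppArmor-enabled kernel):
#   apparmor_parser -r --write-cache --cache-loc=/etc/apparmor.d/cache "$profile"
```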
> == Current State ==
> Currently in Ubuntu, policy load happens in 2 stages:
> 1. within the job of an upstartified package (eg cups)
> 2. via the SysV initscript (/etc/init.d/apparmor) which is required for
> packages that have AppArmor policy, but not a job file (eg evince)
> For upstartified packages with an apparmor profile, the job file must
> load the AppArmor policy prior to execution of the confined binary (see
> above). As a convenience, the /lib/init/apparmor-profile-load helper is
> provided to simplify upstart integration.
> For packages that ship policy but do not have a job file, policy must be
> loaded sometime before application launch, which is why stage 2 is
> needed. Stage 2 will (re)load all policy.
> On Ubuntu, in both stage 1 and stage 2, binary caches are used unless it
> is determined that policy must be recompiled (eg, booting a new
> kernel). 
> Some applications are corner cases, like dhclient. Because it needs to
> have its policy loaded extremely early in the boot process, the
> network-interface-security.conf upstart job is used to ensure this happens.
> == Considerations ==
>  * the second stage reloads policy that may have already been loaded in
> stage 1. In practice, this is not a problem because not that much policy
> is loaded in upstart jobs and the reload of policy is always done using
> the cache (since stage 1 will update the cache automatically as needed).
>  * there is a race condition between when the apparmor SysV initscript
> runs and when people might be using applications (ie, there is no
> guarantee that /etc/init.d/apparmor will run before a user starts
> evince). In practice, this does not tend to happen.
>  * people who enable the apparmor-profiles package and use policy that
> confines applications with upstart jobs do not have that policy loaded
> via the upstart jobs. The classic example is smbd. This can be fixed in
> the apparmor-profiles package by also shipping an upstart job to load the
> profile (or via the samba packaging, as with avahi)
>  * people defining new policy for applications with upstart jobs need
> not only to create the policy but also to update the upstart job to load
> it. This is not easily discoverable and is a pain point for administrators
> Due to concerns with boot speed, it was decided that this policy load
> should not happen during early boot, but rather 'as late as possible'.
> The use of binary caches helped significantly here, but policy is
> recompiled on the first boot of a new kernel and loading policy early
> will delay time to login screen by several seconds or more.
> == Going forward ==
> This has been discussed many times.
> The status quo works 'ok' at the distribution level. If we ship policy
> for an upstartified application, then we adjust the job accordingly. The
> dhclient corner case is well tested, and the SysV initscript works well
> enough in practice. There is still the theoretical race condition as
> well as the non-discoverability for the administrator.
> Perhaps the best solution is for the kernel to send a notification to
> userspace on exec (ie, something upstart could use) and userspace would
> load policy on demand. This would eliminate all race conditions and any
> need for loading policy in individual jobs. Updating the kernel in this
> manner seemed both intrusive and a performance bottleneck and was deemed
> not viable. 
> We could make the apparmor helper for upstart an integral part of
> Upstart such that when a job is started, Upstart automatically loads
> policy for the executable. This is an interesting option, but seems to
> require considerable work. It solves the non-discoverability problem as
> well as time on distribution integration work, but does not obviate the
> need for the second stage.
> We could move the second stage to upstart. This could work but it
> doesn't solve the non-discoverability problem and due to the
> event-driven nature of Upstart, it is not clear how this job would be
> declared such that it happens after early boot but before login (any
> type of login-- lightdm, gdm, kdm, console, etc, etc). Depending on how
> this is done, it may end up being a lot of work to end up basically
> where we are now with the SysV initscript.
> So, in a nutshell, while improvements can be made, it is not clear that
> the effort involved improves the situation enough to justify it. I could
> of course be missing something. :)
> Hope this helps!
> [1]

Well, Jamie missed my preferred solution, which is to load all policy
early in one go.  This fixes the race conditions and simplifies the logic.

Policy can still be compiled and loaded after boot if a user installs
new packages etc.

This solution gets rid of the split load and avoids needing logic to
load/unload policy when services are run.

The split load came about from trying to reduce boot impact at all
costs.  However, we have made several improvements over the last few
cycles that reduce our startup costs:
- we cache precompiled policy
- the compiler is a lot faster than it was when the split was made
- the compiler uses less memory than when the split was made
- the compiler can produce policy that is smaller than when the
  split was made
- the compiler can load multiple policies in one invocation, avoiding
  the need for it to be reinvoked for each profile
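The last point is the heart of the "one go" load: hand every profile file to a single parser run. A hypothetical helper sketching this is below; load_all_policy is my name, not an existing script, and running it for real requires root and an AppArmor-enabled kernel, but the apparmor_parser flags are real.

```shell
# Single-invocation load: pass all top-level profile files to one
# apparmor_parser run instead of invoking it once per profile.
# Hypothetical wrapper for illustration.
load_all_policy() {
    # $1: profile directory (e.g. /etc/apparmor.d); subdirectories such as
    # abstractions/, tunables/ and cache/ are skipped by -maxdepth/-type f.
    find "$1" -maxdepth 1 -type f -print0 |
        xargs -0 apparmor_parser -r --write-cache --cache-loc="$1/cache"
}
```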

As long as we have a prebuilt cache, load time is fractions of a second.

To implement this solution we need to:
- improve the policy cache so that it can support more than one
  kernel/feature set.  This should remove the need to recompile when
  booting between incompatible kernels.
- improve dh_apparmor to precompile and cache for multiple kernels instead
  of just the current kernel.
- fall back to compiling on boot if for some reason the cache is broken.
  This shouldn't happen unless something has gone very wrong at package
  install time, so I think slowing down boot for this one case is probably
  acceptable.

  If this isn't okay we could always fall back to a late compile and load,
  which is racy.
- fix the load scripts to invoke the parser just once instead of once
  for each profile.
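The multi-kernel cache in the first point could, for example, be keyed by a digest of the kernel's AppArmor feature set. The kernel exports its features under /sys/kernel/security/apparmor/features (as a directory tree); the single-file input and hash-named layout below are assumptions for illustration, not the real cache format.

```shell
# Sketch: one cache subdirectory per kernel feature set, so binary policy
# for several kernels can coexist and no recompile is needed when booting
# between them.  Hypothetical layout; the real cache scheme may differ.

cache_root=/etc/apparmor.d/cache

cache_dir_for() {
    # $1: a file capturing the running kernel's AppArmor feature set
    hash=$(sha256sum "$1" | cut -c1-16)     # short, stable key
    printf '%s/%s\n' "$cache_root" "$hash"
}
```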

Whatever solution is chosen, it's really just a matter of doing it.
As Jamie said, it's been a matter of the status quo being "good enough"
that other work has been higher priority so far.


More information about the upstart-devel mailing list