[apparmor] Slow apparmor policy compilation
John Johansen
john.johansen at canonical.com
Wed Oct 5 22:04:55 UTC 2011
So this has come up again recently, and I just wanted to comment on it to
make sure people are aware of the current state.
Policy compilation can be slow, resulting in slow policy loads. We are
aware that this is still a problem despite the improvements that have
already been made, and more work is going into speeding it up.
For most people, the single easiest fix is making sure profile caching is
turned on. Ubuntu does this by default, but that may not be the case in
other distros, or if you are rolling your own.
Profile caching is done by writing a compiled binary blob out to
/etc/apparmor.d/cache (yes, I realize this isn't the ideal location).
There will be one file for each profile that is cached (not all profiles
need to be cached).
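If you want to see which profiles currently have a cached blob, you can
just list that directory (the file names mirror the profile file names):
  ls -l /etc/apparmor.d/cache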
To generate a cache file, use:
apparmor_parser -QW <profile>
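If you want to prime the cache for all of your policy at once, a rough
sketch along these lines will do it (the glob assumes the usual
/etc/apparmor.d layout; adjust to taste):
  for p in /etc/apparmor.d/*; do
      [ -f "$p" ] && apparmor_parser -QW "$p"
  done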
You will also need to do this to update the cache file, as the
apparmor_parser will not update the cache file by default if it becomes
stale. The apparmor_parser will, however, skip a stale cache file and do
a fresh compile instead. To detect staleness it uses time stamps (in a
manner similar to make) on the profile file and all the includes that get
pulled in.
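As a purely illustrative sketch of that staleness check (the parser does
this internally; the profile and include paths below are just examples):
  profile=/etc/apparmor.d/usr.sbin.ntpd
  cache=/etc/apparmor.d/cache/usr.sbin.ntpd
  # stale if the profile, or any include it pulls in, is newer than the blob
  if [ "$profile" -nt "$cache" ] || \
     [ /etc/apparmor.d/abstractions/base -nt "$cache" ]; then
      echo "cache is stale; the parser will do a fresh compile"
  fi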
If you want the cache file updated by default, add the -W option to the
initscripts, or if you are updating to AppArmor 2.7 you can add the
following line to /etc/apparmor/parser.conf:
write-cache
Note: The /etc/apparmor.d/cache directory must exist before apparmor will
write files there, even if the write-cache option is specified.
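Putting those two notes together, something like the following creates
the directory and refreshes both the kernel policy and the cache (the
profile name here is only an example):
  mkdir -p /etc/apparmor.d/cache
  apparmor_parser -r -W /etc/apparmor.d/usr.sbin.ntpd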
Now to the subject of compiler speed. The compiler is doing a lot of
work, so I am not going to claim we will ever get it to be super fast,
but there are a lot of improvements in the works (some, but not all,
will hit the next release).
The parser has multiple phases:
parse
|
semantic check
|
exact dup removal
|
re conversion
|
re factoring
|
dfa creation
|
unreachable state removal
|
minimization
|
equivalence class generation
|
dfa compression
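The compiler really is doing all of that work on every fresh load, so if
you want a rough idea of how long a fresh compile of your own policy
takes (bypassing the cache so you measure the compiler, not the cache
lookup), something like this works; -K (skip the cache) should be
available in current parsers, but check apparmor_parser --help for your
version:
  time apparmor_parser -QK /etc/apparmor.d/usr.sbin.ntpd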
For any given policy one phase may be more dominant than another. Turning
off a phase may or may not result in a performance improvement; it just
depends on the policy.
E.g. consider minimization: it is an O(n log(n)) operation. If the policy
is structured so that re factoring and dfa creation create a near minimal
dfa, then minimization is just extra overhead. However, if there are a
fair number of states to eliminate, then that means fewer states and
transitions for the compression phase, which is O(n^2).
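To put some purely illustrative numbers on that: if minimization shrinks
a 40,000 state dfa down to 20,000 states, the O(n^2) compression phase is
left with roughly a quarter of the work, which easily repays the
O(n log(n)) spent minimizing; if it only removes a handful of states, you
paid that cost for essentially nothing.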
So what are the improvements?
parse - duplicate include detection
re conversion - direct conversion of aares instead of going through an
    intermediate pcre step
re factoring - being rewritten; new algorithm, lots of improvements
dfa creation - peak memory use reduction
dfa compression - rewrite of the packing and search routines; new
    algorithms, faster, and better packing
new phase - state differential compression - will reduce dfa size and the
    number of transitions that compression needs to examine
In addition, a set of tuning knobs will be added that allow the parser to
turn phases (or parts of phases) on or off depending on whether doing so
is heuristically beneficial.
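For reference, later parser releases expose this kind of experimentation
through the -O/--optimize and -D/--dump options; the exact knob names
vary by version, so check apparmor_parser --help before relying on any of
them. A purely illustrative invocation that skips the minimization phase
might look like:
  apparmor_parser -Q -O no-minimize /etc/apparmor.d/usr.sbin.ntpd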
Overall we should see a general performance improvement in the 2-4x range, with
some policies seeing over 10x improvement.
Unfortunately this will take time, and I can't promise that all of the improvements
will hit with the next release. But at least some of them will.