[apparmor] some urgent questions

Seth Arnold seth.arnold at gmail.com
Sun Feb 13 23:56:15 UTC 2011


On Sun, Feb 13, 2011 at 7:13 AM, alexofen at gmail.com <alexofen at gmail.com> wrote:
> Hi everybody and people enthusiastic about system security,
>
> let me right away "beg your pardon" for directly asking some questions.
> This is because I have NOT found any conclusive answers by browsing the
> archives.
> ANY help and comment and hint is appreciated.

There rarely are conclusive answers. :) We all have different security goals.

> (1) concurrency vulnerability and AppArmor (AA)?
> -> in your opinion, is AA safe against vulnerabilities arising from execution
> concurrency in multiprocessor environments?
> www.cl.cam.ac.uk/teaching/0809/Security/concurrency.pdf - gives a good
> introduction to this threat.

The kernel team has worked extensively to ensure that all system calls
_copy_ data from userspace before performing any operations with the
data, to ensure that silly syscall-interposition bugs don't arise. The
sparse tool can read source code that has been annotated with __user
markers and report places where data coming from userspace is used
before being copied into the kernel (with copy_from_user() and
friends). I'm not sure how often 'sparse' is run, but hopefully often
enough to find accidental mistakes.

> (2) What is the deal with the complain (I), enforced (II), and
> "not-yet-enabled" (III) states an executable can be in?
> So to say, a root-executed executable not having a profile is allowed
> everything, right?
> I'm sorry for this stupid question, but as I understand it AA is not built
> according to
> "everything that is not explicitly allowed is forbidden" but rather
> "everything that is not explicitly forbidden is allowed", true?

It's a little bit of both: in the context of a confined process,
AppArmor is a whitelist solution (with a 'deny' keyword both to silence
known access attempts and to tighten policy by subtracting allowed
accesses; still, only listed items are allowed). However, AppArmor is
a lot like a blacklist from a system perspective: you as an
administrator may distrust specific programs, daemons, or users, and
wish to confine them specifically. This could be as small as just
confining your nginx webserver, or it could be as comprehensive as
confining nearly every process on the system. It's up to you to
determine your levels of distrust. But 'init', kernel threads, and so
forth will almost certainly never be confined by AppArmor policy.
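
To make the whitelist/deny interplay concrete, here is a minimal sketch
of a profile. The program name and every path below are made up purely
for illustration; nothing here is taken from a shipped profile:

    #include <tunables/global>

    /usr/local/bin/frobnicate {
      #include <abstractions/base>

      # whitelist: only the accesses listed in the profile are permitted
      /etc/frobnicate.conf r,
      /var/lib/frobnicate/** rw,

      # 'deny' both silences the log for a known access attempt and
      # carves an exception out of the broader rw rule above
      deny /var/lib/frobnicate/private/** rw,
    }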

If you really want an entire-system security policy, then one of the
other models such as TOMOYO, SELinux (without the 'unconfined' domain,
obviously :), or SMACK may be a better fit.

I personally confine every program before I use it to contact machines
on the internet, off my local network: firefox, transmission,
rtorrent, ping, traceroute, chromium, xpdf, FoldingAtHome, etc. (I
don't have a profile for wget; I probably should.) I also confine
every program with a long-lived listen() socket that listens on my
local network: ushare, nginx, cups. I also confine everything I
install that doesn't come from Ubuntu: briss, new window managers I
try out, the Io programming language. I also confine mplayer and vlc
because their code is hideous. But all my shells are unconfined; I
trust me. :)
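
For a small daemon with a long-lived listen() socket, the whole profile
can be just a handful of lines. A rough sketch, with a hypothetical
daemon name and paths standing in for the real thing:

    /usr/local/bin/exampled {
      #include <abstractions/base>

      # coarse-grained socket mediation: TCP over IPv4/IPv6 only
      network inet stream,
      network inet6 stream,

      /etc/exampled.conf r,
      /var/log/exampled.log w,
    }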

> (3) Paranoia: do you think the LSM/security part of the Linux kernel is
> "watched" and regularly audited so as not to
> have an NSA / secret-service backdoor? More generally, this is a concern I
> have no idea how to address, because
> merely being "open" does not guarantee the source has some
> people with "good intentions" watching/checking it.
> Actually I expect most code not to be audited, and I feel at a loss given the
> "volume", which makes it impossible to check it myself.
> any suggestions here?

The actual LSM interface is probably too simple to have a backdoor; it
is simply a pile of strategically placed callbacks, and the main Linux
kernel code calls into LSM for decisions.

Whether any given LSM module has backdoors is much harder to gauge. I
personally know the development teams for AppArmor, SELinux, TOMOYO,
and SMACK, and would happily trust my security to any of them, but I
realize I am in a privileged position to have met and worked with them
long enough to build a huge amount of trust in them personally. The
best someone else can do is audit the source code and look for bugs.
It's an undertaking :) but one that is _very_ useful. However, even
the "best" LSM module cannot protect against bugs in the kernel, and
boy, does the kernel have bugs. If your security environment requires
that even bugs in the kernel cannot violate your security policy, then
you must deploy a tool more like GEMSOS: http://www.aesec.com/
Granted, GEMSOS will have bugs of its own, but its kernel is _small_
and could be completely audited if you needed to.

Bugs in the rest of the Linux kernel are just as vital to squash as
bugs in the security modules. It all runs in the same protection
domain.

> (4) The AppArmor in a regular Ubuntu 10.10 install and its profiles are not
> "very developed", right?
> Maybe somebody can comment on this; it would help me evaluate whether what I
> see on an ordinary Ubuntu install is already safe.
> I actually do not think so, as I would doubt the distributors sacrificed a
> "problem-free-deployment" distro for stricter safety. Hence
> not very harsh rules, so as not to risk "problems". Any comment would help

The profiles in Ubuntu are _very_ developed, but they probably have
very different goals than you do. :)

The philosophy behind the Ubuntu profiles is roughly: "Allow
everything that a user _might_ do or _might_ configure." Some
assumptions are made, e.g., firefox should never modify ssh keys. But
if a user clicks on a .doc file in firefox, they will probably expect
openoffice to start with a copy of the doc, ready for editing and
saving anywhere. So the firefox profile contains a _huge_ number of
"ux" (unconfined) transitions for helper programs. Obviously, for
something like a DHCP server or NTP daemon, it is much easier to
provide a tight profile than for something as huge and unwieldy as
firefox.
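
Those "ux" rules look roughly like the sketch below. The helper paths
are examples in the spirit of the Ubuntu profile, not the literal
shipped rules:

    # inside the firefox profile: a helper started via 'ux' runs
    # completely unconfined once launched
    /usr/bin/evince ux,
    /usr/bin/some-helper ux,    # hypothetical placeholder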

I understand why Ubuntu has several very permissive profiles: having
the profiles makes viruses or worms harder to write, as the tools
available to a worm author are drastically reduced, but the average
user will never see reduced functionality.

I do not like the default firefox profile from Ubuntu. I edited the
profile and removed all the "ux" rules. I do not trust firefox enough
to have "ux" rules. Thankfully, it is very easy to do :) but it does
require you to make a choice: either confine all the helper programs
you _do_ want, or decide that you will start those helper programs
manually when you do want to start one of them unconfined.
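
In profile terms the choice looks something like this sketch; note that
'Px' only succeeds if a profile actually exists for the helper:

    # before: the helper escapes confinement entirely
    #   /usr/bin/evince ux,

    # after: require the helper to run under its own profile...
    /usr/bin/evince Px,
    # ...or drop the rule altogether and launch the helper by hand on
    # the rare occasion you want it to run unconfined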

You can use the dpkg-divert(8) command to ensure that a package upgrade
never triggers the conffile prompt ("a package update configuration
file is different from your local configuration file: (i)nstall from
package, (k)eep local, (d)iff") for a profile you maintain yourself. I
used to try to follow the changes in profiles provided by the upstream
Ubuntu developers, but it was always for one helper or another that I
just don't care to allow running in the first place. (And since the
diff is provided against the currently installed file, rather than
against the last one provided by Ubuntu, it's a real PITA to figure
out what changed, since 95% of the diff lines are ones _I_ put in
place four months earlier.)

Generating your own profiles is fast enough (aa-genprof and aa-logprof
do most of the work) that starting from scratch with NO policy is
pretty easy to do. (I've done it dozens of times over the years. :)
You might want to keep the abstractions around: most of the
abstractions are well-written, but the 'evince' and 'ubuntu-*'
abstractions grant way more access than I would ever want 'hidden' in
an abstraction. Thankfully it is very easy to modify AppArmor policy
to do exactly what you want it to do. :)
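
For reference, pulling in an abstraction is just an include, so keeping
the tame ones while spelling out the rest by hand might look like this
(with a made-up rule standing in for whatever your program actually
needs):

    #include <abstractions/base>
    #include <abstractions/fonts>

    # instead of including a broad abstraction, list only what you need:
    /usr/share/example-helper/** r,    # hypothetical path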

I hope this helps address your concerns.


