[apparmor] Any plans for something like @{PROC}/@PID/ or @{PROCSELF} ?

John Johansen john.johansen at canonical.com
Tue Sep 13 23:33:25 UTC 2011


On 09/13/2011 04:03 PM, Kees Cook wrote:
> Hi,
> 
> On Sun, Sep 11, 2011 at 11:33:46AM -0700, John Johansen wrote:
>> On 09/10/2011 02:33 AM, Rob Meijer wrote:
>>> Great. What will be the notation for allowing /proc/self without
>>> allowing /proc/\d+ for anything not pointed to by /proc/self?
>>
>> Good question, there is a working assumption that we will use a
>> variable for pid/tid but the design has never been formalized.
>>
>> We need to lock down the design asap though so we can start updating
>> policy to use it.  The first pass would just be replacing the
>> current regex with a variable.
>>   @{PID}=[0-9]*
>> or whatever regex we are currently using, and when kernel
>> vars support comes online the definition will go away and everything
>> should just work.
> 
> While I understand the need for a clean migration, I'm worried that just
> defining something like @{PID} as [1-9][0-9]* will make people think that
> initially it's just the specific process id of the confined process instead
> of "all numbers". That said, I don't know of a good way to mark it in an
> obvious way to say "this isn't actually the resolved pid".
> 
> Right now we have a mix of "*" and "[0-9]*" in the abstractions:
> 
> base:  @{PROC}/*/maps                 r,
> bash:  @{PROC}/[0-9]*/mounts          r,
> 
Right, I knew we weren't consistent.
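To make the inconsistency concrete, here is a quick sketch. Python's fnmatch is only a rough stand-in for AppArmor globbing, but within a single path component the behaviour is comparable: "*" happily matches the non-numeric /proc entries, while "[0-9]*" only matches the pid-like ones.

```python
import fnmatch

# A sample of top-level /proc entries: numeric pid directories plus others.
entries = ["1", "4242", "self", "sys", "uptime"]

# "*" matches every entry, including self, sys and uptime.
star = [e for e in entries if fnmatch.fnmatch(e, "*")]

# "[0-9]*" only matches entries that start with a digit.
digits = [e for e in entries if fnmatch.fnmatch(e, "[0-9]*")]

print(star)    # all five entries
print(digits)  # only the pid-like entries
```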

> I'd prefer to simply switch to @{PID} once there is a functional
> implementation of it, and then provide a backward compat mode for the
> parser that converted @{PID} into "[1-9][0-9]*" when the kernel couldn't
> handle it itself.
> 
But then people will think they have a specific process id when they have
a kernel version that can't handle it ;)
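For what it's worth, the two expansions really do differ at the edges; a small sketch comparing the proposed "[1-9][0-9]*" against the looser "[0-9]*" found in current policy:

```python
import re

loose = re.compile(r"[0-9]*$")        # style used in some current abstractions
strict = re.compile(r"[1-9][0-9]*$")  # the proposed compat expansion

assert strict.match("4242")           # an ordinary pid
assert not strict.match("0")          # pids never start with 0
assert not strict.match("")           # ...and are never empty
assert loose.match("0")               # the loose form allows both
assert loose.match("")
assert not loose.match("self")        # neither form matches /proc/self
```

Of course, either expansion still matches every pid on the system, which is exactly the "looks specific but isn't" problem being discussed.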

I'd rather have the policy be closer to what is intended even if we have
to implement it in a broader fashion atm, but I can live with the status
quo.

Either way, we need to settle on what variable names we are using.
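As a straw man for that discussion, the interim definition and its use might look like the following. The variable name @{PID} and the tunables file location are assumptions, not a settled design:

```
# e.g. in tunables/kernelvars -- interim regex definition; once the
# kernel supports per-task variables this line goes away and @{PID}
# resolves to the confined task's own pid.
@{PID}=[0-9]*

# and then in an abstraction or profile:
@{PROC}/@{PID}/maps    r,
@{PROC}/@{PID}/mounts  r,
```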



