[apparmor] Apparmor rules for dconf confinement
John Johansen
john.johansen at canonical.com
Tue Jun 23 15:44:48 UTC 2015
On 06/23/2015 07:47 AM, William Hua wrote:
> Thanks John, just going to reply to your initial remarks for now.
>
>
>
>>> +	/* We are going to leave space for two numbers. The first is the total
>>> +	 * number of bytes we are writing after the first number. This is so
>>> +	 * users can read the full output without reallocation.
>>> +	 *
>>> +	 * The second number is the number of data blocks we're writing. An
>>> +	 * application might be confined by multiple policies having data in
>>> +	 * the same key. */
>>> +	memset(buf, 0, 2 * sizeof(ssize_t));
>>> +	out = buf + 2 * sizeof(ssize_t);
>> use of size_t for an interface between kernel and userspace is
>> problematic. We run into some weird combinations: 32-bit userspace
>> on a 64-bit kernel (common), and potentially other weird combos with
>> endian switching. I am not opposed to using binary data, but define
>> it exactly, u32 or u64, and whether it is little or big endian; that
>> way we avoid any chance of an issue between user and kernel space
>> communication.
>
>>> +
>>> +	label_for_each_confined(i, label, profile) {
>>> +		if (!profile->data)
>>> +			continue;
>>> +
>>> +		hash = rhashtable_hashfn(profile->data, &key, sizeof(key));
>>> +		data = rhashtable_lookup_compare(profile->data, hash, keycmp, &key);
>>> +
>>> +		if (data) {
>>> +			if (out + sizeof(ssize_t) + data->size > buf + buf_len) {
>>> +				aa_put_label(label);
>>> +				return -EINVAL; /* not enough space */
>>> +			}
>>> +			memcpy(out, &data->size, sizeof(ssize_t));
>>> +			out += sizeof(ssize_t);
>> same with the size written here
>
> Ack, I had no idea this was an issue, but TIL.
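
To make that concrete, here is a minimal sketch of a fixed-width,
explicitly little-endian header for the reply buffer. The struct and
helper names are purely illustrative, not from the patch:

	#include <linux/types.h>
	#include <asm/byteorder.h>

	/* Illustrative wire format: both counts are fixed-width and
	 * explicitly little-endian, so 32-bit userspace on a 64-bit
	 * kernel (or a big-endian kernel) sees the same layout. */
	struct data_reply_header {
		__le32 total_size;	/* bytes following this header */
		__le32 num_blocks;	/* data blocks that follow */
	} __packed;

	static void fill_reply_header(void *buf, u32 total, u32 blocks)
	{
		struct data_reply_header *hdr = buf;

		hdr->total_size = cpu_to_le32(total);
		hdr->num_blocks = cpu_to_le32(blocks);
	}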
>
>
>
>>> +	if (profile->data) {
>>> +		rht = profile->data;
>>> +		profile->data = NULL;
>>> +		for (i = 0; i < rht->tbl->size; i++) {
>>> +			rht_for_each_entry_safe(data, next, rht->tbl->buckets[i], rht, head) {
>> this ends up leaking the data->key
>
> data->key and data->data are just weak pointers into profile->blob.
>
yep, sorry, /me misremembered unpack_str as unpack_strdup,
which means if you use the patch I sent then you will need to fix
it to use unpack_strdup for the key instead of unpack_str
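
For reference, a sketch of the distinction, assuming the unpack helper
signatures in policy_unpack.c (both return the consumed size, 0 on
failure):

	char *key;

	/* unpack_str() yields a pointer into the policy blob itself,
	 * so the key would dangle once the blob is freed;
	 * unpack_strdup() allocates a private copy the profile owns. */
	if (!unpack_strdup(e, &key, NULL))
		goto fail;

	/* ...whoever tears down the rhashtable entries must now also
	 * free data->key, or it leaks as noted above. */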
>
>
>>> @@ -668,14 +680,54 @@ static struct aa_profile *unpack_profile(struct aa_ext *e)
>>>  	if (!unpack_trans_table(e, profile))
>>>  		goto fail;
>>>
>>> +	size = unpack_blob(e, &blob, "data");
>> while you can get away with this, it forces you to mess with the
>> sub "aa_ext f", and you don't actually end up checking that the sub
>> size consumed is what the outer blob says it is.
>
>> Is there any reason we would want to allocate a single larger blob,
>> instead of a blob per entry? That would keep each kmalloc smaller and
>> make it more likely to succeed with larger blobs. At a minimum we
>> should move the blob allocation to kvmalloc and kvfree, which can
>> handle larger blobs of data well.
>
>> I've appended a patch (totally untested and not even built) at the
>> end that converts to individual allocations, what do you think?
>
> There's no particular reason other than you and Seth recommending
> reducing the number of kmallocs for the first iteration. I'm totally
> fine with your changes if you think it'll work better in practice.
>
Well, it depends on what you are doing. Profile unpack is (relatively)
slow and done in the context of the task requesting the profile load,
etc., so doing lots of unpacks isn't an issue.
What is an issue for the kernel at this point is the size of the
allocation. Any allocation over a page in size has a chance of failing
on longer-running systems. vmalloc gets around this by rearranging
pages in virtual memory if needed, but it is relatively slow and
not what you want for all allocations. There is a hybrid function,
kvmalloc, which will use kmalloc when it can and flip over to vmalloc
when needed. It just needs to be paired with kvfree.
I am fine leaving the data as a single big blob but we definitely
need to move to kvmalloc for it.
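
A sketch of the pairing, shown with the common kvmalloc(size, gfp)
signature (the exact helper available in-tree may differ; the
dup_data_blob wrapper is illustrative only):

	#include <linux/mm.h>		/* kvmalloc(), kvfree() */
	#include <linux/string.h>

	/* Duplicate a policy data blob. kvmalloc() tries kmalloc()
	 * first and falls back to vmalloc() when the request is too
	 * big or memory is fragmented, so multi-page blobs still
	 * succeed on long-running systems. */
	static void *dup_data_blob(const void *src, size_t size)
	{
		void *blob = kvmalloc(size, GFP_KERNEL);

		if (blob)
			memcpy(blob, src, size);
		return blob;	/* release with kvfree(), not kfree() */
	}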
>
>
>>>  fail:
>>> -	if (profile)
>>> +	/* don't worry about profile->data, aa_free_profile should free it */
>>> +	if (profile) {
>>> +		kzfree(profile->blob);
>> free_profile also frees profile->blob, so either don't worry about
>> it here as well, or assign NULL to it so we don't get a double free
>
> Agreed, thanks for catching that. I only looked at your patch briefly,
> but I think removing the kzfree at this point is better than removing
> it in aa_free_profile().
>
oh yes, it needs to stay in aa_free_profile, but you could get away with
freeing it here and then doing profile->blob = NULL, so that free_profile
will ignore it
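
A sketch of that pattern on the failure path (kzfree(NULL), like
kfree(NULL), is a no-op, so the later free in aa_free_profile becomes
harmless):

	fail:
		if (profile) {
			/* free eagerly, then NULL the pointer so the
			 * kzfree(profile->blob) in aa_free_profile()
			 * sees NULL and does nothing */
			kzfree(profile->blob);
			profile->blob = NULL;
		}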