[apparmor] [patch] v3 - parser: add basic support for parallel compiles and loads
John Johansen
john.johansen at canonical.com
Fri Dec 11 21:17:28 UTC 2015
<< snip >>
>>>
>>> I've just started playing around with this, so not a full review. I did
>>> notice one thing, that was discussed a bit in the previous thread:
>>>
>>>>>> +#define work_spawn(WORK, RESULT) \
>>>>>> +do { \
>>>>>> + if (jobs_max == 1) { \
>>>>>> + /* no parallel work so avoid fork() overhead */ \
>>>>>> + RESULT(WORK); \
>>>>>> + break; \
>>>>>> + } \
>>>>>
>>>>> I'm not sure I like this optimization; even though it looks correct, I
>>>>> fear it would complicate future maintenance efforts. (If we want to use
>>>>> work_spawn to work on a single item, then call work_sync_one(), the whole
>>>>> thing will exit because of an error return from wait().) If it really
>>>>> saves a ton of time maybe it's worth the extra maintenance effort but I
>>>>> think even the tiniest of environments will still benefit from two jobs.
>>>>>
>>>> Do you really think this one little condition, tucked away, is that much
>>>> more maintenance? The way it is set up, it is completely transparent to
>>>> the rest of the code. That is, we can delete it and the rest of the code
>>>> doesn't need to change.
>>>>
>>>> Notice this isn't about working on a single item, which is what
>>>> work_sync_one() would be for; this is the case where only 1 job can run
>>>> at a time (jobs_max == 1). We may still be processing 50 profiles, they
>>>> are just all done sequentially.
>>>>
>>>> So atm if you have jobs_max > 1 and only pass it a single profile, the
>>>> parser does fork and process that job in a separate child.
>>>
>>> Note that there is one behavioral difference in doing so: if a specific
>>> policy compilation fails in such a way that exit() is called, in the
>>> -j1 case it stops there, but in the -jN (N > 1) case it continues
>>> attempting to load additional policies. That's a behavioral change
>>> that I don't think users would expect from switching the number of
>>> jobs used.
>>>
>> Is this different from the current shell-based parallel compile using
>> xargs -P, or any other parallel setup?
>>
>> I can't stop compiles/loads that are already happening in parallel. The
>> best we can do is, on error, stop compiling/loading any new jobs. Is this
>> what you want?
>>
>>
> Just to add: currently the handle_work_result() fn does exit on error if
> abort_on_error is set. This causes any currently running jobs to become
> orphans, to be collected by init, and stops new jobs from being spawned,
> with the error returned being the first error encountered.
>
>
So just one further note. It looks like abort_on_error is NOT set by
default. This means the current and old behavior (pre this patch) was
to ignore load errors and continue processing, so the behavior with this
patch is not any different.
The question becomes whether we should
1. switch to abort_on_error by default, or
2. have the parser not exit on error, and instead collect all
pending jobs before exiting with the first error.