[apparmor] [PATCH] parser: add basic support for parallel compiles and loads

John Johansen john.johansen at canonical.com
Sat Nov 21 00:35:20 UTC 2015


On 11/20/2015 04:01 PM, Seth Arnold wrote:
> On Sat, Oct 17, 2015 at 03:49:17PM -0700, John Johansen wrote:
>> This adds a basic support for parallel compiles. It uses a fork()/wait
>> @@ -286,8 +324,13 @@ static int process_arg(int c, char *optarg)
>>  		option = OPTION_ADD;
>>  		break;
>>  	case 'd':
>> -		debug++;
>> -		skip_read_cache = 1;
>> +		if (!optarg) {
>> +			debug++;
>> +			skip_read_cache = 1;
>> +		} else if (strcmp(optarg, "jobs") == 0 ||
>> +			   strcmp(optarg, "j") == 0) {
>> +			debug_jobs = true;
>> +		}
>>  		break;
> 
> This could use an 'else' branch to report unknown argument optarg, so that
> e.g. "-d job" or "-d foo" will exit with an error message rather than
> continue on silently.
> 
sure

>> +#define work_spawn(WORK, RESULT)					\
>> +do {									\
>> +	if (jobs_max == 1) {						\
>> +		/* no parallel work so avoid fork() overhead */		\
>> +		RESULT(WORK);						\
>> +		break;							\
>> +	}								\
> 
> I'm not sure I like this optimization; even though it looks correct, I
> fear it would complicate future maintenance efforts. (If we want to use
> work_spawn to work on a single item, then call work_sync_one(), the whole
> thing will exit because of an error return from wait().) If it really
> saves a ton of time maybe it's worth the extra maintenance effort but I
> think even the tiniest of environments will still benefit from two jobs.
> 
Do you really think this one little condition, tucked away, is that much more
of a maintenance burden? The way it is set up, it is completely transparent to
the rest of the code; we can delete it and the rest of the code doesn't
need to change.

Notice this isn't about working on a single item, which is what work_sync_one()
would be for; this is about having one job running (jobs_max == 1). We may
still be processing 50 profiles, they are just all done sequentially.

So atm if you have jobs_max > 1 and only pass it a single profile, the
parser does fork and process that job in a separate child.

As for the overhead, it's not as much as an exec, but fork does have
overhead. We are not doing a prefork model atm, so I think for small,
tight platforms (phones) we should keep the overhead to a minimum where
we can.

Note: I have considered adding a test for the number of args left to process
being 1 and switching jobs_max to 1 to avoid the overhead, but I have not
done that optimization yet.

>> +/* sadly C forces us to do this with exit, long_jump or returning error
>> + * form work_spawn and work_sync. We could throw a C++ exception, is it
>> + * worth doing it to avoid the exit here.
> 
> I think we ought to avoid C++ exceptions entirely.
> 
I'm not so convinced, especially since we are already using C++ exceptions
elsewhere, but I'll stick with what we have for now.

>> + *
>> + * atm not all resources maybe cleanedup at exit
>> + */
>> +int last_error = 0;
>> +void handle_work_result(int retval)
>> +{
>> +	if (retval) {
>> +		last_error = retval;
>> +		if (abort_on_error)
>> +			exit(last_error);
>> +	}
>> +}
>> +
>> +static void setup_parallel_compile(void)
>> +{
>> +	/* jobs_count and paralell_max set by default, config or args */
>> +	long n = sysconf(_SC_NPROCESSORS_ONLN);
>> +	if (jobs_count == JOBS_AUTO)
>> +		jobs_count = n;
>> +	if (jobs_max == JOBS_AUTO)
>> +		jobs_max = n;
>> +	/* jobs_max will be used internally */
>> +	jobs_max = min(jobs_count, jobs_max);
> 
> I think we should also have jobs_max = max(1, jobs_max); to ensure that
> negative numbers don't sneak through somewhere.
> 
hrmm I think I would prefer an explicit
if (jobs_max < 1)
  error ....


>> +	njobs = 0;
>> +	if (debug_jobs)
>> +		fprintf(stderr, "MAX jobs: %ld\n", jobs_max);
>> +}
>> +
> 
> Thanks
> 
> 
> 