[apparmor] [PATCH] parser: add basic support for parallel compiles and loads

John Johansen john.johansen at canonical.com
Sun Nov 29 05:02:18 UTC 2015


On 11/28/2015 01:54 PM, Christian Boltz wrote:
> Hello,
> 
> Am Samstag, 28. November 2015 schrieb John Johansen:
>> v2
>> switch to default --jobs=auto
>> check and bail on bad debug argument
>> update comments on work_sync_one()
>> check for an invalid maximum number of jobs
>> limit the maximum number of jobs to 8 * # of cpus
> 
> 
>> +static void setup_parallel_compile(void)
>> +{
>> +	/* jobs_count and jobs_max set by default, config or args */
>> +	long n = sysconf(_SC_NPROCESSORS_ONLN);
>> +	if (jobs_count == JOBS_AUTO)
>> +		jobs_count = n;
>> +	if (jobs_max == JOBS_AUTO)
>> +		jobs_max = n;
>> +	/* jobs_max will be used internally */
>> +	jobs_max = min(jobs_count, jobs_max);
>> +	if (jobs_max > 8*n) {
>> +		PERROR("%s: Invalid maximum number of jobs '%ld' > 8 * # of cpus",
>> +		       progname, jobs_max);
>> +		exit(1);
>> +	}
> 
> So the parser will error out if too big a job number is given _and_ if 
> there are enough profiles to load (otherwise the number gets reduced to 
> the number of available profiles/jobs). 

No, only if the max number of jobs is being forced and is much larger
than the number of available cpus.

> In other words: I can use -j 10000 for quite a while, but it will break 
> after I add another profile to my system.
> 
No.

Max jobs only controls the number of parallel processes at any given
moment; the number of profiles has nothing to do with it. Basically
the parser spins off new jobs up to max jobs, and then waits for
one of the jobs to complete before spinning off the next job.
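
In rough outline the loop looks like this. This is a minimal sketch of
the pattern, not the parser's actual code: compile_profile() is a
hypothetical stand-in for one unit of work, and error handling (e.g. a
failed fork()) is omitted.

	#include <stdio.h>
	#include <sys/wait.h>
	#include <unistd.h>

	/* hypothetical stand-in for compiling one profile */
	static void compile_profile(int i)
	{
		printf("compiling profile %d (pid %d)\n", i, getpid());
	}

	/* spawn children up to jobs_max; reap one before spawning the next */
	static void compile_all(int nprofiles, long jobs_max)
	{
		long running = 0;

		for (int i = 0; i < nprofiles; i++) {
			if (running == jobs_max) {
				wait(NULL);	/* block until a child exits */
				running--;
			}
			if (fork() == 0) {
				compile_profile(i);
				_exit(0);	/* child exits after its one job */
			}
			running++;
		}
		while (running-- > 0)
			wait(NULL);	/* drain the remaining children */
	}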

If the number of jobs is too high, the processes compete with each
other and builds will actually slow down. If the number gets high
enough, the parser can DoS the system by consuming all of its
resources.

This check is well above the range of values I would recommend
(somewhere between 1-2x the number of cpus). More jobs can help with
smaller profiles, where a larger percentage of the compile is I/O
bound. But if a profile compile takes a while, it is dominated by the
backend, which is pure cpu and memory; in that case anything more than
the # of cpus will slow the compile down.
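
As a rough illustration of that recommendation, the following sketch
picks a job count at the top of the suggested range. recommended_jobs()
is hypothetical, not part of the patch; sysconf() is the same call the
patch uses, and the 2x factor is just the upper end of the range above.

	#include <unistd.h>

	/* pick a job count in the recommended 1-2x cpus range */
	static long recommended_jobs(void)
	{
		long n = sysconf(_SC_NPROCESSORS_ONLN);

		/* beyond ~2x cpus, long compiles are cpu and memory
		 * bound and extra jobs just add contention */
		return 2 * n;
	}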

> This sounds like surprising behaviour to me (especially because it 
> depends on the number of profiles to load), and I'm not sure if it's
> critical enough to exit().
> 
Again, it's about resource usage during the compile, not the number
of profiles. You can process 1,001 profiles with max_jobs == 4;
it's just that only 4 will be processed at any given time. As soon
as one profile finishes the next starts up, so the parser is
constantly doing that number of parallel compiles until all the
profiles are done.


> I'd prefer to reduce jobs_max to 8*n and print a warning.
> 
> In C-like Pseudocode:
> 
> +	if (jobs_max > 8*n) {
> +		WARN("%s: Invalid maximum number of jobs '%ld' > 8 * # of cpus, reducing to %ld",
> +		       progname, jobs_max, 8*n);
> +		jobs_max = 8*n;
> +	}
> 
> 
meh, I guess we could do that, but again, a max_jobs of 8 * # of cpus
is going to slow things down.




