[apparmor] [patch] regression tests: make sysctl(2) test a bit more resilient

Steve Beattie steve at nxnw.org
Mon Aug 10 21:37:24 UTC 2015


On Thu, Jul 23, 2015 at 02:18:38AM -0700, Seth Arnold wrote:
> On Thu, Jul 23, 2015 at 01:45:35AM -0700, Steve Beattie wrote:
> >  int main(int argc, char *argv[])
> >  {
> >  	int save_max_threads, new_max_threads, read_new_max_threads;
> > -	int name[] = {CTL_KERN, KERN_MAX_THREADS};
> >  	int readonly = 0;
> >  	
> > +	if ((argc > 1) && strcmp(argv[1],"ro") == 0)
> >  		readonly = 1;
> >  
> > +	if (read_max_threads(&save_max_threads) != 0)
> >  		return 1;
> >  
> >  	/* printf("Kernel max threads (saved) is %d\n", save_max_threads); */
> >  
> > @@ -41,36 +64,39 @@ int main(int argc, char *argv[])
> >  
> >  	new_max_threads = save_max_threads + 1024;
> >  
> > +	if (write_max_threads(new_max_threads) != 0)
> >  		return 1;
> >  
> > +	if (read_max_threads(&read_new_max_threads) != 0)
> >  		return 1;
> >  
> 
> At this point, a 'return 1' leaves the system with _probably_ higher max
> threads than we started with. Is there any way for this to fail if we've
> made it this far?

Probably not, though I'd rather not rely on knowing whether the
kernel paths can fail in the current implementation, however you
choose to define "current".

I guess we could have a fail path that tries to write back the
original value if the read() here and below fails, but I'm not
entirely convinced that, if the read() failed, the following write()
would succeed.
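
For illustration, a best-effort version might look something like the
sketch below. The helper bodies here are my guess at sysctl(2)-backed
implementations along the lines of what the patch adds, not the
actual test code, and fail_restore() is a made-up name:

/* sketch of the restore-on-failure idea; helper bodies are a guess
 * at sysctl(2)-backed implementations, not the actual patch */
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/sysctl.h>

static int sysctl_max_threads(int *oldval, int *newval)
{
	struct __sysctl_args args;
	int name[] = {CTL_KERN, KERN_MAX_THREADS};
	size_t oldlen = sizeof(int);

	memset(&args, 0, sizeof(args));
	args.name = name;
	args.nlen = sizeof(name) / sizeof(name[0]);
	if (oldval) {
		args.oldval = oldval;
		args.oldlenp = &oldlen;
	}
	if (newval) {
		args.newval = newval;
		args.newlen = sizeof(int);
	}

	/* legacy sysctl(2) entry point; glibc doesn't wrap it,
	 * so go through syscall(2) */
	return syscall(SYS__sysctl, &args) == -1 ? -1 : 0;
}

static int read_max_threads(int *val)
{
	return sysctl_max_threads(val, NULL);
}

static int write_max_threads(int val)
{
	return sysctl_max_threads(NULL, &val);
}

/* hypothetical fail path: one best-effort attempt to put the saved
 * value back before bailing; the result is deliberately ignored,
 * since if the read failed the write may well fail too */
static int fail_restore(int save_max_threads)
{
	(void)write_max_threads(save_max_threads);
	return 1;
}

Each 'return 1' after the first successful write would then become
'return fail_restore(save_max_threads);'.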

> >  	/* printf("Kernel max threads (new) is %d\n", read_new_max_threads); */
> >  
> >  	if (read_new_max_threads != new_max_threads) {
> > +		/* the kernel possibly rejected our updated max threads
> > +		 * as being too large; try decreasing max threads. */
> > +
> > +		new_max_threads = save_max_threads - 1024;
> > +
> > +		if (write_max_threads(new_max_threads) != 0)
> > +			return 1;
> > +
> > +		if (read_max_threads(&read_new_max_threads) != 0)
> > +			return 1;
> 
> .. same here, but probably fewer max threads.
> 
> Is there any danger of max threads being between 0 and 1023 to start? It
> seems unlikely, and this is test code, but I figured I'd ask.

According to the upstream kernel documentation at
https://www.kernel.org/doc/Documentation/sysctl/kernel.txt , it gets
set to a value such that, if the maximum number of threads were
created, the thread structures would consume no more than 1/8th of
the available RAM pages. According to that page, it's also allowed to
be set as small as 20.
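
For the curious, the current setting on a given box is trivially
visible via the standard kernel.threads-max procfs knob; a quick
standalone check (illustrative only, not part of the test):

/* read the current limit from procfs and print it */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/threads-max", "r");
	long threads_max;

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%ld", &threads_max) != 1) {
		fclose(f);
		fprintf(stderr, "unexpected threads-max format\n");
		return 1;
	}
	fclose(f);

	printf("kernel.threads-max = %ld\n", threads_max);
	/* anything below ~1044 (1024 plus the documented minimum of
	 * 20) would make both the +1024 and -1024 probes dicey */
	return 0;
}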

So it's possible, I suppose, but probably unlikely. For example, my
test x86_64 VM configured with 768M of RAM running a 4.1ish kernel has
a default value of 5662; a similarly configured ia32 VM has a default
of 11414. I would expect hosts/VMs with smaller RAM configurations
to generally be running 32-bit kernels. Again, we could try to be
clever in the fallback path, attempting ever smaller decreases to see
if any succeed, but I'm not sure how much we'd gain from that.
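
If we did go down that road, I'd imagine something along the lines of
the sketch below; probe_max_threads() is a hypothetical name, and it
leans on the read/write helpers the patch introduces:

/* sketch only: fall back to ever smaller decreases when the kernel
 * rejects the new value */
extern int read_max_threads(int *val);
extern int write_max_threads(int val);

static int probe_max_threads(int save_max_threads, int *new_max_threads)
{
	int delta, readback;

	/* first try increasing, as the test does today */
	*new_max_threads = save_max_threads + 1024;
	if (write_max_threads(*new_max_threads) == 0 &&
	    read_max_threads(&readback) == 0 &&
	    readback == *new_max_threads)
		return 0;

	/* then try progressively smaller decreases */
	for (delta = 1024; delta >= 16; delta /= 2) {
		*new_max_threads = save_max_threads - delta;
		if (*new_max_threads <= 0)
			continue;
		if (write_max_threads(*new_max_threads) != 0)
			return -1;
		if (read_max_threads(&readback) != 0)
			return -1;
		if (readback == *new_max_threads)
			return 0;
	}

	return -1;	/* nothing stuck; give up */
}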

-- 
Steve Beattie
<sbeattie at ubuntu.com>
http://NxNW.org/~steve/