ec2 networking containers

Clint Byrum clint at ubuntu.com
Tue May 15 21:38:07 UTC 2012


Excerpts from Kapil Thangavelu's message of Fri May 11 10:59:40 -0700 2012:
> Hi Folks,
> 
> noodling on networking containers in ec2 (only tcp, udp, icmp). soren 
> recommended some things to investigate at the openstack conf.
> 
> http://www.tinc-vpn.org/
> http://www.ntop.org/products/n2n/
> 
> there's a nice presentation on using tinc on ec2 with some mods sans crypto,
> http://www.dtmf.com/ec2_vpn_fosdem2011.pdf

Thanks for sending these, Kapil. Tinc definitely looks like it is focused
on the simplest solution to this fairly complex problem.

I still feel very strongly that EC2 is properly segmented for any
real-world workload. The m1.small is about as powerful as a netbook,
and anything for which one cannot justify matching dollars to CPU time
can be run on a t1.micro. I don't really think juju should focus on use
cases smaller than what a t1.micro can handle.

I also don't expect that other cloud providers will deviate much from
the mix that EC2 has established, as they seem to have spent the last 5+
years getting this right and responding to real customer needs. I'm sure
RAX will have their own definition of what a "CPU" is, and so will HP.
Ultimately, though, people will ask for partial-CPU instances, and juju
should expect that much from those providers, or expect that users will
migrate to EC2.

For bare metal, juju should be focused on either HPC cases, or deploying
virtualization solutions.

For HPC, putting two services on a node makes no sense because the
entire node should be 100% taxed by the HPC service. Bare metal is
useful here because the virtualization overhead implied by "the cloud"
may be enough to justify dedicated servers. Of course, one might argue
for OpenStack+LXC at that point.

For OpenStack, I see a real need to be able to combine mysql + rabbit +
cloud controller, because otherwise they will each eat up a real server.
That won't matter in real deployments, but users seem to report having
2-5 machines for test clouds, not 9. The same is probably true for test
HPC deployments.

Given the type of network one can expect with MaaS vs. EC2, I think we
can reasonably expect to just configure containers in bridged mode and
use them without an extra virtual network for them to ride on. This is
where I would suggest juju devote resources before we drag a whole
virtual network into play.

Also, for the "I just want to clean up the charm in the 1:1
machine:service deployment" case, I think we should take a good look at
using chroot. Upstart already supports it, so we can install and run all
the upstart jobs we need, and schroot is already able to kill all
processes and clean up all files in a chroot it manages. Since we
wouldn't be isolating the chroot from other services, just using it for
cleanup purposes, this would be a very simple way to get "containers"
without the network complexity.
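As a very rough sketch of that shape (all names and paths here are
hypothetical, just to show the pieces involved):

    # /etc/schroot/chroot.d/unit-mysql-0: define the unit's root
    [unit-mysql-0]
    description=Cleanup-only root for a juju unit
    type=directory
    directory=/srv/juju/unit-mysql-0
    users=root

    # /etc/init/unit-mysql-0.conf: upstart job confined via its
    # chroot stanza
    start on runlevel [2345]
    chroot /srv/juju/unit-mysql-0
    exec /usr/sbin/mysqld

    # ending the schroot session(s) becomes the "destroy" step
    schroot --end-session --all-sessions

Unlike a real container this buys us cleanup, not isolation, which is
exactly the trade-off described above.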


