[Maas-devel] Replacing Apache

Gavin Panella gavin.panella at canonical.com
Thu Nov 27 16:42:35 UTC 2014


On 27 November 2014 at 16:17, Christian Robottom Reis wrote:
> On Thu, Nov 27, 2014 at 03:56:17PM +0000, Gavin Panella wrote:
>> >     - There is a bug filed by IS asking us to reduce the number of ports
>> >       the region listens on. Could we bind all region processes to the
>> >       same socket using SO_REUSEPORT to avoid this issue?
>> >
>> >         http://lwn.net/Articles/542629/
>> >
>> >       We'd need to study the semantics, but this seems easier than
>> >       inverting it so the region connects to the cluster.
>>
>> Each clusterd needs to connect to each regiond. Having multiple regionds
>> listening (for RPC) on the same port makes it hit-and-miss for a
>> clusterd to make all of the connections it needs. It could just keep
>> trying until it has them all, but that's a bit sucky.
>
> Yeah, I had forgotten about that aspect of the architecture, which is
> tied to having multiple active region processes (possibly spread across
> multiple machines). Is there a reason why we need that, and couldn't
> move to a passive failover model?

For PostgreSQL I agree that passive failover is probably the best
approach right now. For the regionds and clusterds I think we should
always aim for all-active: it keeps us honest by forcing us to design
for failure, so failure is never a special case.
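To make the SO_REUSEPORT idea above concrete, here is a rough sketch
(plain Python, not MAAS code; the port number is invented, and it
assumes a Linux 3.9+ kernel and a Python build that exposes
SO_REUSEPORT):

    import socket

    def listen_shared(port=5250):
        # Each regiond process would run this: all of them bind the same
        # address/port, and the kernel picks which process accepts each
        # incoming connection.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("0.0.0.0", port))
        s.listen(50)
        return s

The catch is exactly the one described above: a clusterd dialling that
shared port N times cannot choose which regiond answers each time, so
it cannot deterministically reach all of them.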

>
>> "Hello! Is that regiond-B?"
>>
>> "No, this is regiond-A, and I'm already talking to you on line 1."
>>
>> <click>
>>
>> "Hello! Is that regiond-B?"
>>
>> "No, this is regiond-A, and I just spoke to you."
>>
>> <click>
>>
>> "Hello? ..."
>>
>> We could turn this around and make each regiond initiate connections to
>> each clusterd, and that would address the problem... as long as we only
>> have one clusterd on each cluster controller.
>
> Is there a realistic scenario where we will have more than one?

H.A. (high availability). We may have two clusterds on a single
machine, one on each of two machines, or multiple clusterds spread
across multiple machines.
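For contrast, a minimal sketch of the distinct-ports arrangement, where
each regiond listens on its own address/port (addresses and ports here
are invented for illustration):

    import socket

    # One endpoint per regiond process, e.g. two processes on region-a
    # and one on region-b.
    REGIOND_ENDPOINTS = [
        ("region-a.example.com", 5251),
        ("region-a.example.com", 5252),
        ("region-b.example.com", 5251),
    ]

    def connect_all(endpoints=REGIOND_ENDPOINTS):
        # Each address reaches exactly one regiond, so a clusterd that
        # connects to every endpoint gets a complete set of RPC
        # connections by construction -- no guessing which process
        # answered.
        return [socket.create_connection(addr) for addr in endpoints]

With several clusterds per cluster controller for H.A., each clusterd
would simply build its own copy of this mesh.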



