[Bug 1879798] Re: replacing designate units causes issues previously created zones
Jorge Niedbalski
1879798 at bugs.launchpad.net
Thu May 28 19:23:32 UTC 2020
** Changed in: charm-designate-bind
Status: Confirmed => Invalid
** Summary changed:
- replacing designate units causes issues previously created zones
+ designate-manage pool update doesn't reflect target master DNS servers in zones.
** Description changed:
+ [Environment]
+
+ Ubuntu + Ussuri
+
+ [Description]
+
+ When running designate-manage pool update with new targets, the targets
+ are properly updated in the pool's target masters list, but the change is
+ not reflected in the zones that belong to the pool. As a result, the
+ masters associated with those zones are not updated, causing the failures
+ described in the Further Information section.
+
+ designate-manage pool update should offer an option to update the zones
+ associated with the pool, so that the new target masters are applied to
+ existing zones as well.
+
+ For the bind9 backend, the current workaround is to manually run the
+ rndc modzone command with the new masters, but that's not practical for
+ large installations with many zones and pools.
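The modzone workaround can at least be scripted rather than typed per zone. A minimal sketch; the zone names, file hashes and master IPs below are placeholders, not values from this deployment:

```shell
#!/bin/sh
# Build the 'rndc modzone' argument for one zone, given its slave-file
# hash and the new master list. Placeholder data only.
modzone_args() {
    zone=$1; hash=$2; masters=$3
    printf '%s' "{ type slave; file \"slave.$zone.$hash\"; masters { $masters }; };"
}

masters='10.0.0.11 port 5354; 10.0.0.12 port 5354; 10.0.0.13 port 5354;'
for zone_hash in example.org:abc123 example.net:def456; do
    zone=${zone_hash%%:*}
    hash=${zone_hash#*:}
    # In a real environment this would be executed via rndc (or juju run)
    # rather than echoed.
    echo "rndc modzone $zone '$(modzone_args "$zone" "$hash" "$masters")'"
done
```

The hashes would have to be read back from the existing zone configuration (e.g. rndc showzone) before rewriting the masters.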
+
+
+ [Further information]
+
We have a designate/designate-bind setup. We migrated designate units to
different machines, replacing 3 designate units with 3 new units.
However, this caused issues with existing zones, including failures when
creating new recordsets for them. Affected zones would end up with an
ERROR status and a CREATE action.
Looking at the designate bind units, we see that designate is attempting
to run:
'addzone $zone { type slave; masters {$new_designate_ips port 5354;};
file "slave.$zone.$hash"; };'
This addzone fails because the zone already exists. However, we found
that the zone configuration (checked with 'rndc showzone $zone' on a
designate-bind unit) still listed the old designate IPs as its masters.
There are also logs in /var/log/syslog like the following:
May 20 06:27:10 juju-c27f05-15-lxd-1 named[72648]: transfer of '$zone'
from $old_designate_ip#5354: failed to connect: host unreachable
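Finding the zones that still point at a retired master could be scripted from the showzone output. A small sketch; the sample output and IPs are made up for illustration:

```shell
#!/bin/sh
# Return success if the given 'rndc showzone' output still lists the
# given IP inside its masters block. Sample data is illustrative only.
has_master() {
    printf '%s\n' "$1" | grep -q "masters {[^}]*$2"
}

showzone_out='zone "example.org" { type slave; file "slave.example.org.abc123"; masters { 10.0.0.1 port 5354; }; };'
if has_master "$showzone_out" 10.0.0.1; then
    echo "example.org still points at old master 10.0.0.1"
fi
```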
We were able to resolve this issue by modifying the zone config on all
designate-bind units:
juju run -a designate-bind -- rndc modzone $zone '{ type slave; file
"slave.$zone.$hash"; masters { $new_designate_ip_1 port 5354;
$new_designate_ip_2 port 5354; $new_designate_ip_3 port 5354; }; };'
After modifying the zone, the recordset creations completed and resolved
almost immediately.
Would this be something the charm could do automatically when masters
are removed or replaced, or is there a better way of fixing the zone
configurations? For these designate migrations, we will have to iterate
over every zone to fix its configuration.
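That per-zone enumeration could look roughly like the sketch below, assuming the zone list has already been fetched (e.g. with 'openstack zone list -f value -c name'). Zone names, IPs and the HASH token are placeholders:

```shell
#!/bin/sh
# Emit one 'juju run' fix-up per zone. The literal zone list stands in
# for real 'openstack zone list' output; HASH is left symbolic because
# each zone's slave-file hash must be read back from 'rndc showzone'.
zones='example.org
example.net'
masters='10.0.0.11 port 5354; 10.0.0.12 port 5354; 10.0.0.13 port 5354;'

cmds=$(printf '%s\n' "$zones" | while read -r zone; do
    [ -n "$zone" ] || continue
    echo "juju run -a designate-bind -- rndc modzone $zone '{ type slave; file \"slave.$zone.HASH\"; masters { $masters }; };'"
done)
printf '%s\n' "$cmds"
```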
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to designate in Ubuntu.
https://bugs.launchpad.net/bugs/1879798
Title:
  designate-manage pool update doesn't reflect target master DNS
  servers in zones.
Status in OpenStack Designate Charm:
Confirmed
Status in OpenStack Designate-Bind Charm:
Invalid
Status in designate package in Ubuntu:
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1879798/+subscriptions
More information about the Ubuntu-openstack-bugs
mailing list