[nsd-users] reloading NSD zone configuration

Greg A. Woods woods at planix.ca
Thu May 21 04:26:06 UTC 2009


At Wed, 29 Apr 2009 10:43:51 +0200, Robert Martin-Legène <robert at dk-hostmaster.dk> wrote:
Subject: Re: [nsd-users] reloading NSD zone configuration
> 
> While TTL plays a role in some situations, I don't think this is one of
> them. The Internet motto is "do it now". A reasonable question could be
> "why does the user have to wait?" (today's answer is: because the computer
> wants you to).

Well, for sure the TTL of DNS RRs must, by definition, play a part in
every change to, or deletion of, any existing RR in the DNS.  That's
just a fact of the protocol and most modern DNS cache implementations.

This includes changes to NS records -- i.e. including re-delegations.
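To put a number on it, the worst case is easy to sketch: a resolver that cached the old record an instant before your change may keep serving the stale data until its cached copy's TTL runs out. A trivial illustration (the function name is mine, just for the example):

```python
# Sketch: worst-case visibility window for a change to an existing RR.
# A cache that fetched the old record just before the change can keep
# answering with it until that copy's TTL expires.

def worst_case_stale_until(change_time: float, old_ttl: int) -> float:
    """Latest moment (seconds since epoch) at which some well-behaved
    cache may still hand out the pre-change record."""
    return change_time + old_ttl

# Example: an NS RRset with a 172800 s (2-day) TTL, changed at t=0,
# can linger in caches for up to two full days.
print(worst_case_stale_until(0, 172800))  # -> 172800
```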

While per-record TTLs don't affect brand-new, not-yet-delegated zones,
zone transfer delays (or their equivalent) will affect the time it takes
for all authoritative servers to be ready to serve a new zone.  You
cannot give a customer a green light to rely upon their new zone until:
(a) it is loaded in all of the authoritative servers it has been
delegated to; and (b) all the parent zone nameservers are successfully
handing out all of the new delegating NS records for the domain.  As I
believe I said before, if you're following the full spirit of RFC 2182
then as part of that you must allow for the potential that the initial
primary server may not always be able to reach all secondary servers in
near real time, and this goes for both the parent's servers, as well as
the delegated servers.
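That two-part green-light condition can be written down as a simple predicate.  This is only an illustrative sketch -- the names and the status maps are mine, and in practice they would be filled in by actually probing each server:

```python
# Sketch of the "green light" test described above.  The two maps are
# hypothetical; a real tool would populate them by querying each
# authoritative server and each parent-zone server.

def zone_is_ready(auth_loaded: dict[str, bool],
                  parent_ns_sets: dict[str, frozenset[str]],
                  expected_ns: frozenset[str]) -> bool:
    """True only when (a) every delegated authoritative server has the
    zone loaded, and (b) every parent-zone name server is handing out
    the complete new delegating NS RRset."""
    all_loaded = bool(auth_loaded) and all(auth_loaded.values())
    all_delegated = bool(parent_ns_sets) and all(
        ns_set == expected_ns for ns_set in parent_ns_sets.values())
    return all_loaded and all_delegated
```

A customer whose zone is loaded everywhere, but whose delegation has only reached some of the parent's servers, still gets no green light.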

May I humbly suggest that if you are making promises you cannot possibly
keep because of protocol design and real-world limitations, then perhaps
you should more urgently be attempting to bring your customers'
expectations into line with the realities of the DNS protocol design and
the other real-world limitations of the global Internet.  Education and
awareness are always a good thing, even for customers!  :-)


> What we're seeing is, that people are working on a task to
> register/redelegate a number of domain names and don't need the work
> flow disturbed unnecessarily.
> 
> Some TLDs require the DNS to be set up correctly before accepting
> requests for new domains or for redelegation of existing ones. If the
> user doesn't have the domain name ready on the new servers when the
> registry receives the request, they will reject it. So the user is often
> only waiting for their DNS provider to create a ~10 RR-strong zone on a
> few servers.

I don't really understand the problem.  I.e. I understand what you're
saying, but I don't agree that it's a problem that needs fixing in
anything remotely resembling making NSD do dynamic configuration changes
while still answering queries.

If you're doing both jobs, i.e. hosting DNS and registering domains,
then it should be trivial to schedule rolling batch jobs that will do
the right things, in the right order, for all pending registrations.
S.M.O.P. (a simple matter of programming).
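Such a batch job might be outlined roughly as follows.  The three step functions are placeholders I've invented for illustration; real ones would push zone data out, poll the name servers, and submit the delegation request to the registry:

```python
# Illustrative outline of a rolling batch job that handles pending
# registrations in the right order: publish the zone first, and only
# submit the delegation once the servers are actually answering.
# The three callables are hypothetical hooks, not any real API.

from typing import Callable, Iterable

def process_pending(domains: Iterable[str],
                    publish_zone: Callable[[str], None],
                    servers_answer: Callable[[str], bool],
                    submit_delegation: Callable[[str], None]) -> list[str]:
    """Publish each pending zone, delegate the ones whose authoritative
    servers already answer, and return the rest for the next run."""
    retry_later = []
    for domain in domains:
        publish_zone(domain)
        if servers_answer(domain):
            submit_delegation(domain)
        else:
            retry_later.append(domain)
    return retry_later
```

Run it from cron every few minutes and domains whose servers weren't ready yet simply roll over into the next batch.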

If you're only doing one of the jobs then it's either not a problem in
the first place (if you're just hosting the DNS -- assuming you set the
user's expectations appropriately), or it's again just a SMOP to ensure
things are done in the right order (if you're the registrar waiting for
all, or at least one reachable, delegated servers to answer
authoritatively before you delegate the new domain in the parent zone).

If you give your users tools providing reports and feedback then they
can manage their work flow in step with the realities of the protocol
requirements and other real-world limitations which may affect the setup
and operation of their domains.


> A name server handling hundreds or thousands of requests per second,
> surely can create 10 RR's and a zone-cut "on the fly"?

As I said before, if you are following the full spirit of RFC 2182 then
it really _really_ will not matter if even minutes' worth of requests get
dropped on the floor by one authoritative server (or indeed by each in
rolling succession, if you happen to control them all), even if that
server handles millions of domains.  The DNS was designed with this very
eventuality in mind.  Indeed it is good to reduce the outage
time as much as is possible, but it is also good to ensure there are
more than two geographically and topologically separated authoritative
nameservers for each and every domain such that at least one will be
reachable and running from every possible user's perspective.  Really
large DNS providers can probably afford through economies of scale to
also use many other networking tricks to make things even more reliable
and redundant in the face of very high volumes of domains and queries,
but none of this requires changing NSD in any way for their use.  Indeed
changing NSD wouldn't solve any of the real problems here anyway.
Making an authoritative DNS server capable of dynamic changes to its
configuration is really just papering over the wrong issues.

-- 
						Greg A. Woods

+1 416 218-0098                VE3TCP          RoboHack <woods at robohack.ca>
Planix, Inc. <woods at planix.com>      Secrets of the Weird <woods at weird.com>