[Unbound-users] cache/auth architecture question

Michael Tokarev mjt at tls.msk.ru
Tue May 19 11:24:02 UTC 2009


I'm trying to convert an existing BIND infrastructure to
an nsd/unbound pair, and am facing some difficulties
with the rudimentary auth-zone support in unbound and
its complete lack of replication.

Here's why.

When unbound is used for simple cases like a single
home machine (definitely recursive-only, no local
data at all), or for a large cache facing the clients
of an ISP (where local data is irrelevant), it all
works very well.

But I have a mixed case -- recursive behaviour
and local data.

Our company has several divisions which are located
at different places.  Each has its own subdomain for
local access, reverse zones and so on.  They're
replicated to/from central servers as appropriate.

This replication works quite well between NSD servers,
as it did before with BIND.  Each server knows a set
of other servers and a set of zones it has locally and
pulls from other places.
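For concreteness, a slave zone pulled from a central server looks roughly like this in nsd.conf (a sketch only -- the zone name and the 192.0.2.1 master address are made-up examples, not from the actual setup):

```
zone:
	name: "office.example.com"
	zonefile: "slave/office.example.com.zone"
	# accept NOTIFY from, and pull AXFR/IXFR from, the central server
	allow-notify: 192.0.2.1 NOKEY
	request-xfr: 192.0.2.1 NOKEY
```

One such zone: block per replicated zone, on each node that carries it.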

Note the data in question is purely local zones, used
internally only, like name of local mail server or
database server and so on.

Now I'm moving on to the recursive nameservers setup.
And it looks like I have to pair each NSD that stores
local data with an unbound configured to query the
local NSD for every zone that NSD knows about -- i.e.,
a 100% duplicated configuration, repeating in unbound
the same zone list already configured in NSD.
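The duplication looks roughly like this (a sketch; the zone name and the port 5353 for the local NSD are assumptions for illustration):

```
# nsd.conf -- NSD is authoritative for the local zone
zone:
	name: "office.example.com"
	zonefile: "office.example.com.zone"

# unbound.conf -- unbound must repeat the same zone name
# as a stub-zone pointing at the local NSD
server:
	do-not-query-localhost: no
stub-zone:
	name: "office.example.com"
	stub-addr: 127.0.0.1@5353
```

Every zone added to or removed from NSD requires a matching edit on the unbound side of the same node.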

I tried keeping that stuff as local-data statements
in unbound config files, but since it has to be
replicated to several places (as more than one
place accesses the same server), the lack of a
replication mechanism makes it hard to keep them
in sync.  The lack of CNAME support in unbound's
local data is also a problem, since many names
pointing to other divisions' servers are CNAMEs
into their zones.
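What that local-data approach looks like, sketched in unbound.conf (all names and addresses here are invented for illustration):

```
server:
	local-zone: "office.example.com." static
	local-data: "mail.office.example.com. IN A 192.0.2.10"
	# This is where it breaks down: names pointing at another
	# division's servers are CNAMEs into that division's zone,
	# and unbound's local-data does not follow CNAMEs the way
	# an authoritative server would:
	local-data: "db.office.example.com. IN CNAME db.hq.example.com."
```

And since these statements live in plain config files, every change has to be copied by hand to every node that serves the same data.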

So I tried the NSD+unbound pair instead, and now am
facing almost the same keep-them-in-sync problem
(though it's less pressing) -- I have to configure
the list of "local" zones in two places on each node.
And the whole config becomes... clumsy.

Note that in this case, NSD does very very little
(the "amount" of RRs it manages), but it does

What's the best way to achieve something like that
using a "split-brain" setup such as nsd/unbound?

The goal is to have nodes as independent as possible,
so that each can keep working, at least locally, in
case of various network failures.  Plus redundancy.

Maybe some other "mass-replicating" tools like

What I liked about BIND and NSD replication is this:
no matter whether the network is up or down, or
whether any remote node is reachable at the moment,
the replication will happen once they're able to
talk to each other.  (And having a dead link isn't
that uncommon.  And even if it IS uncommon, when it
actually fails it's very unfortunate to sit here and
wait for it to come back, instead of letting the
computers do their work.)
