[nsd-users] workload for testing a large number of zones?
will at edgecast.com
Mon Oct 7 23:44:09 UTC 2013
I apologize for the long note, but I would really appreciate the list's
expertise on this matter.
I am really interested in understanding the practical limits of how many
zones NSD 4 can handle. By "large numbers of zones," I mean on the order
of millions. All of the literature on NSD 4 testing that I have seen only
references hundreds of thousands of zones.
This leads me to some questions:
1. Is there a repository with a large number of zone files used for testing
large-scale, high-performance authoritative-only name servers such as
NSD/Knot/Yadifa? If so, can someone offer a pointer to it, please?
2. Assuming no such repository exists, and I have to generate 5 million
random zone files, what are your thoughts on the distribution of zone-file
sizes -- mostly in terms of record count? Also, what about the composition
of the zone files -- meaning mostly A records, a few MX records here and
there, a few CNAMEs here and there, etc.? E.g.: a normal distribution with
a mean of 30 records and appropriate minimums and maximums, with roughly
90% A records...
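To make the question concrete, here is a minimal sketch of the kind of generator I have in mind. Every parameter in it (the mean of 30 records, the clamp bounds, the 90%/5%/5% split between A, MX, and CNAME records, and the file/zone naming scheme) is an illustrative assumption, not something taken from existing test suites:

```python
import os
import random

MEAN_RECORDS = 30                  # assumed mean record count per zone
STDDEV_RECORDS = 10                # assumed spread
MIN_RECORDS, MAX_RECORDS = 5, 200  # assumed clamp bounds
A_FRACTION = 0.90                  # assumed share of A records

def random_ip():
    """Return a random dotted-quad IPv4 address."""
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def make_zone(name):
    """Build one zone file's text for the given zone name."""
    lines = [
        "$TTL 3600",
        f"@ IN SOA ns1.{name}. hostmaster.{name}. (1 7200 900 1209600 86400)",
        f"@ IN NS ns1.{name}.",
    ]
    # Record count drawn from a normal distribution, clamped to sane bounds.
    count = max(MIN_RECORDS,
                min(MAX_RECORDS, round(random.gauss(MEAN_RECORDS, STDDEV_RECORDS))))
    for i in range(count):
        r = random.random()
        if r < A_FRACTION:
            lines.append(f"host{i} IN A {random_ip()}")
        elif r < A_FRACTION + 0.05:
            lines.append(f"@ IN MX 10 mail{i}.{name}.")
        else:
            lines.append(f"alias{i} IN CNAME host0.{name}.")
    return "\n".join(lines) + "\n"

def write_zones(n, outdir="zones"):
    """Write n generated zone files into outdir (one file per zone)."""
    os.makedirs(outdir, exist_ok=True)
    for i in range(n):
        name = f"example{i}.test"
        with open(os.path.join(outdir, f"{name}.zone"), "w") as f:
            f.write(make_zone(name))
```

What I am unsure about is whether these distributions resemble anything seen in real hosting populations, which is exactly what I am asking the list.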
3. For this analysis, I want to leave DNSSEC out. I know DNSSEC can really
bloat zone files -- so is this a horribly bad assumption? (Another
assumption is that I do not have to deal with AXFR at all.)
4. Where do we think NSD will break down as the number of zone files grows?
The internal data structures (red-black trees/radix trees) seem like they
will hold up performance-wise at large scale, but memory seems like it
might be a concern. Thoughts?