[nsd-users] workload for testing a large number of zones?
wouter at nlnetlabs.nl
Tue Oct 8 08:58:23 UTC 2013
On 10/08/2013 01:44 AM, Will Pressly wrote:
> Hi All,
> I apologize for the long note, but I would really appreciate the
> list's expertise on this matter.
> I am really interested in understanding the practical limits of how
> many zones NSD 4 can handle. By large numbers of zones, I mean on
> the order of millions. In all of the literature concerning NSD 4
> testing, I have only seen references to hundreds of thousands of
> zones.
I cannot offer you operational experience, but here are some theoretical
results.
This is for 5M zones, all very small, no DNSSEC, configured as master
zones. The nsd-mem tool (on a 64-bit machine) was used for the
estimates, because it can estimate memory usage without actually
allocating the memory (unless it is one big zone).
 27,801,003,701 bytes  ram usage (excl. space for buffers)
 13,760,000,000 bytes  disk usage (excl. 12% space claimed for growth)
 41,561,003,701 bytes  data and big mmap
 30,094,337,034 bytes  data and partial mmap
The zonefiles for this test had a SOA, 2x NS, one MX and an A record for
'www'. NSEC would be bigger, and NSEC3 disproportionately bigger; if you
can choose your flavour of DNSSEC, NSEC will be nicer to your memory
usage. nsd-mem parses about 88k-246k zones/minute on startup (from text
zonefiles, writing them to nsd.db).
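If you want to reproduce this kind of test, zonefiles of the shape
described above (SOA, 2x NS, one MX, an A for 'www') are easy to
generate with a short script. This is only a sketch; the domain names,
addresses and SOA timers below are made-up placeholders, not values
from the test:

```python
import os

def tiny_zone(name, serial=2013100801):
    """Return zonefile text for one very small zone: SOA, 2x NS,
    one MX, and an A record for 'www'. All values are placeholders."""
    return "\n".join([
        f"$ORIGIN {name}.",
        "$TTL 3600",
        f"@ IN SOA ns1.{name}. hostmaster.{name}. ( {serial} 7200 3600 1209600 3600 )",
        f"@ IN NS ns1.{name}.",
        f"@ IN NS ns2.{name}.",
        f"@ IN MX 10 mail.{name}.",
        "www IN A 192.0.2.1",   # TEST-NET-1 placeholder address
        "",
    ])

def write_zones(n, directory="."):
    """Write n tiny zonefiles named zone000000.example.zone etc."""
    for i in range(n):
        name = f"zone{i:06d}.example"
        with open(os.path.join(directory, name + ".zone"), "w") as f:
            f.write(tiny_zone(name))
```

Each generated zone would still need a matching zone: block in
nsd.conf (or a pattern) before NSD or nsd-mem will load it.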
For the slightly bigger zone that you mention, with 30 records (mostly
A, plus 1 AAAA, 1 MX and 3 CNAMEs), here are the memory estimates:
 73,162,003,701 bytes  ram usage (excl. space for buffers)
105,280,000,000 bytes  disk usage (excl. 12% space claimed for growth)
178,442,003,701 bytes  data and big mmap
 90,708,670,367 bytes  data and partial mmap
So, roughly 30-40 GB for the very small zones and 90-180 GB for the
30-name zones. This likely scales linearly with the number of zones, so
for 1 million zones that is roughly 10 GB (tiny zones) and 20-40 GB
(bigger zones) of memory.
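As a back-of-the-envelope check on that linear-scaling claim, the
per-zone cost can be derived from the 5M-zone figures and scaled down.
This arithmetic is mine, not part of the original measurements:

```python
# Figures quoted above for the 5M-zone estimates, in bytes.
zones = 5_000_000
tiny_ram        =  27_801_003_701   # "ram usage", tiny zones
tiny_big_mmap   =  41_561_003_701   # "data and big mmap", tiny zones
bigger_big_mmap = 178_442_003_701   # "data and big mmap", 30-record zones

# Per-zone cost of one tiny zone (resident data only).
per_zone_tiny = tiny_ram / zones            # ~5.6 KB per tiny zone

# Scale the big-mmap totals down to 1 million zones, in GB.
one_million_tiny   = tiny_big_mmap   / zones * 1_000_000 / 1e9
one_million_bigger = bigger_big_mmap / zones * 1_000_000 / 1e9
print(round(per_zone_tiny), round(one_million_tiny, 1), round(one_million_bigger, 1))
```

This gives about 5.6 KB per tiny zone, and roughly 8 GB / 36 GB for a
million tiny / bigger zones, which matches the rounded figures above.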
I do not really have any feedback on what zones are normal, or on other
operational realities, but I thought a look at memory usage might help
you pick a machine that can handle it :-)
The nsd.conf option zonefiles-check: no disables the filetime checks,
which could save you time on startup and on "kill -HUP". In that case
you can reread a specific zone's zonefile by name with
nsd-control reload <name>.
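Sketched as an nsd.conf fragment (only the zonefiles-check option is
from the text above; the zone name in the comment is an illustrative
placeholder):

```
server:
    # Skip per-zonefile mtime checks on startup and "kill -HUP";
    # with millions of zonefiles this stat() pass is expensive.
    zonefiles-check: no

# To reread one zone's file by name afterwards:
#   nsd-control reload zone000000.example
```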
> This leads me to some questions:
> 1. Is there a repository of a large number of zone files used for
> testing large-scale, high-performance authoritative-only name
> servers such as NSD/Knot/Yadifa? If so, can someone offer a pointer
> to it, please?
> 2. Assuming no such repository exists, and I have to generate 5
> million random zone files, what are your thoughts on the
> distribution of zone file sizes -- mostly in terms of record count --
> and on their composition -- meaning mostly A records, a few MX here
> and there, a few CNAMEs here and there, etc.? E.g. a normal
> distribution with a mean of 30 records and appropriate mins and
> maxes, including 90% A records.
> 3. For this analysis I want to leave DNSSEC out. I know DNSSEC can
> really bloat zone files -- so is this a horribly bad assumption?
> (Another assumption is that I do not have to deal with AXFR at all.)
> 4. Where do we think NSD will break down, in terms of the
> cardinality of the set of zone files? The internal data structures
> (rbtrees/radix tree) seem like they will hold up performance-wise at
> large scale. Memory seems like it might be a concern. Thoughts?
> Thanks, Will Pressly