Fwd: Re: DNS versus NAT ?
Jeff Kletsky
unbound at allycomm.com
Sat Jun 15 21:26:01 UTC 2019
On 6/15/19 1:56 PM, Ronald F. Guilmette via Unbound-users wrote:
> In message <20190615154602.3BD08201591C0B at ary.qy>,
> John Levine <johnl at taugh.com> wrote:
>
>> In article <8edb08ac-5f86-04b7-7b7e-8bf1eb25386c at gmail.com> you write:
>>> You may not need a "cloudish sort of place." It really depends on your user
>>> count. A residence or small business doesn't generate that many "new"
>>> domain queries in 24 hours.
>> I'm pretty sure that when Ron said 64K outstanding queries, he meant
>> it. It's not just family members looking at Facebook.
> Well, to be clear, I never said 64+K queries all "outstanding" (and as
> yet unanswered) at any given moment.  In fact, my hope and belief is
> that my worst case for simultaneously open/pending queries would likely
> be smaller than that. However I have been known to do a million or
> so DNS queries in an afternoon, and depending on how the SOHO router
> maintains its table of connection-ish 4-tuples, doing that from behind
> some such router might indeed cause the thing to catch fire, metaphorically
> speaking.
>
> A lot of this depends on one's definition of an "outstanding" DNS query
> also. If I do a million queries, to all sorts of things scattered all
> over the place... which is something that I do routinely... then it's very
> typical that as much as a quarter or more of those DNS queries will go
> entirely unanswered due to dead delegations. So if I send out 1 million
> queries over the space of, say, 3 hours, at the end of those 3 hours we
> might say that 250,000 queries are still "outstanding" because no response
> whatsoever has been received. So obviously, if the router is going to
> cling onto and keep each 4-tuple that is associated with each of those, for
> hours on end, and not do garbage collection early and often, then that's
> going to be a problem.
>
> To bring this back, at least vaguely, to being on topic, what is Unbound's
> approach to this problem?  Has anyone tried to shove a few gazillion queries
> through it over a very short period of time, just to see if it could be
> made to explode? If not, doing so might be entertaining.
>
> (Memories of various videos I've seen which involve the combination of
> Mentos and Diet Coke are springing immediately to mind. :-)
>
>
> Regards,
> rfg
It's pretty much a question of kernel tuning, be it direct queries
or those going through NAT.
Many SOHO all-in-one routers are running Linux kernels.
A UDP conntrack timeout of 30-60 seconds is typical. The sysctls
involved might be (from a SOHO all-in-one router with 128 MB RAM,
Linux 4.19):

net.netfilter.nf_conntrack_udp_timeout = 60
net.nf_conntrack_max = 16384

depending on the kernel version and how the vendor manages its
stateful firewall (conntrack, being "free", is common).
16384 entries / 60 s ≈ 270 new flows per second sustained, so call it
~200 per second with some headroom left.
> 1 million queries over [...] 3 hours
works out to fewer than 100 per second (1,000,000 / 10,800 s ≈ 93/s),
comfortably inside that budget.
Of course, if you're really running that kind of DNS volume
and the TCP traffic that would usually go with it, you're
probably not using a SOHO-grade SoC for your router with
only 128 MB of RAM.
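If you are stuck behind such a box and can get a shell on it, a quick
sanity check looks roughly like this (a sketch, assuming an
nf_conntrack-based firewall; exact sysctl names vary by kernel
version, and the values are only illustrative):

    # how full is the flow table right now, and what is the ceiling?
    sysctl net.netfilter.nf_conntrack_count
    sysctl net.netfilter.nf_conntrack_max

    # shorten the UDP timeout so entries for dead delegations age out sooner
    sysctl -w net.netfilter.nf_conntrack_udp_timeout=30

    # or raise the ceiling if RAM allows (each entry costs a few hundred bytes)
    sysctl -w net.nf_conntrack_max=32768

    # sustained rate the table can absorb is roughly max / timeout:
    # 16384 / 60 s  ~ 270 new UDP flows per second

Shortening the UDP timeout is usually the cheaper lever on a 128 MB
box, since raising the table ceiling costs memory for every entry.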
Jeff