<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
The ip-ratelimit option is scaled to queries per second, and typical values
(on that per-second scale) are probably hard to come by, since they depend
on the design/size/scalability of the LAN and on the Unbound deployment
scenario (LAN and/or WAN). The DNS tunnel itself can also vary in its
design/purpose - a DNS tunnel carrying web/mail traffic is likely to generate
far more DNS queries than a DNS tunnel used only for command and
control messages and/or data exfiltration.<br>
<br>
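For reference, and assuming I am reading unbound.conf(5) correctly, the
existing per-IP limit on that per-second scale would be set roughly as
follows - the numbers are purely illustrative guesses, not recommendations:<br>
<br>
<pre>
server:
    # limit each client IP to about 20 queries per second (illustrative value)
    ip-ratelimit: 20
    # size of the data structure tracking per-IP rates; 4m is the default I recall
    ip-ratelimit-size: 4m
    # 1 in this many above-limit queries is still allowed through; guessed default
    ip-ratelimit-factor: 10
</pre>
<br>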
Thus I have been wondering whether an ip-ratelimit-below-domain
implementation would be more useful, considering that a client would
probably not legitimately query third-level domains more than, say, 5
times per hour, e.g. something like<br>
<br>
ip-ratelimit-below-domain: *.* 5ph<br>
<br>
Wildcard syntax would need to be supported, and it is not clear to me
whether it currently is. If not, one would have to identify the
offending domain first (see the comparison sketch below). <br>
<br>
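For comparison, the existing domain-based limits that Wouter mentions
below are, as far as I can tell, also expressed in queries per second,
apply to the resolver's outgoing iterative queries rather than to
individual client IPs, and take an explicit domain name rather than a
wildcard - roughly (tunnel.example and the numbers are placeholders):<br>
<br>
<pre>
server:
    # global per-zone cap on iterative queries, in qps (illustrative value)
    ratelimit: 1000
    # tighter caps for a known tunnel domain: the exact name and names below it
    ratelimit-for-domain: tunnel.example 10
    ratelimit-below-domain: tunnel.example 10
</pre>
<br>
Hence something on a per-hour scale, per client and with wildcard
support, would presumably have to be a new option.<br>
<br>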
<div class="moz-cite-prefix">On 28.11.2018 12:04, via Unbound-users
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CABPjrMUgiOL-Neq7VgZB_iM5K7PVoN7qDEWisGyCEZwe1H9_Qw@mail.gmail.com">
<pre class="moz-quote-pre" wrap="">Hi,
I think global IP-ratelimit will fit nicely.
Do you have information about typical values used in networks
(recommendations)? I will do my own research, but it would be great
to have some reference.
I am also thinking about dropping large packets (since they are used
only for tunnel purposes)
Thanks a lot
Mon., 26.11.2018, 10:28: Wouter Wijngaards via Unbound-users
<a class="moz-txt-link-rfc2396E" href="mailto:unbound-users@nlnetlabs.nl"><unbound-users@nlnetlabs.nl></a> wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">
Hi,
Unbound has ratelimit options for both the user query count (ip-ratelimit)
and the number of iterative queries for names at or beneath a zone
(ratelimit-below-domain and ratelimit-for-domain). The first is per IP
address, the second is based on domain name. You could set a global number,
or specify the culprit's client IP or the tunnel service's domain name.
Best regards, Wouter
On 11/23/18 7:44 PM, ѽ҉ᶬḳ℠ via Unbound-users wrote:
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">
On 23.11.2018 18:36, via Unbound-users wrote:
however, those concerns are in a way off topic for this mailing list,
so allow me to ask a more direct unbound question. why does the cache
bloat? you're using LRU replacement, and these records are never
accessed. therefore while they can push other more vital things out of
the cache, decreasing cache hit rate, they should be primary targets
for replacement whenever other data is looking for a place to land. i
understand that this cache churn has a cost, in bandwidth and in CPU,
but not in memory -- once the cache reaches its working set maximum,
it ought to grow no further. what could i be misunderstanding about this?
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
Your understanding is correct, I trust, and bloating has indeed been a
misdirection. Referring to the initial post: "Since I am observing a lot of DNS
Tunnel “users” , the cache started to store totally useless records of
type TXT and NULL."
In this context those queries, which to my understanding can be of
high frequency in a DNS tunnel (depending on its purpose), replace
legitimate records once the maximum cache size is reached. And as you
stated, churning the cache comes at a cost. I am wondering what
legitimate purpose there is for the resolver not only to cache NULL records
but even to serve them to clients, other than perhaps some corporate
edge/niche cases, considering that at least RFC 1035 does not specify a
legitimate purpose for NULL records (as of today).
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">a second unbound-related topic is cache management itself. it is
unusual for the splay between a name and its descendants to number in
the millions. it happens for arpa, and popular TLD's such as COM, NET,
ORG, and DE. as a cache management strategy, consider whether to more
rapidly discard descendants of a high splay apex, unless they are
accessed at least once. and in defiance of my fear-related argument
above, when the cache is full beyond some threshold like 90%, consider
using the "splay is high, subsequent access of descendants is zero" as
a signal to (a) not cache new descendant data, and (b) syslog it.
there isn't a dnstap message-tag for this condition yet, but there
ought to be. splay is easy to keep track of unless your cache is flat.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
After reading it I thought that something like Rate-limiting Fetches Per
Zone, as implemented in BIND, would be helpful to have in Unbound too:
"which defines the maximum number of simultaneous iterative queries to
any one domain that the server will permit before blocking new queries
for data in or beneath that zone."
</pre>
</blockquote>
</blockquote>
</blockquote>
<br>
</body>
</html>