Safeguarding forward zones from requestlist buildup

Paul S. paul at 37signals.com
Thu Dec 12 07:51:20 UTC 2024


Hi Yorgos,

Apologies for not CCing the list on my last response (quoted at the
end).

We have some further details about what's happening. In our environment,
the heaviest user of DNS tends to be rspamd (since we run a public mail
service at hey.com) and similar automated processes.

At top-of-the-hour email load (when we ingest the most email), the
rspamd workload of looking up RBLs is typically at its highest. On a
per-node basis, we generally reach up to about 250 queries/s.

When any RBL nameserver starts to have issues, we seem to start
queueing a lot of queries into the requestlist (up to 4k at a time per
instance, going by my previous response).

Interestingly, Unbound seems quite conservative about serving SERVFAIL
in these cases, and appears to keep trying for exactly 3 minutes before
giving up and expunging these entries. We worked that out by
correlating the requestlist count and exceeded metrics from
unbound_exporter (Prometheus) with logged SERVFAIL responses.

The delta from the start of an incident to Unbound giving up is
reliably 3 minutes. Here are some pool-level graphs showing this in
detail for our two datacenters: <https://cln.sh/kfzqNMZq>

So our first question is: are there any knobs we can use to bring this
time down significantly? We would be happy to give up after 15 seconds
(or even less!) to prioritize stability elsewhere.
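
For illustration, this is the direction we've been considering. It's a
rough sketch only: the values are guesses rather than tested settings,
and option availability depends on the Unbound version (discard-timeout
in particular seems to be a newer addition, if we're reading the docs
right):

server:
    # Retry timed-out upstream queries fewer times before giving up
    # (the documented default is 5 retries per upstream server).
    outbound-msg-retry: 2

    # In newer Unbound versions: drop client queries that have been
    # waiting longer than this many milliseconds, which should cap
    # requestlist buildup (placeholder value; 1900 is documented as
    # the default).
    discard-timeout: 1000

If either of these knobs (or something else entirely) is the supported
way to shorten that 3-minute window, pointers are very welcome.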

Generally, we're looking to safeguard Unbound against dropping
unrelated queries when external nameservers that handle large query
volumes have issues.
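
Relatedly, if the answer is "serve stale data instead", we assume the
serve-expired route suggested further down the thread would look
roughly like this for us (values are placeholders, not recommendations):

server:
    # Serve expired records from cache when resolution fails or is
    # slow, instead of failing outright.
    serve-expired: yes
    # How long past expiry a record may still be served
    # (placeholder: one day).
    serve-expired-ttl: 86400
    # Reply from (possibly expired) cache if resolution takes longer
    # than this many msec; 1800 is the RFC 8767-recommended value
    # mentioned in the docs.
    serve-expired-client-timeout: 1800

One caveat in our case: with 'forward-no-cache: yes' there is nothing
cached to serve, so this would only help on zones where we let Unbound
cache.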

Thank you all in advance!

> Thank you for all your suggestions. We ended up migrating to stub
> zones and enabled the cache, but we're still mostly seeing the same
> behavior.
>
> Looking further into monitoring, we see the requestlist balloon to
> around ~3900 entries when these incidents happen. Could that be the
> num-queries-per-thread limitation?
>
> There is pretty much no CPU or memory usage ballooning during incident
> times, and thus no swapping.
>
> The auth query response time is < 1 ms; they live in adjacent racks.
> For now, we've segmented our heaviest DNS queriers into a dedicated
> pool of Unbound nodes so local resolution on the normal cluster isn't
> affected.
>
> Thanks again!
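
(For context on the quoted discussion below: the pressure valve Yorgos
describes is the jostle-timeout mechanism. In config terms, the
relevant knobs on our side are the following; the jostle value shown is
Unbound's documented default, not something we've tuned:)

server:
    # Ceiling on simultaneous pending queries per thread; the
    # ~3900-entry requestlist spikes above sit just under it.
    num-queries-per-thread: 4096
    # Under load, queries older than this many msec can be jostled
    # out to make room for new ones (200 is the default).
    jostle-timeout: 200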


On December 6, 2024, "Yorgos Thessalonikefs via Unbound-users"
<unbound-users at lists.nlnetlabs.nl> wrote:
> Hi Paul,
>
> Coming back to this I notice that you have configured
>  num-queries-per-thread: 4096
> and you say the highest you see the request list is 2K.
> So dropping would not be an issue.
>
> Maybe your CPU can't handle all those recursion states, and lowering
> that number would also help?
> Are you perhaps reaching memory limits, and the system starts
> swapping?
>
> Could you also provide some more information?
> What is the normal query response time to your auth cluster?
>
> The serve-expired options could be useful in upstream failure
> situations, but make sure to understand what the options are doing,
> because you will be serving expired answers.
>
> Best regards,
> -- Yorgos
>
> On 22/11/2024 20:43, Yorgos Thessalonikefs via Unbound-users wrote:
> > Hi Paul,
> >
> > If you are "forwarding" to authoritative nameservers indeed using
> stub-
> > zone is the correct configuration as it expects to send queries to
> > authoritative nameservers and not resolvers. That won't help with
> this
> > issue though.
> >
> > In a situation where Unbound is overwhelmed with client queries,
> > using 'forward-no-cache: yes' (or 'stub-no-cache: yes' if you use
> > the stub-zone) does not help, since all those queries need to be
> > resolved.
> >
> > When under client pressure, Unbound would start dropping slow
> > queries. Slow queries are ones that take longer than
> > 'jostle-timeout'
> > (<https://unbound.docs.nlnetlabs.nl/en/latest/manpages/unbound.conf.html#unbound-conf-jostle-timeout>)
> > to resolve.
> >
> > This way Unbound tries to combat DoS from slow queries or high
> > query rates by slowly filling up the cache from fast queries, which
> > eventually drops the outgoing query rate and increases cache
> > responses. (Cache responses do not contribute to the growth of the
> > request list.)
> >
> > In your case, where you don't cache the upstream information,
> > Unbound cannot protect itself with cached answers, because all the
> > internal upstream queries need to be resolved.
> >
> > I am guessing the queries to the configured upstreams are not
> > slower than jostle-timeout, so not candidates to be dropped
> > initially, but it doesn't help that each one of them always needs
> > to be resolved.
> >
> > I would first try to use 'stub-no-cache: no' and see if the
> > situation gets better.
> >
> > It would be possible to introduce a new configuration option per
> > forward/stub zone to give some kind of priority, but I am unsure
> > whether it would help generally or in this case in particular.
> >
> > Best regards,
> > -- Yorgos
> >
> > On 19/11/2024 03:01, Paul S. via Unbound-users wrote:
> >> Hey team,
> >>
> >> We run 8-node Unbound clusters as recursive resolvers. The setup
> >> forwards (using forward-zone) internal queries to a separate
> >> PowerDNS authoritative cluster.
> >>
> >> Recently, we've had some connectivity issues to Cloudflare (who
> >> provides a lot of external DNS services in our environment). When
> >> this has happened, we've seen the requestlist balloon to around
> >> 1.5-2k entries as queries repeatedly time out.
> >>
> >> However, the problem is that this affects forward-zones as well.
> >> We lose resolution for internal queries when these backup events
> >> happen.
> >>
> >> We're looking for suggestions on how to safeguard these internal
> >> forwards. We notice stub-zone may be the more appropriate stanza
> >> for our use case, but are unsure whether that would bypass this
> >> requestlist queuing.
> >>
> >> Any thoughts greatly welcome, thank you!
> >>
> >> Our config is fairly simple:
> >>
> >> server:
> >>      num-threads: 4
> >>      # Best performance is a "power of 2 close to the num-threads value"
> >>      msg-cache-slabs: 4
> >>      rrset-cache-slabs: 4
> >>      infra-cache-slabs: 4
> >>      key-cache-slabs: 4
> >>
> >>      # Use 1.125GB of a 4GB node to start, but real usage may be
> >>      # ~2.5x this, so closer to 2.8G/4GB (~70%)
> >>      #
> >>      msg-cache-size: 384m
> >>      # Should be 2x the msg cache
> >>      rrset-cache-size: 768m
> >>
> >>      # We have libevent! Use lots of ports.
> >>      outgoing-range: 8192
> >>      num-queries-per-thread: 4096
> >>
> >>      # Use larger socket buffers for busy servers.
> >>      so-rcvbuf: 8m
> >>      so-sndbuf: 8m
> >>
> >>      # Turn on port reuse
> >>      so-reuseport: yes
> >>
> >>      # This is needed to forward queries for private PTR records
> >>      # to upstream DNS servers
> >>      unblock-lan-zones: yes
> >>
> >> forward-zone:
> >>      name: "int.domain.tld"
> >>      forward-addr: "10.10.5.5"
> >>      # No caching in unbound
> >>      forward-no-cache: yes
> >>
> >