[Unbound-users] Unbound Python vs Unbound Cache
vinay3 at justemail.net
Mon Jul 16 20:59:08 UTC 2012
Turned out to be a leak in the resgen code logic and/or a CNAME response that
automatically gave the client an IP address for an A query we were blocking.
We are now getting a 40-80ms average DNS resolution time for the Alexa top
sites (securly.com). Pretty pleased with Unbound!
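For anyone hitting the same thing: the CNAME case means a blocked A query can come back as a CNAME chain ending in a real address, so a filter that only looks at the query name's direct A record misses it. A hypothetical post-hoc check for that leak might look like the sketch below (the tuple layout, function names, and blacklist are our own illustration, not the pythonmod API):

```python
# Hypothetical leak check: walk a parsed answer section, given as
# (owner, rrtype, rdata) tuples, and report any A record reached from a
# blocked query name, directly or through a CNAME chain.

BLOCKED = {"baddomain.com"}  # domains to block (assumption)

def leaks_real_ip(qname, answer):
    """True if the answer hands out an A record for a blocked qname,
    either directly or at the end of a CNAME chain."""
    name = qname.lower().rstrip(".")
    if name not in BLOCKED:
        return False
    current = name
    for owner, rrtype, rdata in answer:
        owner = owner.lower().rstrip(".")
        if owner != current:
            continue
        if rrtype == "CNAME":
            current = rdata.lower().rstrip(".")  # follow the chain
        elif rrtype == "A":
            return True  # a real address escaped the block
    return False
```

For example, an answer of `baddomain.com. CNAME cdn.example.net.` followed by `cdn.example.net. A 203.0.113.5` would be flagged, while the same chain for a non-blacklisted name would not.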
From: unbound-users-bounces at unbound.net
[mailto:unbound-users-bounces at unbound.net] On Behalf Of vinay3 at justemail.net
Sent: Wednesday, July 11, 2012 12:09 AM
To: unbound-users at unbound.net
Subject: [Unbound-users] Unbound Python vs Unbound Cache
We are seeing a peculiar issue with the Unbound Python script (we are using
the latest Unbound code). We found that if a domain is not cached, resgen.py
is called and gets a chance to provide an IP address resolution for an A
query. However, if the domain is in Unbound's cache, resgen.py is not called
until the cached entry times out - which is 3600 seconds for us, given a
cache-min-ttl of 3600. This is still acceptable: our hope was that for
certain domains we blacklist, resgen would always answer with the IP address
of a "blocked page" on our webserver, so the real IP address would never be
cached. However, we are seeing "leaks" from time to time where resgen is
either not called or fails for some other reason, and the real IP address
somehow gets cached.
cached. E.g. let's say baddomain.com needs to be blocked. When this domain
is not cached (first time), resgen is called everytime, responds with a
blocked IP of say x.x.x.x, and this IP is never cached by unbound (we are
okay if it is cached), and this way the real IP of baddomain.com is never
served and thus never cached. However, in about 100-1000 queries, one query
for baddomain.com still gets resolved to the real IP, and thus gets cached
by unbound for an entire hour (cache min ttl period) and we are left with a
gaping hole for this domain for that long.
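To make the intended behavior concrete, the core decision such a resgen-style handler makes can be reduced to a plain-Python sketch. The blacklist, the placeholder block-page IP, and the helper names below are our own; inside Unbound the actual answer would be built with the pythonmod facilities (DNSMessage/set_return_msg), not plain strings:

```python
# Sketch of the blacklist decision behind a resgen-style Unbound python
# module. Only the matching logic is shown; building and returning the
# actual DNS answer is done through Unbound's python module API.

BLOCKED = {"baddomain.com"}   # domains to block (assumption)
BLOCK_PAGE_IP = "192.0.2.10"  # "blocked page" webserver address (placeholder)

def normalize(qname):
    """Lower-case and strip the trailing dot Unbound puts on query names."""
    return qname.lower().rstrip(".")

def is_blocked(qname):
    """True if qname equals, or is a subdomain of, a blocked domain."""
    name = normalize(qname)
    return any(name == d or name.endswith("." + d) for d in BLOCKED)

def answer_for(qname):
    """Return the A record text to serve, or None to let the iterator run."""
    if is_blocked(qname):
        # Short TTL so the blocked answer itself does not linger in caches.
        return "%s 10 IN A %s" % (qname, BLOCK_PAGE_IP)
    return None
```

With this logic, `answer_for("www.baddomain.com.")` yields a block-page A record while `answer_for("example.com.")` yields None, handing the query to the iterator.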
Under what circumstances can resgen be skipped or fail and hand the domain
resolution over to the iterator? If we can avoid this situation, we can
prevent the leaks entirely and would no longer see blocked domains leak for
an hour at a time!
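For reference, the setup described above corresponds to an unbound.conf along these lines (the script path is ours; the module-config ordering follows the pythonmod documentation). With this layout, answers served straight from the message cache are returned before the module chain runs, which matches the observation that resgen only fires on cache misses:

```
server:
    module-config: "validator python iterator"
    cache-min-ttl: 3600

python:
    python-script: "/etc/unbound/resgen.py"
```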
If unbound-users is not the right place for Python module related questions,
where can we direct these questions? If we make modifications to the code to
fix this issue, where can we send the patches?