Unbound DNS entry pre(caching)

Eric Luehrsen ericluehrsen at gmail.com
Tue Apr 23 04:24:54 UTC 2019


On 4/22/19 8:02 AM, ѽ҉ᶬḳ℠ via Unbound-users wrote:
> To reduce upstream queries (save bandwidth, speed up resolution, enhance 
> privacy), it might be worthwhile to serve a local copy of the root zone 
> (.) via auth-zone, as described under "Authority zones" in example.conf.in 
> in the package documentation.
> 
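For reference, serving the root zone locally is configured roughly like
this in unbound.conf; a minimal sketch following the "Authority zones"
example in example.conf.in (check that file for the full list of transfer
sources; the url: source shown here is just one possibility):

    auth-zone:
        name: "."
        # fetch the zone over HTTPS from InterNIC (one possible source)
        url: "https://www.internic.net/domain/root.zone"
        # fall back to normal recursion if the local copy is unusable
        fallback-enabled: yes
        # answer upstream (iterator) lookups from it, not client queries
        for-downstream: no
        for-upstream: yes
        # where the downloaded zone is kept on disk
        zonefile: "root.zone"
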
> On 22/04/2019 13:30, Tihomir Loncaric via Unbound-users wrote:
>> Hi all,
>>
>> Wanted to congratulate you on great work with unbound !
>>
>> My use case for unbound is on ships using satellite uplinks, in other 
>> words high-latency and high-bandwidth... relatively speaking, but surely 
>> enough bandwidth for DNS queries.
>> So the idea would be to cache and then preemptively re-cache DNS entries 
>> as much as possible to speed up Internet access for users.
>>
>> This could cut up to 500-800 ms from every DNS query and remove the lag 
>> on the DNS side. This, together with WAN TCP optimization (SYN), would 
>> make the satellite uplink much less laggy for users on board.
>>
>> I notice that most DNS entries rarely change, and the local unbound on 
>> board could surely cache lots of entries given the memory and CPU 
>> available nowadays.
>>
>> Thus, instead of expiring cached entries when their TTL runs out, I would 
>> like to keep refreshing them regularly and keep them available for some 
>> pre-defined time (e.g. 2-3 weeks, configurable) to match cruise length. I 
>> believe this proactive cache-and-refresh approach would be more 
>> appropriate for such an environment.
>>
>> Looking through the options in Unbound, I have identified a couple of 
>> mechanisms that come close, but all seem to lack some features.
>>
>> Prefetch is a great feature, but it only refreshes entries during the 
>> last 10% of their TTL, and only if a user resolves the entry during that 
>> last 10% of the TTL.
>> Furthermore, that 10% does not seem to be configurable.
>> I know this behaviour increases the cache hit ratio for frequently used 
>> entries (the ones that also get hit during the last 10%), but it is not 
>> flexible enough.
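
(For reference, prefetch itself is a one-line switch; a minimal sketch,
with prefetch-key added on the assumption that DNSSEC validation is in use:

    server:
        prefetch: yes        # refresh popular entries before they expire
        prefetch-key: yes    # also fetch DNSKEYs early when validating

As noted above, the 10% window is not exposed as a configuration option.)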
>>
>> I am trying to cache well beyond that time frame (2-3 weeks - 
>> parameter 1) and cannot always guarantee users will resolve an entry 
>> within the last 10% of its TTL (e.g. during the night),
>> so I would like an automated refresh at 90% of the TTL, triggered if the 
>> DNS entry was asked for more than or equal to 0 ... n times after being 
>> cached (parameter 2).
>>
>> Of course, all of this would be bounded by a maximum number of cached 
>> entries, set appropriately.
>>
>> This would allow preemptive caching based on the number of times an 
>> entry is queried during its TTL and on the overall length of time such 
>> entries are kept in cache.
>> In other words, we would trade off some bandwidth in order to reduce DNS 
>> latency.
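
(As far as I know there is no built-in option for exactly this, but
something close can be approximated outside Unbound: a periodic job that
re-queries a list of frequently used names against the local resolver
keeps them warm without any new features. A rough sketch, assuming dig is
available; the list path and its contents are placeholders:

    #!/bin/sh
    # Re-resolve a list of popular names against the local Unbound so
    # that the cache stays warm.
    while read -r name; do
        dig @127.0.0.1 "$name" A +short > /dev/null 2>&1
    done < /etc/unbound/warm-list.txt

Run it from cron at an interval shorter than the typical TTLs you care
about.)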
>>
>> Serve-expired is another great feature, but what I am proposing above 
>> would work similarly and wouldn't break DNS when entries change, though 
>> with some bandwidth traded for the refreshes.
>>
>> cache-min-ttl would definitely break certain resolutions, but I would 
>> use it with a 30-60 min TTL, which is a sensible trade-off: refreshes 
>> don't happen too often, and any changes are still picked up by the 
>> regular refreshes.
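
(For reference, the options discussed here are enabled roughly like this;
a minimal sketch, values only illustrative:

    server:
        serve-expired: yes     # answer from expired cache while refreshing
        cache-min-ttl: 1800    # 30 min floor, overrides shorter TTLs
        cache-max-ttl: 86400   # upper bound on cached TTLs

As said, raising cache-min-ttl above what zone operators intended can
serve stale data, so the value is a trade-off.)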
>>
>> Is there anything else that I could use out of the box? What other 
>> existing parameters would help towards this caching goal?
>>
>> Thanks,
>> Tiho

The options related to fetch or prefetch may also help. A little more 
sophisticated: download a favorite spam/adblock list and install those 
domains as "local-zone: <zone> static". This may prevent noisy nonsense 
from slowing down web browsing.
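
A minimal sketch of what that looks like (the domains are placeholders
for whatever the downloaded list contains):

    server:
        local-zone: "ads.example.com." static
        local-zone: "tracker.example.net." static

With "static", queries for those names and anything below them are
answered immediately (NXDOMAIN/NODATA) by the local Unbound, so they
never touch the satellite link.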
-Eric


