RE: RE: Unbound - Shared Cache

Talkabout talk.about at gmx.de
Tue Mar 17 15:20:24 UTC 2020


Hi George,

Any chance that the EXPIRE logic finds its way into the Unbound code? Currently I have an LRU eviction in place, but this is not an optimal solution in my opinion.

Thanks!

Bye

Sent from Mail for Windows 10

From: Talkabout via Unbound-users
Sent: Wednesday, 12 February 2020 13:00
To: George Thessalonikefs; unbound-users at lists.nlnetlabs.nl
Subject: RE: RE: Unbound - Shared Cache

Hi George,

Maybe it is a stupid question, but it is still not completely clear to me. Since Unbound knows when a particular entry needs to be invalidated (based on the configuration it received upon load), setting the TTL via EXPIRE would also work for the case you mentioned (serving outdated entries based on the Unbound configuration). Maybe I am missing something?
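Just to illustrate what I mean (the key and TTL below are made-up placeholders, not Unbound's real cachedb key format), storing an answer could simply attach the TTL taken from the authoritative response:

        # hypothetical sketch only
        SET <cachedb-key> <serialized-answer> EX 3600    # 3600 = TTL from the upstream answer
        # or, for an entry that is already stored:
        EXPIRE <cachedb-key> 3600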

I have now created the following setup:

Server 1:
                Unbound (connected to KeyDB as backend)
                KeyDB (Redis drop-in replacement with active replication, bound to Server 2)

Server 2:
                Unbound (connected to KeyDB as backend)
                KeyDB (Redis drop-in replacement with active replication, bound to Server 1)

That way, every entry added by one of the servers is automatically available to the other one as well (active replication of KeyDB) => shared cache 😊 Entries are evicted after 4 hours of idle time. I will keep it that way for now, and if it works well over the next days this will become my production setup.
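For reference, the relevant pieces look roughly like this (addresses and ports are placeholders, and the idle-time eviction is handled on the KeyDB side, not shown here):

        # keydb.conf on Server 1 (Server 2 mirrors this, pointing back at Server 1)
        active-replica yes
        replicaof 192.0.2.2 6379

        # unbound.conf on both servers (cachedb module, see unbound.conf(5))
        server:
            module-config: "validator cachedb iterator"
        cachedb:
            backend: "redis"
            redis-server-host: 127.0.0.1
            redis-server-port: 6379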

Thanks all for your help!

Bye

Sent from Mail for Windows 10

From: George Thessalonikefs via Unbound-users
Sent: Wednesday, 12 February 2020 11:23
To: unbound-users at lists.nlnetlabs.nl
Subject: Re: RE: Unbound - Shared Cache

Hi Peter,

The reason is that you could serve expired records from that cache (if
you configure Unbound to do so), so they shouldn't expire from Redis when the DNS TTL runs out.
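That is the behaviour you get with something along these lines in unbound.conf (the serve-expired-ttl value is only an example):

        server:
            serve-expired: yes
            serve-expired-ttl: 86400   # optional cap in seconds after expiry; example value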

As for the recommended way to clean up Redis (from the man page):
"
It should be noted that Unbound never removes data stored in the Redis
server, even if some data have expired in terms of DNS TTL or the Redis
server has cached too much data; if necessary the Redis server must be
configured to limit the cache size, preferably with some kind of
least-recently-used eviction policy.
"

I would recommend going through the cachedb section in the unbound.conf
man page as it also documents the behavior and some caveats such as the
"synchronous communication" between unbound and redis.

As for the recommended way to clean up Redis, I would look here:
https://redis.io/topics/lru-cache

and probably use the 'allkeys-lru' policy.
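In redis.conf that would be something like the following (the memory limit is only an example):

        maxmemory 256mb
        maxmemory-policy allkeys-lru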

Best regards,
-- George

On 11/02/2020 19:53, Talkabout via Unbound-users wrote:
> Hi Benno,
> 
>  
> 
> I have set up Unbound with a Redis cache now and will check how well this
> works. I have one question left: the documentation states that Unbound does
> NOT invalidate keys in the Redis cache even if they expire. The question
> from my side is: why is Unbound not simply using the "EXPIRE" function of
> Redis to set the TTL to the same time that Unbound receives from an
> authoritative DNS server? That way no other maintenance needs to be done. If
> there still is a valid reason (which I am sure there is 😊), what is the
> recommended way to clean up Redis?
> 
>  
> 
> Thanks!
> 
>  
> 
> Bye
> 
>  
> 
> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
> 
>  
> 
> *From: *Talkabout via Unbound-users <mailto:unbound-users at lists.nlnetlabs.nl>
> *Sent: *Monday, 10 February 2020 14:15
> *To: *Benno Overeinder <mailto:benno at NLnetLabs.nl>;
> unbound-users at lists.nlnetlabs.nl <mailto:unbound-users at lists.nlnetlabs.nl>
> *Subject: *RE: Unbound - Shared Cache
> 
>  
> 
> Hi Benno,
> 
>  
> 
> my real name is Peter 😊
> 
>  
> 
> Thank you very much for this hint, I will try to set up a Redis cache
> that distributes the entries among my servers.
> 
>  
> 
> Bye
> 
>  
> 
> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
> 
>  
> 
> *From: *Benno Overeinder <mailto:benno at NLnetLabs.nl>
> *Sent: *Monday, 10 February 2020 13:50
> *To: *Talkabout <mailto:talk.about at gmx.de>;
> unbound-users at lists.nlnetlabs.nl <mailto:unbound-users at lists.nlnetlabs.nl>
> *Cc: *Paul Vixie <mailto:paul at redbarn.org>
> *Subject: *Re: Unbound - Shared Cache
> 
>  
> 
> Hi Talkabout (is this your real name?),
> 
>  
> 
> Thank you Paul for your answer.  Paul is correct that it is very
> dependent on your cache replacement algorithm and how to inform other
> resolvers that answers are already in cache.
> 
>  
> 
> To answer your question, Talkabout, Unbound has a module for a shared
> cache with a Redis backend.  It works as a secondary cache, 1) first
> local cache lookup, 2) shared cache lookup, 3) resolve/iterate.  For
> configuration and use, see the unbound.conf(5) manpages, section "Cache
> DB Module Options".  (You may have to compile Unbound yourself with the
> --with-libhiredis option.)
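A sketch of such a build, for reference (depending on the Unbound version, the cachedb module itself may also need to be switched on explicitly):

        ./configure --enable-cachedb --with-libhiredis
        make && make install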
> 
>  
> 
> Your suggestion to export/import the cache with unbound-control can be
> used when running Unbound clusters and you want to start a new Unbound
> instance with a hot cache.
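A sketch of that export/import (the dump file name is only an example):

        unbound-control dump_cache > cache.dump     # on the instance with a warm cache
        unbound-control load_cache < cache.dump     # on the freshly started instance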
> 
>  
> 
> Best regards,
> 
>  
> 
> — Benno
> 
>  
> 
>  
> 
>> On 10 Feb 2020, at 12:21, Talkabout via Unbound-users <unbound-users at lists.nlnetlabs.nl> wrote:
> 
>>
> 
>> Hi Paul,
> 
>> 
> 
>> thank you very much for your statement!
> 
>> 
> 
>> I am not that deep into DNS logic, so most likely not a very good
>> communication partner when the topic becomes that complex 😊 I am using
>> Unbound for my home network only; there, I think theoretical numbers like
>> "hundreds of cache misses per second" are not that realistic. But I totally
>> agree that when making such a feature generic, this is something that needs
>> to be taken care of.
> 
>> 
> 
>> Maybe a solution could be to integrate a sublayer in between the local
>> cache and external resolvers: a shared cache. This shared cache is
>> updated by all peers when a query gets resolved, and every peer can ask
>> the shared cache for entries when the local cache does not deliver any
>> results. Shared cache instances are then automatically synchronized.
> 
>> 
> 
>> Obviously this topic is not an easy one, and it seems that there is
>> nothing in place I can reuse.
> 
>> 
> 
>> Thanks again!
> 
>> 
> 
>> Bye
> 
>> 
> 
>> Sent from Mail for Windows 10
> 
>> 
> 
>> From: Paul Vixie
> 
>> Sent: Monday, 10 February 2020 12:11
> 
>> To: unbound-users at lists.nlnetlabs.nl
> 
>> Cc: Talkabout
> 
>> Subject: Re: Unbound - Shared Cache
> 
>> 
> 
>> On Monday, 10 February 2020 09:54:44 UTC Talkabout via Unbound-users wrote:
> 
>> > I am using Unbound on 2 different servers (also populated via DHCP as 2
>> > different name servers) and would like to make sure that if one server
>> > already answered a query and cached it, the other does not need to do the
>> > same query to the Internet again. ...
>> > Question is, if there is a standard way of doing this or any suggestions
>> > about the "best" solution. Maybe somebody already has something like this
>> > working?
> 
>> 
> 
>> this question has come up every year or so. one thing to know is that if this
>> is a good idea, then it would be a good multi-vendor idea, not just for
>> unbound, though unbound has a track record of doing things first that turn out
>> to be good ideas and end up standardized in DNS itself in some form.
> 
>> 
> 
>> some open questions that relate to discard policy:
> 
>> 
> 
>> if you had hundreds of cache misses per second which ones would you share with
>> your peer recursive nameservers? (maybe only share it after its first reuse? i
>> think the opendns anycast network uses a DHT for this, to inform peers of
>> availability of data, so it can be fetched from a peer if it's needed.)
> 
>> 
> 
>> if your peer is sharing hundreds of cache misses per second with you, would
>> you ever discard something from your own cache to make room for something from
>> theirs? (generally this isn't the right thing, so you'd give your cache two
>> LRU quotas, one for your own cache misses, one for those shared to you.)
> 
>> 
> 
>> when running at quota, and needing to discard something because a peer just
>> told you some new thing and you don't have room for N+1, would you choose
>> least recently learned (LRL) rather than least recently used (LRU) because
>> when things are used they've moved from your peer-cache to your own-cache?
> 
>> 
> 
>> other open questions:
> 
>> 
> 
>> when using ECS, how do you know which cache additions to share, if your peer
>> or your peer's stubs don't have the same topology as you/yours do?
> 
>> 
> 
>> would you rate limit the feed to a peer so as not to flood their capacity?
> 
>> 
> 
>> this is a fascinating topic, as i hope you'll agree.
> 
>> 
> 
>> --
> 
>> Paul
> 
>  
> 
> -- 
> 
> Benno J. Overeinder
> 
> NLnet Labs
> 
> https://www.nlnetlabs.nl/
> 
>  
> 
>  
> 
>  
> 



