Hi,

We had some memory issues here with unbound after a Debian update with a new kernel. It was a (well-known) transparent_hugepage issue. We changed it to madvise and the problem was fixed.

You might (or might not) be hitting the same issue.
Check https://access.redhat.com/solutions/46111

You could also just try it at runtime right now:

echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag

then restart unbound. (A short sketch for checking the current mode and making the change persistent follows after the quoted message below.)

On Sat, 16 Mar 2024 at 04:02, Nicolas Baumgarten via Unbound-users <unbound-users@lists.nlnetlabs.nl> wrote:

> Hi,
> we have been using unbound for a long time, and we are very happy with it.
>
> But I would like to know a little about memory usage.
> Lately we are seeing that the unbound process grows until it uses all memory and starts swapping, causing a big loss of performance (latency, dropped packets, etc.).
>
> The thing is that the stats metrics (mem.*) are stable. They grow rapidly after startup, settle at a sensible maximum, and don't keep growing.
>
> But the process size does.
>
> For example, two servers, same config, same hardware:
> version 1.9.1, on Red Hat 8.7
>
> Server A, uptime 2 hours:
> unbound-control stats_noreset | grep mem
> mem.cache.rrset=285212642
> mem.cache.message=142606338
> mem.mod.iterator=16748
> mem.mod.validator=25689380
> mem.mod.respip=0
> mem.mod.subnet=61555940
> mem.streamwait=0
> mem.http.query_buffer=0
> mem.http.response_buffer=0
>
> Unbound process RES size 1.6 GB, VIRT 1.8 GB
>
> Server B, uptime 6 days:
> mem.cache.rrset=285212302
> mem.cache.message=142606461
> mem.mod.iterator=16748
> mem.mod.validator=25689867
> mem.mod.respip=0
> mem.mod.subnet=142614402
> mem.streamwait=0
> mem.http.query_buffer=0
> mem.http.response_buffer=0
>
> Unbound process RES size 5.5 GB, VIRT 6.2 GB
>
> As you can see, the only difference in reported memory is mem.mod.subnet, about 60 MB vs 140 MB, but that level is reached after 4 or 5 hours of running and stays there.
>
> Why is it using almost 4 GB more after a couple of days while the caches are stable?
> Is there some way to control this?
>
> We are restarting unbound every two days now (while waiting for a bit more RAM).
>
> Thanks!!
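
P.S. A minimal sketch for checking the current THP mode and keeping the madvise setting across reboots. This assumes the standard sysfs paths and a GRUB-based boot setup; adjust for your distribution:

# Show the current mode; the value in brackets is the active one
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# To keep madvise across reboots, one option is the kernel command line:
# add transparent_hugepage=madvise to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate the grub config (grub2-mkconfig on RHEL, update-grub on Debian).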
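
And on the original question of process size vs. the mem.* counters: a rough comparison like the one below can show where the growth is. This is only a sketch and assumes a single unbound process; note that the mem.* counters do not cover allocator overhead, per-thread buffers, or heap fragmentation, which is where THP-related bloat tends to show up.

# Sum of what unbound itself reports (bytes -> MB)
unbound-control stats_noreset | awk -F= '/^mem\./ {sum += $2} END {printf "reported: %.0f MB\n", sum / 1024 / 1024}'

# What the kernel sees as resident memory for the process (kB)
grep VmRSS /proc/"$(pidof unbound)"/status

If the reported total stays flat while VmRSS keeps climbing, the extra memory is sitting in the allocator rather than in the caches, which is consistent with the THP change fixing it.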