From sca at andreasschulze.de Fri Mar 13 12:25:34 2015
From: sca at andreasschulze.de (A. Schulze)
Date: Fri, 13 Mar 2015 13:25:34 +0100
Subject: [ldns-users] howto handle offline ksk
Message-ID: <20150313132534.Horde.4Nb5rKY7l_9piUp74AK1Mg2@horde.andreasschulze.de>

Hello,

signing a zone using ldns-signzone is easy, at least if the KSK and ZSK are
both available. I would like to change the setup so that host2 has no access
to ksk.private. This is how I think things would go:

Host1:
  create a KSK
  create a ZSK
  sign this ZSK
  transfer ksk.public + zsk.private + zsk.sig to Host2

Host2:
  include {ksk/zsk}.public in the zone
  include zsk.sig in the zone
  sign the zone
  transfer ksk.public (or the DS(ksk.public)) to the delegating domain

Any suggestions on whether this is correct and how to do it using ldns tools?
(At least: ... not using bind tools ...)

Thanks,
Andreas

From calle.dybedahl at init.se Fri Mar 20 09:39:34 2015
From: calle.dybedahl at init.se (Calle Dybedahl)
Date: Fri, 20 Mar 2015 10:39:34 +0100
Subject: [ldns-users] Memory leak in rdata.c
Message-ID: <134AB2D8-64F7-430C-88A4-F7CB0CC93C8F@init.se>

The function ldns_rdf_free() frees the RDF structure, but not the data it
points to. This tiny patch makes valgrind complain less.

diff --git a/rdata.c b/rdata.c
index 6eb0096..65f057f 100644
--- a/rdata.c
+++ b/rdata.c
@@ -241,6 +241,7 @@ void
 ldns_rdf_free(ldns_rdf *rd)
 {
 	if (rd) {
+		LDNS_FREE(rd->_data);
 		LDNS_FREE(rd);
 	}
 }

--
Calle Dybedahl
calle.dybedahl at init.se

From jelte.jansen at sidn.nl Fri Mar 20 10:03:45 2015
From: jelte.jansen at sidn.nl (Jelte Jansen)
Date: Fri, 20 Mar 2015 11:03:45 +0100
Subject: [ldns-users] Memory leak in rdata.c
In-Reply-To: <134AB2D8-64F7-430C-88A4-F7CB0CC93C8F@init.se>
References: <134AB2D8-64F7-430C-88A4-F7CB0CC93C8F@init.se>
Message-ID: <550BF081.3050700@sidn.nl>

On 03/20/2015 10:39 AM, Calle Dybedahl wrote:
> The function ldns_rdf_free() frees the RDF structure, but not the data
> it points to. This tiny patch makes valgrind complain less.

There might be a gap in the documentation there (or a slight violation of
the principle of least surprise), but that was intentional; to free both
the rdf structure and its data, the function ldns_rdf_deep_free() is
provided:

ldns_rdf_deep_free(ldns_rdf *rd)
{
	if (rd) {
		if (rd->_data) {
			LDNS_FREE(rd->_data);
		}
		LDNS_FREE(rd);
	}
}

Jelte

From calle.dybedahl at init.se Fri Mar 20 10:10:37 2015
From: calle.dybedahl at init.se (Calle Dybedahl)
Date: Fri, 20 Mar 2015 11:10:37 +0100
Subject: [ldns-users] Memory leak in rdata.c
In-Reply-To: <550BF081.3050700@sidn.nl>
References: <134AB2D8-64F7-430C-88A4-F7CB0CC93C8F@init.se> <550BF081.3050700@sidn.nl>
Message-ID: <16B51B1A-86AF-4A22-8EEF-93BE475822D3@init.se>

> On 20 Mar 2015, at 11:03, Jelte Jansen wrote:
>
> There might be a gap in the documentation there (or a slight violation of
> the principle of least surprise), but that was intentional; to free both
> the rdf structure and its data, the function ldns_rdf_deep_free() is
> provided:

Ah. OK.

--
Calle Dybedahl
calle.dybedahl at init.se
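A short note for readers of this thread: the difference comes down to who owns
rd->_data. The following is a minimal sketch of both cases, assuming ldns 1.6.x
and <ldns/ldns.h>; the domain name and the wire buffer are made-up illustration
values, not anything from the messages above.

#include <ldns/ldns.h>

int main(void)
{
	/* Case 1: the rdf owns its data (ldns allocated it), so the
	 * structure and the data must be released together. */
	ldns_rdf *owned = ldns_dname_new_frm_str("example.com.");
	if (!owned) {
		return 1;
	}
	ldns_rdf_deep_free(owned);       /* frees rd->_data and the struct */

	/* Case 2: the rdf only wraps a buffer that is managed elsewhere
	 * (here: the stack); a shallow free leaves that buffer alone. */
	uint8_t wire[] = { 0x00, 0x35 };
	ldns_rdf *view = ldns_rdf_new(LDNS_RDF_TYPE_INT16, sizeof(wire), wire);
	if (!view) {
		return 1;
	}
	ldns_rdf_free(view);             /* frees only the ldns_rdf struct */

	return 0;
}

The shallow ldns_rdf_free() exists precisely for the second case, where freeing
_data would be wrong; ldns_rdf_deep_free() is the one to use when the rdf owns
its buffer.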
From rbraud at gmail.com Fri Mar 20 18:05:54 2015
From: rbraud at gmail.com (Ryan Braud)
Date: Fri, 20 Mar 2015 11:05:54 -0700
Subject: [ldns-users] Question about ldns packet timings
Message-ID:

Hi everyone,

I'm not sure if this is the correct list for this question, but I had a
question/enhancement request for ldns.

Currently, the function ldns_pkt_querytime() returns the amount of time a
request took, measured with gettimeofday(). I was wondering if there is a
good reason why this time is not calculated from a kernel timestamp via the
SIOCGSTAMP ioctl. We are using libldns in our production environment and,
when the processor gets busy, we end up with random spikes in latency due to
scheduling overhead, etc.

Thanks,
Ryan

From edmonds at debian.org Fri Mar 20 21:10:32 2015
From: edmonds at debian.org (Robert Edmonds)
Date: Fri, 20 Mar 2015 17:10:32 -0400
Subject: [ldns-users] Question about ldns packet timings
In-Reply-To:
References:
Message-ID: <20150320211032.GA8347@mycre.ws>

Ryan Braud wrote:
> Currently, the function ldns_pkt_querytime() returns the amount of time a
> request took, measured with gettimeofday(). I was wondering if there is a
> good reason why this time is not calculated from a kernel timestamp via
> the SIOCGSTAMP ioctl. We are using libldns in our production environment
> and, when the processor gets busy, we end up with random spikes in latency
> due to scheduling overhead, etc.

Hi, Ryan:

Doesn't SIOCGSTAMP only give you the timestamp of the last received packet?
If you want the "real" request latency, wouldn't you want kernel timestamps
on both the sent and received packets?

(I think you can do this, at least on Linux, with SO_TIMESTAMPING.)

--
Robert Edmonds
edmonds at debian.org

From rbraud at gmail.com Fri Mar 20 21:32:16 2015
From: rbraud at gmail.com (Ryan Braud)
Date: Fri, 20 Mar 2015 14:32:16 -0700
Subject: [ldns-users] Question about ldns packet timings
In-Reply-To: <20150320211032.GA8347@mycre.ws>
References: <20150320211032.GA8347@mycre.ws>
Message-ID:

Ideally, yes, you would use kernel timestamps for both sent and received
packets. However, ldns does not give you (me) the ability to do this for
queries sent via ldns_resolver_query(), since it keeps the sockets
internally. In theory, I could just build the packet using ldns and send the
packets myself on sockets I create (so that I can set the proper socket
options), but it would be nice if ldns did this for me.

Ryan

On Fri, Mar 20, 2015 at 2:10 PM, Robert Edmonds wrote:

> Doesn't SIOCGSTAMP only give you the timestamp of the last received
> packet? If you want the "real" request latency, wouldn't you want
> kernel timestamps on both the sent and received packets?
>
> (I think you can do this, at least on Linux, with SO_TIMESTAMPING.)
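For reference, this is where the gettimeofday()-based figure discussed above
surfaces in the library: ldns_pkt_querytime() on the answer packet. A minimal
sketch follows, assuming a usable system resolv.conf; the query name is just a
placeholder.

#include <stdio.h>
#include <ldns/ldns.h>

int main(void)
{
	ldns_resolver *res = NULL;
	ldns_rdf *name;
	ldns_pkt *pkt;

	/* Build a resolver from the system resolv.conf (NULL = default path). */
	if (ldns_resolver_new_frm_file(&res, NULL) != LDNS_STATUS_OK) {
		return 1;
	}

	name = ldns_dname_new_frm_str("nlnetlabs.nl.");
	if (!name) {
		ldns_resolver_deep_free(res);
		return 1;
	}

	pkt = ldns_resolver_query(res, name, LDNS_RR_TYPE_A,
	                          LDNS_RR_CLASS_IN, LDNS_RD);
	if (pkt) {
		/* Round-trip time measured in userspace with gettimeofday(),
		 * reported in milliseconds; this is the value the thread is
		 * discussing. */
		printf("query time: %u ms\n", (unsigned) ldns_pkt_querytime(pkt));
		ldns_pkt_free(pkt);
	}

	ldns_rdf_deep_free(name);
	ldns_resolver_deep_free(res);
	return 0;
}

Getting kernel timestamps instead would, as noted in the thread, mean opening
and timestamping the sockets yourself (e.g. SO_TIMESTAMPING on Linux) and using
ldns only to build and parse the wire-format messages, for example via
ldns_pkt2wire() and ldns_wire2pkt().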
From simtom at domreg.lt Thu Mar 26 11:38:05 2015
From: simtom at domreg.lt (Tomas Simonaitis)
Date: Thu, 26 Mar 2015 13:38:05 +0200
Subject: [ldns-users] ldns-verify-zone two KSK
Message-ID: <5513EF9D.6070203@domreg.lt>

Hello,

ldns-verify-zone (version 1.6.13) considers a signed zone to be invalid when
two KSK keys are present in the zone (e.g. during a rollover) but only one
key is supplied via -k. The error is:

"Error: No keys with the keytag and algorithm from the RRSIG found for
DNSKEY" (LDNS_STATUS_CRYPTO_NO_MATCHING_KEYTAG_DNSKEY)

Using -V 5 shows that the failing Signature: is the RRSIG for the ZSK, which
is signed using a DNSKEY not specified via -k.

When checking the zone we supply only one key via -k (the one currently
published in the parent). During a KSK rollover there is also a second
(upcoming) KSK in the zone (without a corresponding DS in the parent).
Shouldn't such a zone be treated as valid by ldns-verify-zone?

Best Regards,
Tomas Simonaitis
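The error text above comes from matching the keytag (and algorithm) in an RRSIG
against the DNSKEYs that were supplied. Below is a small sketch of that matching
step, assuming the ldns library API; keytag_covered is a made-up helper name,
not something ldns provides.

#include <ldns/ldns.h>

/* Return 1 if some DNSKEY in `keys` carries the keytag referenced by
 * `rrsig`, 0 otherwise. Keytag collisions are ignored here; a real
 * check also compares the algorithm, as the error message implies. */
int
keytag_covered(const ldns_rr *rrsig, const ldns_rr_list *keys)
{
	ldns_rdf *keytag_rdf = ldns_rr_rrsig_keytag(rrsig);
	uint16_t tag;
	size_t i;

	if (!keytag_rdf) {
		return 0;
	}
	tag = ldns_rdf2native_int16(keytag_rdf);

	for (i = 0; i < ldns_rr_list_rr_count(keys); i++) {
		ldns_rr *key = ldns_rr_list_rr(keys, i);

		if (ldns_rr_get_type(key) == LDNS_RR_TYPE_DNSKEY &&
		    ldns_calc_keytag(key) == tag) {
			return 1;
		}
	}
	return 0;
}

When only the KSK published in the parent is passed via -k, an RRSIG made by
the other (upcoming) KSK has no matching key in that list, which matches the
situation described above.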