Reducing select() usage under load

Aaron Hopkins lists at die.net
Fri May 12 18:57:53 UTC 2006


On Fri, 12 May 2006, Erik Rozendaal wrote:

> But you will have to be careful not to starve other sockets that may have 
> incoming requests waiting.  Since NSD will usually run with multiple sockets 
> (UDP, TCP, IPv4, IPv6, multiple interfaces) this can become quite hard and/or 
> expensive.  That's why NSD currently uses a select and processes all readable 
> sockets (not just the first!) every iteration.

It is hard and expensive if you want to ensure perfect fairness and
interleave responses from every socket.  But I think there is a compromise
available between perfect fairness and only answering requests from one
socket when it is flooded.

Changing that while(1) I added to something that would only loop up to a
fixed number of times (e.g. 100) would be trivial.  You'd still amortize the
cost of the select() over many UDP packets, without being able to starve
other sockets for more than a few milliseconds.  You'd concentrate on work
from one socket, then switch to the next one and do everything pending up to
the same limit.  And the performance gains would be approximately the same.
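Concretely, the bounded loop might look something like this rough sketch
(the UDP socket is assumed to be non-blocking, and process_query() here is
just a placeholder for NSD's real query handling):

    #include <sys/types.h>
    #include <sys/socket.h>

    #define MAX_PER_SOCKET 100      /* packets drained per select() */

    /* Placeholder for NSD's real query handler. */
    void process_query(int fd, unsigned char *buf, ssize_t len,
                       struct sockaddr_storage *from, socklen_t fromlen);

    static void drain_udp_socket(int fd)
    {
        unsigned char buf[512];     /* traditional DNS UDP payload limit */
        int i;

        for (i = 0; i < MAX_PER_SOCKET; i++) {
            struct sockaddr_storage from;
            socklen_t fromlen = sizeof(from);
            ssize_t len = recvfrom(fd, buf, sizeof(buf), 0,
                                   (struct sockaddr *) &from, &fromlen);
            if (len < 0)
                break;  /* EAGAIN: queue drained, back to select() */
            process_query(fd, buf, len, &from, fromlen);
        }
    }

The main loop would call drain_udp_socket() on each readable socket in
turn, so no socket waits longer than one bounded pass before being served.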

As for TCP fairness in this scheme, you'd probably also want to loop
accepting TCP connections until current_tcp_count >= maximum_tcp_count.
But since each TCP connection gets its own socket, each one will get some
attention every select(), and select()s will still be happening hundreds of
times per second.
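The TCP side could be bounded the same way; here's a rough sketch using
the counters named above (handle_tcp_connection() is a placeholder, and
the listening socket is assumed to be non-blocking):

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Placeholders for NSD's connection bookkeeping and handling. */
    extern int current_tcp_count;
    extern int maximum_tcp_count;
    void handle_tcp_connection(int fd);

    static void accept_tcp_connections(int listen_fd)
    {
        /* Accept until the connection limit is hit or the backlog
         * is empty, then return to select(). */
        while (current_tcp_count < maximum_tcp_count) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0)
                break;  /* EAGAIN: backlog drained */
            current_tcp_count++;
            handle_tcp_connection(fd);
        }
    }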

                                     -- Aaron


