sysctl.conf and other settings for high performance webservers


There are a couple of key settings on CentOS servers that significantly help high performance web serving, and we put them in by default across all of our managed machines (a complete sysctl.conf snippet follows the list below):

  • net.ipv4.tcp_syncookies = 1
    SYN cookies are usually thought of as a way to stop denial of service attacks from taking a website down, but a heavy traffic site is not much different from one that is under a constant denial of service!
  • net.ipv4.netfilter.ip_conntrack_max = 300000
    Netfilter under Linux does a great job, but its connection tracking is restricted by conservative defaults intended to stop it taking up too many system resources, and this is one of those settings I feel is often set too low.  We ramp it up to 300,000, which means Netfilter can track up to 300,000 “sessions” (such as an HTTP connection) at one time.  If you’ve got 10,000 people on your website at once, you’ll definitely want to adjust this one!
  • net.ipv4.tcp_max_syn_backlog = 10240
    An application such as Nginx is very capable of serving as many TCP connections as the operating system and hardware can handle.  That said, there will be a backlog of TCP connections in a pending state before a user-space application such as Nginx gets to call accept().  The key is to make sure the backlog of unaccepted TCP connections never exceeds the number above, otherwise the connection packets are effectively dropped and some clients will experience delays, if not a complete outage.  We find 10240 is a high enough number on modern servers.
  • net.core.netdev_max_backlog = 4000
    This one is important, particularly for servers that operate past 100Mbit/s.  It governs how many incoming packets can be queued between the network interface and the kernel’s packet processing.  At gigabit speeds on busy servers, seeing the queue exceed the default of 1000 is pretty common.  We usually put this up to 4,000 for web servers.
  • kernel.panic = 10
    While unrelated to performance, there’s nothing worse on a busy web server than a kernel panic.  It isn’t common, but when you push a server to its limits you can certainly come across kernel panics more often than you might otherwise, and this setting makes the kernel reboot itself ten seconds after a panic rather than sitting there waiting for someone to hit reset, which helps reduce downtime on production servers.
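
Putting those together, here’s roughly how the block looks in /etc/sysctl.conf (a sketch of the settings above; on newer kernels the conntrack key is named net.netfilter.nf_conntrack_max instead of net.ipv4.netfilter.ip_conntrack_max, so adjust to suit your kernel).  Apply it without a reboot by running /sbin/sysctl -p:

    # high performance web server settings described above
    net.ipv4.tcp_syncookies = 1
    net.ipv4.netfilter.ip_conntrack_max = 300000
    net.ipv4.tcp_max_syn_backlog = 10240
    net.core.netdev_max_backlog = 4000
    kernel.panic = 10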

We usually change the TCP congestion control algorithm as well, by adding the following to rc.local (see the note after the command for how to confirm it took effect):

  • /sbin/modprobe tcp_htcp
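
Loading the module makes H-TCP available, but depending on the kernel you may also need to select it explicitly, and it’s worth confirming which algorithm is actually in use — both can be done through sysctl:

  • /sbin/sysctl net.ipv4.tcp_congestion_control
  • /sbin/sysctl -w net.ipv4.tcp_congestion_control=htcp

The first command shows the current algorithm; the second switches it to htcp if loading the module alone didn’t.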

You will also want to increase the transmit queue on your interface by adding the following to rc.local (change eth0 to your interface name; an iproute2 alternative is shown below):

  • /sbin/ifconfig eth0 txqueuelen 10000
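
If your distribution has moved to iproute2 and no longer ships ifconfig, the same change can be made with the ip command (again, substitute your interface name for eth0):

  • /sbin/ip link set dev eth0 txqueuelen 10000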

There’s a lot of commentary online about changing TCP memory buffers and sizes.  Personally I haven’t found them to make much difference on a suitably spec’d server.  One day I might get around to having a look at how these affect performance, but for now the above settings are known to achieve gigabit HTTP serving speeds on our web servers, so that’s good enough for me!
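
For anyone who does want to experiment, the settings usually being discussed are net.ipv4.tcp_rmem / net.ipv4.tcp_wmem and net.core.rmem_max / net.core.wmem_max.  Something along these lines turns up frequently online, but treat the numbers as purely illustrative rather than a recommendation we make:

    # illustrative TCP buffer sizing only — not part of our default tuning
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216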
