Here are five rules to help you optimize your Drupal installation, along with examples to illustrate how to apply them. Server optimization is a huge, constantly evolving field, but we can study simple cases to understand more complex ones. Here we will focus on a few variables and measure their impact on performance.

Rule No. 1: You can’t optimize without benchmarking

There are various tools you need to make a proper assessment, but for this blog we will focus on the most important one: Apache Benchmark (ab). Use this tool to query specific pages on your website (you can even pass it cookie information to simulate authenticated users) and measure the response. You can run Apache Benchmark at the command line on the server you are testing and still obtain valid results, because it has a low CPU and RAM footprint. Here is a typical use:

$ ab -n 1000 -c 20 http://example.com/

The values here are:

n = number of page requests
c = concurrent connections

The most important parameter is c, the number of concurrent requests, while n simply needs to be large enough to provide stable results. The trick is to test the website with various c values, starting with a small number and increasing it until the returned value of "Requests per second" starts dropping. For example:

$ ab -n 1000 -c 20 http://example.com/ | grep 'Requests per second'
Requests per second:    45.29 [#/sec] (mean)
$ ab -n 1000 -c 40 http://example.com/ | grep 'Requests per second'
Requests per second:    46.91 [#/sec] (mean)
$ ab -n 1000 -c 60 http://example.com/ | grep 'Requests per second'
Requests per second:    8.55 [#/sec] (mean)
$ ab -n 1000 -c 80 http://example.com/ | grep 'Requests per second'
Requests per second:    2.21 [#/sec] (mean)

We could refine the c value using smaller increments, but stopping at the nearest multiple of 10 is usually enough.
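To avoid retyping the command for every concurrency level, the whole sweep can be scripted. Here is a minimal sketch, assuming a Bourne-style shell and the same example URL:

for c in 10 20 30 40 50 60 70 80; do
    printf 'c=%s  ' "$c"
    ab -n 1000 -c "$c" http://example.com/ | grep 'Requests per second'
done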

Rule No. 2: Reduce memory footprint until paging stops

The most likely reason the requests-per-second rate suddenly drops is that we have overloaded the memory and the system has started swapping to the paging file. This has the same effect on overall performance as opening too many applications on Windows, Mac OS X, or any other system that supports paging. After you have run your benchmark, take a look at how much swap space is used. Here are typical values on a 4GB server:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          4011       1481       2530          0         45        824
-/+ buffers/cache:        611       3400
Swap:         8191       2145       8191

Here we are using 2145 MB of swap, which was most likely filled when the benchmark ran with the largest c value. The first thing to do is clear the swap file and check that the value is back to zero:

$ sudo swapoff -a
$ sudo swapon -a
$ free -m
             total       used       free     shared    buffers     cached
Mem:          4011       1379       2632          0         45        834
-/+ buffers/cache:        499       3512
Swap:         8191          0       8191

Now the trick is to redo the test above with c values close to the one where we saw the performance drop, checking the paging file after each run:

$ ab -n 1000 -c 30 http://example.com/ | grep 'Requests per second'
Requests per second:    45.93 [#/sec] (mean)
$ free -m | grep Swap
Swap:         8191          0       8191
$ ab -n 1000 -c 40 http://example.com/ | grep 'Requests per second'
Requests per second:    40.31 [#/sec] (mean)
$ free -m | grep Swap
Swap:         8191        312       8191
$ ab -n 1000 -c 50 http://example.com/ | grep 'Requests per second'
Requests per second:    12.27 [#/sec] (mean)
$ free -m | grep Swap
Swap:         8191       1902       8191

As you can see from the results above, the request rate starts dropping significantly once the server begins paging, at around 40 concurrent requests.
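If you would rather catch paging while it happens than inspect the swap file afterwards, you can leave vmstat running in a second terminal during the benchmark; the si and so columns report swap-ins and swap-outs:

$ vmstat 5
# sustained non-zero values in the si/so columns during a run
# mean the server is paging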

Rule No. 3: More connections is less connections

By default, Apache and MySQL are configured to accept 150 connections. Most PHP applications, such as Drupal, open only one database connection per thread, so you can safely set the number of connections on both sides to the same value. One note: MySQL actually allows max_connections + 1 connections, so you always have an extra connection available for administration. Unfortunately, in an attempt to "optimize" a server, some administrators will bump the number of connections to 500, 10,000, or even more. This has a catastrophic effect when the server comes under load. Drupal websites usually require over 32 MB of RAM per request, and you can eyeball the average value by looking at the output of the top command:

$ top
top - 20:28:52 up 12:11,  2 users,  load average: 1.34, 0.55, 0.35
Tasks:  93 total,  10 running,  83 sleeping,   0 stopped,   0 zombie
Cpu(s): 50.2%us, 21.4%sy,  0.0%ni, 28.0%id,  0.0%wa,  0.0%hi,  0.1%si,  0.3%st
Mem:   4108192k total,  1787064k used,  2321128k free,    46520k buffers
Swap:  8388604k total,        0k used,  8388604k free,   861772k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
32080 www-data  20   0  419m  64m  27m S   26  1.6   0:01.19 apache2
32110 www-data  20   0  419m  65m  28m S   25  1.6   0:01.69 apache2
32025 www-data  20   0  419m  63m  27m S   24  1.6   0:01.89 apache2
32065 www-data  20   0  417m  62m  27m S   22  1.6   0:01.13 apache2
32178 www-data  20   0  408m  53m  27m R   22  1.3   0:00.66 apache2
32024 www-data  20   0  418m  64m  28m R   21  1.6   0:02.85 apache2
32176 www-data  20   0  417m  62m  27m S   21  1.6   0:00.99 apache2
32032 www-data  20   0  408m  53m  27m S   21  1.3   0:02.45 apache2
32104 www-data  20   0  417m  62m  28m S   21  1.6   0:02.55 apache2
32116 www-data  20   0  415m  59m  27m R   21  1.5   0:01.79 apache2
32119 www-data  20   0  417m  62m  27m S   21  1.6   0:01.04 apache2
32164 www-data  20   0  417m  62m  27m R   21  1.6   0:00.99 apache2
32179 www-data  20   0  408m  53m  27m R   21  1.3   0:00.63 apache2
32222 www-data  20   0  403m  48m  27m S   17  1.2   0:00.50 apache2
23906 mysql     20   0  675m 115m 6628 S   14  2.9   2:33.85 mysqld
32147 www-data  20   0  419m  64m  27m R   14  1.6   0:02.26 apache2
32226 www-data  20   0  416m  56m  23m R    9  1.4   0:00.26 apache2

The trick is to estimate how much memory Apache uses by averaging the RES value (resident, non-swapped physical memory) of the apache2 processes. Here the average RES is about 62 MB.
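Rather than eyeballing the top output, you can let ps compute the average for you. A quick sketch (ps -C apache2 -o rss= prints each apache2 process's resident set size in kilobytes, with no header):

$ ps -C apache2 -o rss= | awk '{sum += $1; n++} END {printf "%.0f MB average RES\n", sum / n / 1024}'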

You also need to look at the amount of free memory when the server is not under load (when no apache2 processes are running):

$ top
top - 20:40:45 up 12:23,  1 user,  load average: 0.29, 0.38, 0.45
Tasks:  71 total,   1 running,  70 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4108192k total,  1388132k used,  2720060k free,    46680k buffers
Swap:  8388604k total,        0k used,  8388604k free,   866208k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
32061 jfparadi  20   0 19272 1244  932 R    0  0.0   0:01.52 top
    1 root      20   0 23628 1780 1244 S    0  0.0   0:00.64 init
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S    0  0.0   0:02.56 ksoftirqd/0
    4 root      RT   0     0    0    0 S    0  0.0   0:00.02 migration/0
    5 root      RT   0     0    0    0 S    0  0.0   0:00.03 migration/1
    6 root      20   0     0    0    0 S    0  0.0   0:01.19 ksoftirqd/1
    7 root      RT   0     0    0    0 S    0  0.0   0:00.03 migration/2
    8 root      20   0     0    0    0 S    0  0.0   0:00.90 ksoftirqd/2
    9 root      RT   0     0    0    0 S    0  0.0   0:00.02 migration/3
   10 root      20   0     0    0    0 S    0  0.0   0:00.74 ksoftirqd/3
   11 root      20   0     0    0    0 S    0  0.0   0:01.94 events/0
   12 root      20   0     0    0    0 S    0  0.0   0:01.91 events/1
   13 root      20   0     0    0    0 S    0  0.0   0:01.79 events/2
   14 root      20   0     0    0    0 S    0  0.0   0:03.11 events/3
   15 root      20   0     0    0    0 S    0  0.0   0:00.00 cpuset
   16 root      20   0     0    0    0 S    0  0.0   0:00.00 khelper
   19 root      20   0     0    0    0 S    0  0.0   0:00.04 netns
   20 root      20   0     0    0    0 S    0  0.0   0:00.00 async/mgr
   23 root      20   0     0    0    0 S    0  0.0   0:00.00 xenwatch
   24 root      20   0     0    0    0 S    0  0.0   0:00.00 xenbus
   56 root      20   0     0    0    0 S    0  0.0   0:00.11 sync_supers

Here, the amount of free memory is 2720060 KB, or 2656 MB. Now you want to confirm the c value found above using a simple equation:

c = FREE / RES
c = 2656 MB / 62 MB
c = 43 (close to the 40 found above)

This value is the threshold where the server starts melting down, and we need to prevent Apache from ever reaching it. We can reduce the number of clients from 150 to 40 and see what the impact on performance is.
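If you want to script the same arithmetic, here is a minimal sketch that reads the free column of the Mem row from free -m and divides it by your measured average RES (62 MB here; substitute your own value):

FREE_MB=$(free -m | awk '/^Mem:/ {print $4}')   # the "free" column of the Mem row
RES_MB=62                                       # average apache2 RES measured above
echo "suggested MaxClients: $(( FREE_MB / RES_MB ))"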

For Apache, edit the configuration file under the section for the specific MPM implementation you are using and adjust this value:

MaxClients 150

For MySQL, set or adjust this value:

max_connections = 150

Once you have adjusted both values to the number found above (in our case, 40), re-run your benchmark:

$ ab -n 1000 -c 20 http://example.com/ | grep 'Requests per second'
Requests per second:    44.43 [#/sec] (mean)
$ free -m | grep Swap
Swap:         8191          0       8191
$ ab -n 1000 -c 40 http://example.com/ | grep 'Requests per second'
Requests per second:    45.11 [#/sec] (mean)
$ free -m | grep Swap
Swap:         8191          0       8191
$ ab -n 1000 -c 60 http://example.com/ | grep 'Requests per second'
Requests per second:    43.27 [#/sec] (mean)
$ free -m | grep Swap
Swap:         8191          0       8191
$ ab -n 1000 -c 80 http://example.com/ | grep 'Requests per second'
Requests per second:    44.59 [#/sec] (mean)
$ free -m | grep Swap
Swap:         8191          0       8191

As you can see, paging has been eliminated. But why are we able to answer more than 40 concurrent requests? Simply because Apache places the extra requests in a queue and processes only 40 of them concurrently. Responses to individual clients are still delayed, but more requests are answered per second because the memory is no longer overloaded. Note also that the requests-per-second rate remains roughly constant, because the processing rate depends on the CPU. In fact, both numbers are expressed in equivalent units: requests per second and cycles per second.
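For reference, here is a minimal sketch of both configuration sections after the change. The Apache directives assume the prefork MPM (the names differ under the worker MPM), and the values surrounding MaxClients are illustrative defaults, not recommendations:

# Apache (e.g. httpd.conf or apache2.conf), prefork MPM section
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           40
    MaxRequestsPerChild 2000
</IfModule>

# MySQL (my.cnf)
[mysqld]
max_connections = 40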

Rule No. 4: MySQL = RAM

Obviously, the results above assume that the database has been properly configured and that there are no slow queries. In fact, MySQL typically does not consume much CPU on Drupal websites; on a well-configured server, it will use only 10 to 25% of the CPU. Apart from search activity, queries are very redundant, so MySQL benefits greatly from having its caches and threads tuned. It is not uncommon to see MySQL configured to use close to 50% of the available RAM. These numbers obviously depend on the application, but the rule is to give MySQL as much RAM as possible.
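As a starting point only, here is a hedged sketch of the cache-related my.cnf settings for a 4 GB server like the one above. Every number is an assumption to benchmark against your own workload, not a recommendation:

[mysqld]
# illustrative starting values for a 4 GB server running Drupal
innodb_buffer_pool_size = 1G
key_buffer_size         = 256M
query_cache_size        = 64M
thread_cache_size       = 64
max_connections         = 40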

Rule No. 5: Apache = CPU

On Drupal websites, Apache spends most of its time executing PHP code. In fact, a good indicator of a well-optimized server is Apache using 100% of the available CPU, which normally happens once all the other bottlenecks in the system have been removed. If you allow Apache to answer too many concurrent requests, it will simply divide the available CPU across all of them, and the rate of requests per second does not increase!
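A quick way to confirm that Apache is actually CPU-bound during a benchmark run is to watch the processors directly. A sketch, assuming the sysstat package (which provides mpstat) is installed:

$ mpstat 1 5
# if %idle sits near zero while %usr dominates for the whole run,
# the CPU is the bottleneck, which is where a well-tuned stack should be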

Conclusion

In this blog, we only touched on a few aspects of server optimization and had to make several assumptions, but I hope the five rules were clear and that they will help you make better deployment choices.
