Recently I received an email pointing me to this article, with a request that I test Nginx, which is supposedly a better web serving application. The article claims over 500% more performance from the same hardware just by switching to a new piece of software, which would be awesome indeed. From experience I know that such claims call for a healthy dose of skepticism.
All the software was compiled from source (details below). Benchmarks were conducted with the ApacheBench tool (ab) from the Apache installation, running on the same machine as the server. Both servers had request logging disabled. Tests were conducted once with the keepalive feature enabled and once with it disabled. Each test was repeated five times and the average taken. The test files were:
- HelloWorld.php – a short PHP script which only echoes the string “Hello, World!” (13 bytes), intended to measure the processing overhead of PHP vs a static file
- HelloWorld.txt – a static file containing the string “Hello, World!” (also 13 bytes), intended to show static file serving overhead
- 100KB.txt – a static 100KB file
- 1MB.txt – static 1MB file
- index.php – the front page of a certain application, which is quite CPU intensive and involves PHP processing, some DB querying, file cache reads and HTML template processing
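For reference, the static test files can be recreated with a few commands. This is a sketch; DOCROOT and the exact paths are illustrative, not taken from the original setup.

```shell
# Recreate the static test files with the sizes described above.
DOCROOT=./bench
mkdir -p "$DOCROOT"

# 13-byte "Hello, World!" payloads, static and via PHP
printf 'Hello, World!' > "$DOCROOT/HelloWorld.txt"
cat > "$DOCROOT/HelloWorld.php" <<'EOF'
<?php echo 'Hello, World!'; ?>
EOF

# 100KB and 1MB files of zero bytes (content does not matter for the test)
dd if=/dev/zero of="$DOCROOT/100KB.txt" bs=1024 count=100 2>/dev/null
dd if=/dev/zero of="$DOCROOT/1MB.txt" bs=1024 count=1024 2>/dev/null
```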
Test system and ./configure commands
- Hardware: HP DL380 G5
- Hardware: 2x Intel Xeon E5420 (4 cores each, total of 8 cores)
- Hardware: 8GB of ECC RAM
- Hardware: Smart Array P400i RAID-1 with 2x 147GB SAS drives
- OS: Slackware 12.2 with almost all software compiled from source
- Filesystem: ext3
- Apache version: 2.2.11, php via mod_php
- Nginx version: 0.7.59, php via request proxying to php-fpm (via socket)
- PHP version: 5.2.9
- Eaccelerator version: 0.9.5.3 (for both, Apache and Nginx)
- MySQL version: 5.0.77
- OpenSSL version: 0.9.8k
- both servers had request logging disabled
Configure command for Apache:
./configure --prefix=/usr/local/$PDESTDIR_HTTPD --sysconfdir=/etc/httpd \
--enable-authn-file --enable-authn-default \
--enable-authz-host --disable-authz-groupfile --enable-authz-user --enable-authz-default \
--disable-include --disable-filter --disable-charset-lite \
--enable-env --enable-setenvif \
--enable-ssl --with-ssl=/usr/local/openssl-$PVERSION_OPENSSL \
--enable-http --enable-mime --enable-status \
--disable-autoindex --disable-asis \
--enable-cgi --disable-cgid \
Configure command for Nginx:
./configure --prefix=/usr/local/$PDIR \
Configure command for PHP:
-----[These lines are for PHP with Apache (mod_php)]-----
./configure --prefix=/usr/local/$PDESTDIR_HTTPD/$PDIR \
--with-apxs2=/usr/local/$PDESTDIR_HTTPD/bin/apxs --enable-cli --enable-cgi \
-----[These lines are for PHP with Nginx (php-fpm)]-----
./configure --prefix=/usr/local/php-fpm \
--enable-cli --enable-fastcgi --enable-fpm \
-----[These lines are common for both]-----
--with-curl --with-curlwrappers \
--enable-dba=shared --with-db4 --enable-inifile --enable-flatfile \
--enable-dom --with-libxml-dir \
--with-gd --with-jpeg-dir --with-png-dir --with-freetype-dir \
--enable-hash --with-mcrypt \
--with-iconv=/usr/local/lib --with-iconv-dir=/usr/local/lib \
--with-imap=/usr/local/imap-$PVERSION_CYRUSIMAP --with-imap-ssl \
--enable-mbstring --enable-mbregex --enable-mbregex-backtrack \
--with-mysql=/usr/local/mysql-$PVERSION_MYSQL --with-mysqli=/usr/local/mysql-$PVERSION_MYSQL/bin/mysql_config \
--enable-pdo --with-pdo-mysql=/usr/local/mysql-$PVERSION_MYSQL --with-pdo-sqlite --enable-sqlite-utf8 \
--enable-session --with-mm \
--enable-sysvmsg --enable-sysvsem --enable-sysvshm \
--enable-xml --enable-xmlreader --with-xmlrpc --enable-xmlwriter --with-xsl \
Runtime configuration files
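The full configuration files are not reproduced here, but the key part of the Nginx setup, proxying .php requests to php-fpm over a Unix socket, looks roughly like this (a sketch; the socket path and location pattern are illustrative, not the exact values used in the tests):

```nginx
# Illustrative nginx location block for handing .php requests to php-fpm
location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_pass   unix:/tmp/php-fpm.sock;  # php-fpm listening on a Unix socket
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
}
```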
Here you can see the overhead each PHP request imposes. Interesting, though not unexpected, is the fact that Apache performs much better in this test. The reason is that Apache has PHP processing “built in” via the mod_php module, while Nginx proxies PHP requests to a separate application server (php-fpm). Nginx’s performance in the graph above is roughly half that of Apache, which is easily explained by “two servers doing the work of one”. Keep in mind that almost no PHP processing is done here, just a single echo statement.
In this test Apache starts to lag behind. Nginx performs better even without the keepalive feature, and with keepalive enabled it outperforms Apache by more than a factor of 2. This test is here only to demonstrate the overhead of static file serving.
Here the real static file serving benchmark begins. With a file size of 100KB we come closer to what one might call a “real world benchmark”. Again, Nginx without the keepalive feature performs on par with Apache with keepalive enabled, but Nginx with keepalive enabled outperforms Apache by a factor of 2. The throughput we can calculate here is roughly 1.2 GB/s, but keep in mind that all tests used the loopback network interface.
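The throughput figure is just requests per second multiplied by the file size. A back-of-the-envelope check (the 12,500 req/s figure below is an illustrative assumption consistent with ~1.2 GB/s, not a number read off the graphs):

```shell
# Throughput = requests/sec * bytes per response.
REQ_PER_SEC=12500       # illustrative rate, not from the original graphs
FILE_BYTES=102400       # 100KB.txt
echo "$((REQ_PER_SEC * FILE_BYTES)) bytes/s"   # about 1.28 GB/s (decimal)
```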
Again a “real world” static file serving benchmark, this time with a file size of 1MB and without the keepalive feature (keepalive no longer matters, as the majority of the time is spent transferring data rather than on the TCP overhead of establishing new connections).
Custom real world application performance comparison results:
Here you can see Nginx has a slight edge, but I must note that Apache had .htaccess parsing enabled (the AllowOverride All directive), a feature Nginx lacks. The drop in performance at high concurrency levels probably results from too few concurrent database connections being available to the system.
Apache HelloWorld.php VS HelloWorld.txt comparison results:
Note how close dynamically echoing the same content from a PHP script comes to static file serving.
Nginx HelloWorld.php VS HelloWorld.txt comparison results:
For Nginx, the difference between static and dynamic overhead is notably larger.
If memory usage is of real importance to you, then you should seriously consider Nginx. With Nginx, all static file serving can be handled by only as many worker processes as there are CPU cores on your server. In the example above that means 8 cores and 8 worker processes, no matter how many clients connect simultaneously. For PHP, there were 16 php-fpm workers alive in the example above (which is enough as long as you are not doing blocking IO). Summing this up gives a decently low memory usage.
Apache (with the prefork MPM, which mod_php requires), on the other hand, creates as many processes as there are clients (up to the limit of the MaxClients directive), regardless of whether the clients request static files or PHP applications. So for 200 clients it creates 200 processes with embedded PHP, which gives a far larger memory footprint than Nginx.
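The difference boils down to a few directives on each side. Roughly, with illustrative values matching the setup described above:

```
# nginx.conf -- worker count is tied to CPU cores, not to client count
worker_processes  8;

# httpd.conf (prefork MPM) -- process count grows with the number of clients
StartServers      8
MaxClients        200
```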
Conclusion or “should you switch from one to another?”
Short answer: I do not know.
Longer answers are here:
- If you host many websites and your users rely on .htaccess files and change them frequently, then the answer is probably “no”. The cost of switching to Nginx and converting all the configuration to the new format can easily approach the cost of buying another server.
- If you run a single application on multiple servers and most of the processing power is not spent serving static content, the answer is also probably “no”.
- If you are mainly serving static content, the answer is obviously “yes”.
- If you are building a fresh webhosting system, the answer is probably “yes”, assuming your users will not miss the .htaccess functionality or that it will be provided by other means.
- If you are consolidating services with some virtualization technology, then the answer is probably “yes”, as Nginx tends to have a smaller memory footprint than Apache.
- If you are looking at Nginx as a way to optimize your PHP serving, look again, not at Nginx but at your application code.
I hope that this comparison helps you with your decision. If you have some questions, feel free to email me or post a comment below.
The ApacheBench tool was invoked with the following command:
ab -n NREQ -c NCONC [-k] http://server.domain.com/bench/FileName
NREQ is the number of requests:
- HelloWorld.php: 500000
- HelloWorld.txt: 500000
- 100KB.txt: 500000
- 1MB.txt: 50000
- AppFront: 5000
NCONC is the number of concurrent requests, as noted in the graphs.
Each test was repeated 5 times and the average was calculated.
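The repeat-and-average step can be scripted roughly like this. The awk pattern matches the “Requests per second” line of ab’s standard output; the URL, request counts and concurrency are placeholders for the values listed above.

```shell
# Run ab once and extract the "Requests per second" figure.
run_ab() {
    ab -n 500000 -c "$1" -k http://server.domain.com/bench/100KB.txt \
        | awk '/Requests per second/ {print $4}'
}

# Average the numbers read from stdin, one per line.
average() {
    awk '{ sum += $1 } END { printf "%.2f\n", sum / NR }'
}

# Five runs at concurrency 100, averaged:
# for i in 1 2 3 4 5; do run_ab 100; done | average
```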
Thanks to Sean Osh for requesting this info.
David pointed out in the comments below that php-fpm had its max children setting at 16, which is far fewer than Apache’s limit. Re-testing everything now with the same software (OpenSSL 0.9.8, anyone? :) on the same hardware would require too many resources, or, to put it plainly, it is downright impossible for me at the moment.
But if a diligent reader does it properly, publishes the results and notifies me with the URL, I will gladly include that URL in this article.