Monday, August 5, 2013

Nginx Configuration and Optimizing Tips and Tricks


Organize Nginx Configuration Files

Normally Nginx configuration files are located under the /etc/nginx path.
One good way to organize the configuration files is to use a Debian/Ubuntu Apache-style setup:

## Main configuration file ##
/etc/nginx/nginx.conf
 
## Virtualhost configuration files on ##
/etc/nginx/sites-available/
/etc/nginx/sites-enabled/
 
## Other config files on (if needed) ##
/etc/nginx/conf.d/

Virtual host files have two paths because the sites-available directory can contain anything: test configs, freshly copied or created configs, old configs and so on. The sites-enabled directory contains only the configurations that are actually enabled, in practice just symbolic links pointing to files in sites-available.
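
To enable a site you just symlink its config from sites-available into sites-enabled; a minimal example (the file name example.com.conf is only illustrative):

## Enable a virtual host by symlinking it into sites-enabled ##
ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/example.com.conf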

Remember to add the following includes at the end of your nginx.conf file:

## Load virtual host conf files. ##
include /etc/nginx/sites-enabled/*;
 
## Load other configs from conf.d/ ##
include /etc/nginx/conf.d/*;
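
Whenever you add or change configuration files, it's worth testing the configuration before reloading. A minimal sketch (the service command varies by distribution, so treat it as an example):

## Test the configuration and reload Nginx ##
nginx -t
service nginx reload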

Determine Nginx worker_processes and worker_connections

The default setup is okay for worker_processes and worker_connections, but these values can be optimized a little:

max_clients = worker_processes * worker_connections

Even a basic Nginx setup can handle hundreds of concurrent connections:

worker_processes  1;
worker_connections  1024;

Normally 1,000 concurrent connections per server is fine, but sometimes other parts of the server, such as the disks, are slow, and that causes Nginx to block on I/O operations. To avoid blocking, use one worker process per processor core, as follows.

Worker Processes

worker_processes [number of processor cores];

To check how many processor cores you have, run the following command:

cat /proc/cpuinfo |grep processor
processor : 0
processor : 1
processor : 2
processor : 3

So here there are 4 cores, and the final worker_processes setting could be the following:

worker_processes 4;
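
Alternatively, you could just count the cores directly; grep -c counts the matching lines, and nproc is part of GNU coreutils:

grep -c processor /proc/cpuinfo
nproc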

Worker Connections
Personally I stick with 1024 worker connections, because I don't have any reason to raise this value. But if, for example, 4096 concurrent connections (4 workers × 1024) is not enough, it's possible to try doubling this and set 2048 connections per process.

The final worker_connections setting could be the following:

worker_connections 1024;
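
For clarity, here is a minimal sketch of where these directives live in nginx.conf, using the values discussed above (worker_connections belongs inside the events block):

worker_processes  4;

events {
    worker_connections  1024;
}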

I have seen some configurations where server admins, used to Apache, think that if they set Nginx worker_processes to 50 and worker_connections to 20000 then the server can handle all of the traffic it gets monthly at once... but no, it's not true. It just wastes resources and might cause some serious problems.


Hide Nginx Server Tokens / Hide Nginx version number

It is good for security reasons to hide the server tokens / hide the Nginx version number, especially if you run some outdated version of Nginx. This is very easy to do: just set server_tokens off in the http, server or location section, like this:
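
## Hide Nginx version number ##
server_tokens off;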


Nginx Request / Upload Max Body Size (client_max_body_size)

If you want to allow users to upload something, or to upload something yourself over HTTP, then you may need to increase the maximum post size. This is done with the client_max_body_size directive, which goes in the http, server or location section. By default it's 1 MB, but it can be set, for example, to 20 MB, and the buffer size can be increased as well with the following configuration:

client_max_body_size 20m;
client_body_buffer_size 128k;

If you get the following error, then you know that client_max_body_size is too low:
“Request Entity Too Large” (413)


Nginx Cache Control for Static Files (Browser Cache Control Directives)

Browser caching is important if you want to save resources and bandwidth. It's easy to set up with Nginx; the following is a very basic setup where logging (access log and not-found log) is turned off and the expires headers are set to 360 days.

location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    access_log        off;
    log_not_found     off;
    expires           360d;
}

If you want more complicated headers, or different expiration times per file type, you can configure those separately.
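
A minimal sketch of per-type expiration times (the grouping and the values here are only illustrative):

## Images can usually be cached for a long time ##
location ~* \.(jpg|jpeg|gif|png|ico)$ {
    expires 180d;
}

## CSS and JavaScript may change more often ##
location ~* \.(css|js)$ {
    expires 30d;
}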


Nginx Pass PHP requests to PHP-FPM

Here you can use the default TCP/IP stack or a Unix socket connection directly. You also have to set PHP-FPM to listen on exactly the same ip:port or Unix socket (with a Unix socket the socket permissions also have to be right). The default setup is to use ip:port (127.0.0.1:9000), and of course you can change which IPs and ports PHP-FPM listens on. Here is a very basic configuration with a Unix socket example commented out:

# Pass PHP scripts to PHP-FPM
location ~* \.php$ {
    fastcgi_index   index.php;
    fastcgi_pass    127.0.0.1:9000;
    #fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
    include         fastcgi_params;
    fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;
    fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;
}

It's also possible to run PHP-FPM on one server and Nginx on another.
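
In that case fastcgi_pass simply points at the backend host instead of localhost; a minimal sketch (the IP address is only an example, and PHP-FPM on the backend must listen on that address and allow the Nginx host in listen.allowed_clients):

## PHP-FPM running on a separate backend server ##
fastcgi_pass   192.168.1.10:9000;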


Prevent (deny) Access to Hidden Files with Nginx

It's very common that the server root or other public directories contain hidden files, which start with a dot (.) and are normally not intended for site users. Public directories can contain version control files and directories like .svn, IDE properties files and .htaccess files. The following denies access and turns off logging for all hidden files.

location ~ /\. {
    access_log off;
    log_not_found off; 
    deny all;
}

PHP-FPM Configuration Tips and Tricks


PHP-FPM Configuration files

Normally the PHP-FPM configuration lives in the /etc/php-fpm.conf file and the /etc/php-fpm.d path. This is normally an excellent starting point, and all pool configs go into the /etc/php-fpm.d directory. You need the following include line in your php-fpm.conf file:

include=/etc/php-fpm.d/*.conf

PHP-FPM Global Configuration Tweaks

Set up emergency_restart_threshold, emergency_restart_interval and process_control_timeout. These options are disabled by default, but I think it's better to use them, for example like this:

emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

What does this mean? If 10 PHP-FPM child processes exit with SIGSEGV or SIGBUS within 1 minute, then PHP-FPM restarts automatically. The configuration also sets a 10 second time limit for child processes to wait for a reaction on signals from the master.
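
After changing the global settings you can test the configuration and reload PHP-FPM; a minimal sketch (the service name varies by distribution, so treat it as an example):

## Test the PHP-FPM configuration and reload ##
php-fpm -t
service php-fpm reload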


PHP-FPM Pools Configuration

With PHP-FPM it's possible to use different pools for different sites, allocate resources very precisely, and even use different users and groups for every pool. The following is just an example configuration file structure for PHP-FPM pools for three different sites (or actually three different parts of the same site):
/etc/php-fpm.d/site.conf
/etc/php-fpm.d/blog.conf
/etc/php-fpm.d/forums.conf

And here are example configurations for every pool:
/etc/php-fpm.d/site.conf

[site]
listen = 127.0.0.1:9000
user = site
group = site
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/slowlog-site.log
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 5
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200
listen.backlog = -1
pm.status_path = /status
request_terminate_timeout = 120s
rlimit_files = 131072
rlimit_core = unlimited
catch_workers_output = yes
env[HOSTNAME] = $HOSTNAME
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

/etc/php-fpm.d/blog.conf

[blog]
listen = 127.0.0.1:9001
user = blog
group = blog
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/slowlog-blog.log
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 4
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 200
listen.backlog = -1
pm.status_path = /status
request_terminate_timeout = 120s
rlimit_files = 131072
rlimit_core = unlimited
catch_workers_output = yes
env[HOSTNAME] = $HOSTNAME
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

/etc/php-fpm.d/forums.conf

[forums]
listen = 127.0.0.1:9002
user = forums
group = forums
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/slowlog-forums.log
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 400
listen.backlog = -1
pm.status_path = /status
request_terminate_timeout = 120s
rlimit_files = 131072
rlimit_core = unlimited
catch_workers_output = yes
env[HOSTNAME] = $HOSTNAME
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

So this is just an example of how to configure multiple pools of different sizes.
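
On the Nginx side, each virtual host then passes PHP requests to the port of its own pool. A minimal sketch for the blog pool (the server_name and root are only illustrative):

## /etc/nginx/sites-available/blog (illustrative) ##
server {
    listen       80;
    server_name  blog.example.com;
    root         /var/www/blog;

    # Pass PHP scripts to the [blog] pool (listens on 127.0.0.1:9001)
    location ~* \.php$ {
        fastcgi_index   index.php;
        fastcgi_pass    127.0.0.1:9001;
        include         fastcgi_params;
        fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;
        fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;
    }
}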


PHP-FPM Pool Process Manager (pm) Configuration

The best way to use the PHP-FPM process manager is dynamic process management, so PHP-FPM processes are started only when needed. This is almost the same kind of setup as the Nginx worker_processes and worker_connections setup: very high values do not necessarily mean anything good. Every process eats memory, and of course if a site has very high traffic and the server has lots of memory then higher values are the right choice, but on servers like VPSes (Virtual Private Servers) memory is normally limited to 256 MB, 512 MB or 1024 MB. That amount of RAM is enough to handle even very high traffic (even dozens of requests per second), if it's used wisely.

It's good to test how many PHP-FPM processes a server can handle comfortably: first start Nginx and PHP-FPM and load some PHP pages, preferably the heaviest ones. Then check the memory usage per PHP-FPM process, for example with the Linux top or htop command. Let's assume that the server has 512 MB of memory, 220 MB of which can be used for PHP-FPM, and every process uses 24 MB of RAM (some huge content management system with plugins can easily use 20-40 MB per PHP page request, or even more). Then simply calculate the server's max_children value: 220 / 24 = 9.17
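
As a rough sketch, you could also average the resident memory of the running PHP-FPM workers from the command line (the process name php-fpm is an assumption; on some distributions it is php5-fpm):

## Average RSS of PHP-FPM worker processes in MB ##
ps -C php-fpm -o rss= | awk '{ sum += $1; n++ } END { printf "avg %.1f MB over %d processes\n", sum/n/1024, n }'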

So a good pm.max_children value is 9. This is based on just a quick average, and it may turn out differently once you see the memory usage per process over a longer period. After this quick testing it's much easier to set the pm.start_servers, pm.min_spare_servers and pm.max_spare_servers values.

A final example configuration could be the following:

pm.max_children = 9
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200

Max requests per process is unlimited by default, but it's good to set some low value, like 200, to avoid memory issues. This kind of setup can handle a large number of requests, even if the numbers seem small.
