Battle Ready Nginx - An Optimization Guide


Most setup guides for Nginx tell you the basics - apt-get a package, modify a few lines here and there, and you’ve got a web server! And, in most cases, a vanilla nginx install will work just fine for serving your website. However, if you’re REALLY trying to squeeze performance out of nginx, you’ll have to go a few steps further. In this guide, I’ll explain which settings in nginx can be fine-tuned in order to optimize performance for handling a large number of clients. As a note, this isn’t a comprehensive guide to fine-tuning. It’s a brief overview of some settings that can be tuned in order to improve performance. Your mileage may vary.

Basic (Optimized) Configuration

The only file we’ll be modifying is your nginx.conf, which holds all your settings for nginx in different modules. You should be able to find nginx.conf in the /etc/nginx directory on your server. First, we’ll talk about some of the global settings, then go through each module in the file and talk about which settings will get you the best performance for a large number of clients, and why they’ll increase your performance. A completed config file can be found at the end of this post.


Top Level Configs

Nginx has a few top level configuration settings that sit outside the modules in your nginx.conf file.

user www-data;
pid /var/run/nginx.pid;

worker_processes auto;

worker_rlimit_nofile 100000;

user and pid should be set by default - we won’t modify them, since changing them won’t do anything for us.

worker_processes defines the number of worker processes that nginx should use when serving your website. The optimal value depends on many factors including (but not limited to) the number of CPU cores, the number of hard drives that store data, and load pattern. When in doubt, setting it to the number of available CPU cores would be a good start (the value “auto” will try to autodetect it).

worker_rlimit_nofile changes the limit on the maximum number of open files for worker processes. If this isn’t set, your OS will impose its own limit. Chances are your OS and nginx can handle more than “ulimit -a” reports, so we’ll set this high so that nginx never runs into a “too many open files” problem.


Events Module

The events module contains all the settings for processing connections in nginx.

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

worker_connections sets the maximum number of simultaneous connections that can be opened by a worker process. Since we bumped up worker_rlimit_nofile, we can safely set this pretty high.

Keep in mind that the maximum number of clients is also limited by the number of socket connections available on your system (~64k), so setting this ridiculously high won’t benefit us.
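
To see how these numbers fit together, here is a rough back-of-the-envelope sketch (the figures are illustrative, not recommendations): the theoretical ceiling on concurrent clients is roughly worker_processes * worker_connections, and worker_rlimit_nofile should be at least as large as worker_connections per worker - more if you’re proxying, since each proxied connection needs two file descriptors.

worker_processes auto;        # assume this resolves to 4 on a quad-core box
worker_rlimit_nofile 100000;  # comfortably above worker_connections

events {
    # 4 workers * 2048 connections ~= 8192 simultaneous clients at most
    worker_connections 2048;
}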

multi_accept tells nginx to accept as many connections as possible after getting a notification about a new connection

use sets which polling method we should use for multiplexing clients onto threads. If you’re using Linux 2.6+, you should use epoll. If you’re using *BSD, you should use kqueue. Wanna know more about event polling? Let Wikipedia be your guide (warning: a neckbeard and an operating systems course might be needed to understand everything)

(it’s worth noting that if you don’t tell nginx which polling method to use, it’ll choose the best one for your OS)
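
For example, a minimal events block for a *BSD host might look like the sketch below (kqueue is an assumption about your platform; on Linux 2.6+ you would keep epoll, and omitting the use directive entirely lets nginx pick for itself):

events {
    worker_connections 2048;
    multi_accept on;
    use kqueue;    # *BSD counterpart of epoll
}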


HTTP Module

The HTTP module controls all the core features of nginx’s http processing. Since there are quite a few settings in here, we’ll take this one in pieces. All these settings should be in the http module, even though it won’t be specifically noted as such in the snippets.

http {

    server_tokens off;

    sendfile on;

    tcp_nopush on;
    tcp_nodelay on;

    ...
}

server_tokens doesn’t speed up our performance any, but it turns off nginx version numbers on error pages, which is a good idea for security.

sendfile enables the use of sendfile(). sendfile() copies data between the disk and a TCP socket (or any two file descriptors). Pre-sendfile, to transfer such data we would allocate a data buffer in user space, read() to copy the data from a file into the buffer, and then write() the content of the buffer to the network. sendfile() reads the data immediately from the disk into the OS cache. Because this copying is done within the kernel, sendfile() is more efficient than the combination of read() and write() and the context switching/cache thrashing that comes along with them (read more about sendfile)


tcp_nopush tells nginx to send the response headers and the beginning of a file in one packet, rather than pushing them out one piece at a time (it only takes effect when sendfile is on)

tcp_nodelay tells nginx not to buffer data, but to send it immediately in small, short bursts - it should only be set for applications that send frequent small bursts of information without getting an immediate response, where timely delivery of data is required

access_log off;
error_log /var/log/nginx/error.log crit;

access_log sets whether or not nginx will store access logs. Turning this off increases speed by reducing disk IO (aka, YOLO)

error_log tells nginx it should only log critical errors
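
If you can’t afford to drop the access log entirely, a common middle ground - shown here only as a sketch, with a placeholder path, and assuming your nginx build supports the buffer and flush parameters - is to keep the log but buffer writes, so disk IO happens in larger, less frequent chunks:

# write log entries in 16k chunks, or at least every 5 seconds
access_log /var/log/nginx/access.log combined buffer=16k flush=5s;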

keepalive_timeout 10;

client_header_timeout 10;
client_body_timeout 10;

reset_timedout_connection on;
send_timeout 10;

keepalive_timeout assigns the timeout for keep-alive connections with the client. The server will close connections after this time. We’ll set it low to keep our workers from being busy for too long.

client_header_timeout and client_body_timeout set the timeout for the request header and request body (respectively). We’ll set these low too.


reset_timedout_connection tells nginx to close connections from non-responding clients. This frees up all the memory associated with those clients.

send_timeout specifies the response timeout to the client. This timeout does not apply to the entire transfer, but between two subsequent client-read operations. If the client has not read any data for this amount of time, then nginx shuts down the connection.

limit_conn_zone $binary_remote_addr zone=addr:5m;
limit_conn addr 100;

limit_conn_zone sets parameters for a shared memory zone that will keep states for various keys (such as current number of connections). 5m is 5 megabytes, and should be large enough to store (32k * 5) 32-byte states or (16k * 5) 64-byte states.

limit_conn sets the maximum allowed number of connections for a given key value. The key is addr, and our value is 100, so we’ll only allow 100 concurrent connections per IP address.
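
The zone only needs to be declared once at the http level; the limit itself can also be applied more selectively, for example to a single heavy location (the /downloads/ path below is purely illustrative):

http {
    limit_conn_zone $binary_remote_addr zone=addr:5m;

    server {
        location /downloads/ {
            # cap each IP at 10 simultaneous downloads instead of limiting the whole site
            limit_conn addr 10;
        }
    }
}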

include /etc/nginx/mime.types;
default_type text/html;
charset UTF-8;

include is just a directive to include the contents of another file in the current file. Here, we use it to load in a list of MIME types to be used later.

default_type sets the default MIME-type to be used for files

charset sets the default charset to be included in our header


The performance improvement these two options give is explained in this great WebMasters StackExchange question.

gzip on;
gzip_disable "msie6";

# gzip_static on;
gzip_proxied any;
gzip_min_length 1000;
gzip_comp_level 4;

gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

gzip tells nginx to gzip the data we’re sending. This will reduce the amount of data we need to send.

gzip_disable disables gzip for specific clients. We set it to match IE6 and older, due to compatibility issues.

gzip_static tells nginx to look for a pre-gzipped asset with the same name before gzipping the asset itself. This requires you to pre-compress your files (it’s commented out in this example), but it lets you use the highest compression possible, and nginx no longer has to compress those files on the fly (read more about gzip_static here)
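
As a sketch of how that plays out (the /static/ location and file names are assumptions, not from the article): with gzip_static on, a gzip-capable client requesting /static/app.css gets the pre-built /static/app.css.gz, and nginx never compresses anything at request time.

location /static/ {
    # serve app.css.gz (built ahead of time, e.g. at maximum compression)
    # whenever a client that accepts gzip asks for app.css
    gzip_static on;
}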

gzip_proxied allows or disallows compression of responses to proxied requests, based on the request and response. We’ll set it to any, so all proxied responses get gzipped as well.

gzip_min_length sets the minimum number of bytes necessary for us to gzip data. If a request is under 1000 bytes, we won’t bother gzipping it, since gzipping does slow down the overall process of handling a request.

gzip_comp_level sets the compression level for our data. These levels can be anywhere from 1-9, 9 being the slowest but most compressed. We’ll set it to 4, which is a good middle ground.

gzip_types sets the types of data to gzip. Some common ones are listed above, but you can add more.
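
For instance, if you also serve SVG images or use the standard application/javascript type, you might extend the list like this (the extra types are suggestions, not part of the original config):

gzip_types text/plain text/css application/json application/javascript
           application/x-javascript text/xml application/xml
           application/xml+rss text/javascript image/svg+xml;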

# caching information about the file descriptors of frequently accessed files
# can boost performance, but you need to test these values
open_file_cache max=100000 inactive=20s; 
open_file_cache_valid 30s; 
open_file_cache_min_uses 2;
open_file_cache_errors on;

##
# Virtual Host Configs
# aka our settings for specific servers
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
open_file_cache both turns on the cache and specifies the maximum number of entries in it, along with how long to cache them. We’ll set our maximum to a relatively high number, and entries will be dropped from the cache if they haven’t been accessed for 20 seconds.

open_file_cache_valid specifies the interval after which nginx re-checks the validity of the information held in open_file_cache

open_file_cache_min_uses defines the minimum number of times a file must be accessed within the inactive period of open_file_cache for its entry to stay in the cache

open_file_cache_errors specifies whether or not to cache errors (such as “file not found”) when searching for a file

Include is again used to add some files to our config. We’re including our server modules, defined in a different file. If your server modules aren’t at these locations, you should modify this line to point at the correct location.
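
To make that concrete, here is a minimal, illustrative server block of the kind that would live in /etc/nginx/sites-enabled/ and be pulled in by the include above (the domain name and root path are placeholders):

server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}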


The full config file

user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    access_log off;
    error_log /var/log/nginx/error.log crit;

    keepalive_timeout 10;
    client_header_timeout 10;
    client_body_timeout 10;
    reset_timedout_connection on;
    send_timeout 10;

    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_conn addr 100;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    gzip on;
    gzip_disable "msie6";
    gzip_proxied any;
    gzip_min_length 1000;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    open_file_cache max=100000 inactive=20s; 
    open_file_cache_valid 30s; 
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
After editing your config, make sure to restart nginx so that it picks up the new configuration file:

sudo service nginx restart

Takeaway

There we go! Your web server is now ready to do battle with the army of visitors that previously plagued you. This is by no means the only way you can go about speeding up your website. I’ll be writing more posts that explain other ways to speed up your website soon.
