Nginx Tutorial #2: Performance

Posted by oschina on 2018/01/22 15:06 (19 paragraphs; translation completed 01-26)

Hello! Sharing is caring, so we'd love to share another piece of knowledge with you. We have prepared a three-part Nginx tutorial. If you already know something about Nginx, or you'd just like to expand your experience and understanding, this is the perfect place for you!

We will tell you how Nginx works, what concepts are behind it, how you can optimize it to boost your app's performance, and how to set it up and have it up and running.

This tutorial will have three parts:

  • Basic concepts - where you get to know the difference between a directive and a context, the inheritance model, and the order in which Nginx picks server blocks and locations.

  • Performance - tips and tricks to improve speed. Here we will discuss gzip, caching, buffers, and timeouts.

  • SSL setup - set up configuration to serve content through HTTPS.

We aimed to create a series in which you can easily find the proper configuration for a particular topic (like gzip, SSL, etc.), or simply read it all the way through. For the best learning experience, we suggest setting up Nginx on your own machine and experimenting with it.

tcp_nodelay, tcp_nopush, and sendfile

tcp_nodelay

In the early days of TCP, engineers were facing the danger of congestion collapse. Quite a few solutions emerged to prevent it, and one of them was an algorithm proposed by John Nagle.

Nagle's algorithm aims to prevent the network from being overwhelmed by a great number of small packets. It does not interfere with full-size TCP packets (Maximum Segment Size, or MSS for short), only with packets smaller than the MSS. Those packets will be transmitted only once the receiver has sent back acknowledgments (ACKs) for all previous packets. While waiting, the sender can buffer more data.

if package.size >= MSS.size
  send(package)
elsif acks.all_received?
  send(package)
else
  # accumulate data
end

During that time, another proposal emerged: the Delayed ACK.

In TCP communication, we send data and receive acknowledgments (ACKs), which tell us that the data was delivered successfully.

Delayed ACK tries to resolve the issue of the wire being flooded by a massive number of ACK packets. To cut that number down, the receiver waits for some data to send back to the sender, and piggybacks the ACK on that data. If there is no data to send back, the ACK must be sent at least for every 2 * MSS of data received, or every 200 - 500 ms (in case we are no longer receiving packets).

if packages.any?
  send
elsif last_ack_sent_more_than_2MSS_ago? || 200_ms_timer.finished?
  send
else
  # wait
end

As you may have noticed, this can lead to a temporary deadlock on a persistent connection. Let's reproduce it!

Assumptions:

  • the initial congestion window equals 2. The congestion window is part of another TCP mechanism called Slow Start. The details are not important right now; just keep in mind that it restricts how many packets can be sent at once. In the first round trip, we are allowed to send 2 MSS-sized packets; in the second: 4; in the third: 8, and so on.

  • 4 buffered packets are waiting to be sent: A, B, C, D

  • A, B, C are full-size (MSS) packets

  • D is a small packet

Scenario:

  • Due to the initial congestion window, the sender is allowed to transmit two packets: A and B.

  • The receiver, upon getting both packets, sends an ACK.

  • The sender transmits packet C. However, Nagle's algorithm holds it back from sending D (the packet is too small; wait for the ACK for C).

  • On the receiver's side, Delayed ACK holds back the ACK (which is sent for every second packet, or after 200 ms).

  • After 200 ms, the receiver sends the ACK for packet C.

  • The sender receives the ACK and sends packet D.
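
The timeline above can be sketched numerically. This is an illustrative back-of-the-envelope model, not a packet-level simulation; the 10 ms round-trip time is an assumption, while the 200 ms delayed-ACK timeout is the part that dominates:

```python
# Rough timeline of the exchange above. RTT_MS is an assumed
# round-trip time; the delayed-ACK timeout dominates the total.
RTT_MS = 10
DELAYED_ACK_TIMEOUT_MS = 200

def exchange_duration_ms():
    t = 0
    t += RTT_MS / 2              # A and B travel to the receiver (cwnd = 2)
    t += RTT_MS / 2              # two full segments -> ACK travels back at once
    t += RTT_MS / 2              # C travels; Nagle holds small D until C is ACKed
    t += DELAYED_ACK_TIMEOUT_MS  # one unacked segment, no reply data: timer fires
    t += RTT_MS / 2              # ACK for C travels back
    t += RTT_MS / 2              # Nagle releases D
    return t

print(exchange_duration_ms())  # 225.0 -- of which 200 ms is pure waiting
```

With these numbers, 200 of the 225 ms is spent waiting on the delayed-ACK timer, which is exactly the lag the next paragraph talks about.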

During this exchange, a 200 ms lag was introduced due to the deadlock between Nagle's algorithm and Delayed ACK.

Nagle's algorithm was a true savior in its time, and it still provides great value. However, in most cases we won't need it for our website, so it can safely be turned off by adding the TCP_NODELAY flag.

tcp_nodelay on;     # sets TCP_NODELAY flag, used on keep-alive connections
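
The directive maps straight onto the TCP_NODELAY socket option; for illustration, this is what setting it looks like on a plain socket in Python (not Nginx's code, just the same flag):

```python
import socket

# Nginx's `tcp_nodelay on;` sets this same option on its connections.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle's algorithm
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY:", nodelay)  # non-zero once set
s.close()
```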

Enjoy your 200 ms gain!

For some nitpicky details, I encourage you to read this great paper.

sendfile

Normally, when a file needs to be sent, the following steps are required:

  • malloc(3) - allocate a local buffer for storing the object data

  • read(2) - retrieve and copy the object into the local buffer

  • write(2) - copy the object from the local buffer into the socket buffer


This involves two context switches (read, write) and makes an unnecessary second copy of the same object. As you can see, it is not the optimal way. Thankfully, there is another system call that improves sending files, and it's called (surprise, surprise): sendfile(2). This call retrieves the object into the file cache and passes the pointers (without copying the whole object) straight to the socket descriptor. Netflix states that using sendfile(2) increased their network throughput from 6 Gbps to 30 Gbps.
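
The same syscall is exposed to user programs, so the zero-copy path is easy to try out. A minimal Python sketch over a loopback connection (illustrative only - this is not how Nginx itself is written):

```python
import os
import socket
import tempfile
import threading

# Prepare a small file to serve.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello, sendfile!")
tmp.close()

# sendfile(2) needs a socket as the destination; use a loopback pair.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

received = []
def reader():
    conn, _ = srv.accept()
    data = b""
    while chunk := conn.recv(1024):
        data += chunk
    received.append(data)
    conn.close()

t = threading.Thread(target=reader)
t.start()

cli = socket.create_connection(srv.getsockname())
with open(tmp.name, "rb") as f:
    # The kernel moves file pages straight into the socket buffer --
    # no userspace read()/write() round-trip, no second copy.
    sent = os.sendfile(cli.fileno(), f.fileno(), 0, os.path.getsize(tmp.name))
cli.close()
t.join()
srv.close()
os.unlink(tmp.name)

print(sent, received[0])
```

The `os.sendfile` call replaces the whole malloc/read/write sequence from the list above with a single syscall.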

However, sendfile(2) has some caveats:

  • it does not work with UNIX sockets (e.g. when serving static files through your upstream server)

  • it can perform differently depending on the operating system (more here)


To turn this on in Nginx:

sendfile on;

tcp_nopush

tcp_nopush is the opposite of tcp_nodelay. Instead of pushing packets out as fast as possible, it aims to optimize the amount of data sent at once.

It forces packets to wait until they reach their maximum size (MSS) before being sent to the client. This directive only works when sendfile is on.

sendfile on;
tcp_nopush on;

It may appear that tcp_nopush and tcp_nodelay are mutually exclusive. But if all three directives are turned on, Nginx will:

  • ensure packets are full before sending them to the client

  • for the last packet, remove tcp_nopush - allowing TCP to send it immediately, without the 200 ms delay
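
At the socket level, tcp_nopush corresponds to the TCP_CORK option on Linux (TCP_NOPUSH on FreeBSD). A hedged sketch of the cork/uncork pattern, assuming Linux:

```python
import socket

# On Linux, Nginx's tcp_nopush maps to the TCP_CORK socket option.
# While the cork is set, only full (MSS-sized) segments leave the socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
corked = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CORK)
# ... send() response headers and body here; the kernel coalesces them ...
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)  # uncork: flush the remainder
s.close()
print("corked was:", corked)
```

The final "uncork" is the socket-level analogue of the last-packet behavior described above.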

How many processes should I have?

Worker processes

The worker_processes directive defines how many worker processes should be run. By default, this value is set to 1. The safest setting is to use the number of cores, by passing the auto option.

Still, due to the Nginx architecture, which handles requests blazingly fast, we probably won't use more than 2 - 4 processes at a time (unless you are hosting Facebook or doing some CPU-intensive stuff inside Nginx).

worker_processes auto;

Worker connections

The directive directly tied to worker_processes is worker_connections. It specifies how many connections can be opened by a worker process at once. This number includes all connections (e.g. connections with proxied servers), not only connections with clients. Also, it is worth keeping in mind that one client can open multiple connections to fetch other resources simultaneously.

worker_connections 1024;

Open files limit

“Everything is a file” in Unix-based systems. It means that documents, directories, pipes, and even sockets are files. The system limits how many files a process can have open at once. To check the limits:

ulimit -Sn      # soft limit
ulimit -Hn      # hard limit

This system limit must be tweaked in accordance with worker_connections. Any incoming connection opens at least one file (usually two: the connection socket plus either a backend connection socket or a static file on disk). So it is safe to set this value to worker_connections * 2. Fortunately, Nginx provides an option to increase this system limit from within the Nginx config. To do so, add the worker_rlimit_nofile directive with the proper number and reload Nginx.
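
The same limits can be read programmatically, which is handy for a quick sanity check against the worker_connections * 2 rule of thumb (the 1024 below is just the example value used in this tutorial):

```python
import resource

# Programmatic equivalent of `ulimit -Sn` / `ulimit -Hn`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Rule of thumb from above: each connection may need two descriptors.
worker_connections = 1024  # example value from this tutorial
print("soft limit covers worker_connections * 2:", soft >= worker_connections * 2)
```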

worker_rlimit_nofile 2048;

Config

worker_processes auto;     # Defines the number of worker processes.
worker_rlimit_nofile 2048; # Changes the limit on the maximum number of open files (RLIMIT_NOFILE) for worker processes.
worker_connections 1024;   # Sets the maximum number of simultaneous connections that can be opened by a worker process.
