Load-Testing the Netty NIO Framework – Long Connections

Posted by 红薯 on 2010/05/25 09:11
Test setup
  1. Raise ulimit -n; otherwise the file-descriptor limit caps how many NIO connections can be opened (a quick way to verify the effective limit from inside the JVM is sketched below).
  2. Prepare 4 machines (1 Netty server, 3 load generators).
  3. Use Apache's ab as the load-testing tool.
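A minimal sketch for checking the descriptor limit from inside the JVM, assuming a Sun/Oracle JVM on Unix (FdLimitCheck is a made-up name, not part of the original post):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        // On Sun/Oracle JVMs on Unix, the OS MXBean exposes the process
        // file-descriptor limits that ulimit -n controls.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("max file descriptors:  " + os.getMaxFileDescriptorCount());
        System.out.println("open file descriptors: " + os.getOpenFileDescriptorCount());
    }
}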
Getting to work

Benchmark code:

package org.dueam.sample.netty;
 
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.DynamicChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;
import org.jboss.netty.channel.ChannelHandler.Sharable;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class ChatServer {

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            args = new String[] { "9876", "true" };
        }
        // Boss and worker thread pools for the NIO server.
        ChannelFactory factory = new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(), Executors.newCachedThreadPool());

        ServerBootstrap bootstrap = new ServerBootstrap(factory);

        ChatServerHandler handler = new ChatServerHandler();
        ChannelPipeline pipeline = bootstrap.getPipeline();
        pipeline.addLast("chat", handler);

        bootstrap.setOption("child.tcpNoDelay", true);
        bootstrap.setOption("child.keepAlive", true);
        int port = Integer.valueOf(args[0]);
        bootstrap.bind(new InetSocketAddress(port));

        // Optionally start the thread that periodically pushes messages
        // to a random subset of the connected clients.
        boolean fillChat = "true".equals(args[1]);
        if (fillChat) {
            ChannelManagerThread cmt = new ChannelManagerThread();
            cmt.start();
        }

        // Simple interactive console.
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        while (true) {
            String command = br.readLine();
            if ("dump".equals(command)) {
                System.out.println("Live channels: " + channel.size());
            } else if ("help".equals(command)) {
                System.out.println("Commands:");
                System.out.println("dump: print current status");
                System.out.println("help: this help text");
            }
        }
    }

    final static Random random = new Random();
    static int max = 0;

    // Every 500 ms, pushes a message to roughly 30% of the connected channels.
    static class ChannelManagerThread extends Thread {
        @Override
        public void run() {
            while (true) {
                try {
                    if (max < channel.size()) {
                        max = channel.size();
                        System.out.println("live:" + channel.size());
                    }
                    for (Channel s : channel.values()) {
                        if (random.nextInt(100) > 70) {
                            ChannelBuffer cb = new DynamicChannelBuffer(256);
                            cb.writeBytes("Hey! Someone is looking for you!".getBytes());
                            s.write(cb);
                        }
                    }
                    sleep(500);
                } catch (InterruptedException e) {
                    // ignore and keep running
                }
            }
        }
    }

    // ConcurrentHashMap, not HashMap: the map is mutated by Netty's I/O threads
    // while ChannelManagerThread iterates it, so a plain HashMap is unsafe here.
    final static Map<Integer, Channel> channel = new ConcurrentHashMap<Integer, Channel>();

    static void log(String message) {
        System.out.println(message);
    }

    @Sharable
    static class ChatServerHandler extends SimpleChannelHandler {
        @Override
        public void channelConnected(ChannelHandlerContext ctx,
                ChannelStateEvent e) {
            // Greet the new client and register its channel for the push thread.
            Channel ch = e.getChannel();
            ChannelBuffer cb = new DynamicChannelBuffer(256);
            cb.writeBytes("Hello! There you are!".getBytes());
            ch.write(cb);
            channel.put(e.getChannel().getId(), e.getChannel());
        }

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
            e.getCause().printStackTrace();
            channel.remove(e.getChannel().getId());
            log("remove channel by exception! id:" + e.getChannel().getId());
            e.getChannel().close();
        }

        @Override
        public void channelDisconnected(ChannelHandlerContext ctx,
                ChannelStateEvent e) throws Exception {
            channel.remove(e.getChannel().getId());
            log("remove channel by disconnect! id:" + e.getChannel().getId());
        }
    }
}
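To reproduce this: per main above, the first argument is the port and the second toggles the push thread, so java org.dueam.sample.netty.ChatServer 9876 true starts the server; typing dump on its console then prints the live connection count.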

Load generation:

# Raise the request count, concurrency, and time limit; -k uses HTTP keep-alive
# so the connections stay open, and -r keeps ab running on socket errors.
./ab -n 20000 -c 20000 -k -t 999999999 -r http://192.168.216.30:9876/
Results

Memory consumption:

[root@cap216030 ~]# free -k -t -s 10
-- baseline, before starting anything
             total       used       free     shared    buffers     cached
Mem:       4149076     189828    3959248          0      13196      95484
-/+ buffers/cache:      81148    4067928
Swap:      2096472        208    2096264
Total:     6245548     190036    6055512

-- after starting the chat server
             total       used       free     shared    buffers     cached
Mem:       4149076     207236    3941840          0      13216      96244
-/+ buffers/cache:      97776    4051300
Swap:      2096472        208    2096264
Total:     6245548     207444    6038104

-- after 59,471 NIO connections
             total       used       free     shared    buffers     cached
Mem:       4149076     474244    3674832          0      13328      96132
-/+ buffers/cache:     364784    3784292
Swap:      2096472        208    2096264
Total:     6245548     474452    5771096

Conclusions:

  1. Netty NIO comfortably scales to ~60K connections, at a cost of roughly 5 KB of system memory per connection (from the free output above: used memory grew from 207236 KB to 474244 KB across 59471 connections, i.e. (474244 - 207236) / 59471 ≈ 4.5 KB each).
Follow-up test plan
  1. Write a Java client for real-time bidirectional content push (a minimal client sketch follows this list).
  2. Use 100 machines, each running 1,000 threads, to simulate clients for the load test.
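As a starting point for item 1, a minimal Netty 3.x client sketch, not from the original post: ChatClient and its inline handler are illustrative names, and it assumes the ChatServer above is listening.

package org.dueam.sample.netty;

import java.net.InetSocketAddress;
import java.nio.charset.Charset;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

public class ChatClient {

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        int port = args.length > 1 ? Integer.valueOf(args[1]) : 9876;

        ClientBootstrap bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));

        bootstrap.getPipeline().addLast("chat", new SimpleChannelHandler() {
            @Override
            public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                // Print whatever the server pushed, then write something back
                // so the traffic is bidirectional.
                ChannelBuffer in = (ChannelBuffer) e.getMessage();
                System.out.println(in.toString(Charset.forName("UTF-8")));
                e.getChannel().write(ChannelBuffers.copiedBuffer("pong".getBytes()));
            }
        });

        // Long connection: connect once and simply keep the channel open.
        bootstrap.connect(new InetSocketAddress(host, port));
    }
}

To scale this toward item 2, one NioClientSocketChannelFactory would be shared across all connections on a machine, with handlers installed via setPipelineFactory instead of the single shared pipeline used in this sketch.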

Originally posted at http://dueam.org/

Comments:

王全:

红薯, post the ab benchmark results so we can take a look!

红薯:

Quoting 王全 (#2):

红薯, post the ab benchmark results so we can take a look!

Sorry, it's a repost :)

JavaGG:

I wonder how it compares with MINA, Cindy, and the like.

钛元素:

I enabled NIO in Tomcat, but I can't feel any difference at all?

红薯:

Quoting 钛元素 (#5):

I enabled NIO in Tomcat, but I can't feel any difference at all?

It only pays off under heavy traffic. Try load-testing Tomcat against some static JSP pages, without touching the database.

欧德高:

NIO only shows its efficiency when there are many connections but few active ones; on a fast network it does not necessarily have an advantage.

宋威:

Has anyone built a server with Netty and had Android clients connect to it? Our project is also a long-connection B/S project. We originally built it on traditional sockets, but performance was poor and concurrency was low. Could someone send me a server-side Netty framework example? My QQ: 174497550

陌上草: I'm working on a similar application~
邓小峰:

Quoting the answer from "G.Q.F":

NIO only shows its efficiency when there are many connections but few active ones; on a fast network it does not necessarily have an advantage.

Way off. The faster the network, the more efficient NIO gets.

In theory NIO's throughput has no limit: name any throughput and it can deliver it. In practice that's impossible. NIO throughput can be roughly estimated with a simple formula.

Suppose the machine can support 100 threads and network latency is 1 ms:

100 * 1000 / 1 = 100,000 requests/sec

If latency is 0.1 ms, then:

100 * 1000 / 0.1 = 1,000,000 requests/sec

How many connections you can hold depends on how many selectors you have and whether those selectors can still keep up. The better the network, the higher the throughput, and if there is no TCP fragmentation, server-side network latency is theoretically 0.

So the limiting rate is

100 * 1000 / 0 = infinity

That's impossible in practice. Why? Because even at such speeds, the CPU gets massively consumed by endless network I/O interrupts. The published numbers in the hundreds of thousands or millions all use multiple NICs, with the interrupt-handling CPU cost optimized and spread across cores.
邓小峰:

There is also GC, your own selector loop, and data copying consuming CPU, so in practice it cannot be infinite. But precisely because NIO theoretically admits this formula, someone regularly makes another optimization and sets a new record.
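For the record, 邓小峰's back-of-envelope formula as a runnable snippet (a sketch; the class name is made up):

public class NioThroughputEstimate {
    // 邓小峰's estimate: requests/sec = threads * 1000 / latency in ms.
    static double estimate(int threads, double latencyMs) {
        return threads * 1000.0 / latencyMs;
    }

    public static void main(String[] args) {
        System.out.println(estimate(100, 1.0)); // 100,000 req/s at 1 ms latency
        System.out.println(estimate(100, 0.1)); // 1,000,000 req/s at 0.1 ms latency
    }
}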
dhmj2ee:

Quoting 红薯's comment:

Quoting 王全 (#2):

红薯, post the ab benchmark results so we can take a look!

Sorry, it's a repost :)

[michael@michael :jdk1.8.0_91]$ab -n 20000 -c 20000 -k -t 999999999 -r http://localhost:9090/
This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/


Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests




Server Software:        
Server Hostname:        localhost
Server Port:            9090


Document Path:          /
Document Length:        0 bytes


Concurrency Level:      20000
Time taken for tests:   93.800 seconds
Complete requests:      50000
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      0 bytes
HTML transferred:       0 bytes
Requests per second:    533.05 [#/sec] (mean)
Time per request:       37520.132 [ms] (mean)
Time per request:       1.876 [ms] (mean, across all concurrent requests)
Transfer rate:          0.00 [Kbytes/sec] received


Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  338 797.2      0    7016
Processing: 29635 31027 1079.3  30685   34330
Waiting:        0    0   0.0      0       0
Total:      30002 31365 1589.9  30709   37959


Percentage of the requests served within a certain time (ms)
  50%  30709
  66%  30983
  75%  32072
  80%  32679
  90%  34119
  95%  34893
  98%  35289
  99%  35455
 100%  37959 (longest request)
