concurrency - Rust vs Go concurrent web servers, why is Rust slower here?
Problem description
I'm trying to benchmark the multi-threaded web server example from the Rust book. For comparison, I built something similar in Go and ran benchmarks with ApacheBench. Even though it's a simple example, the difference is far too large: the Go web server does the same thing 10x faster. Since I expected Rust to be faster or at least on par, I tried several revisions using futures and smol (although my goal was to compare implementations that use only the standard library), but the results were almost identical. Can anyone suggest changes to the Rust implementation to make it faster without using a huge number of threads?
Here is the code I used: https://github.com/deepu105/concurrency-benchmarks
The tokio-http version is the slowest; the other three Rust versions give almost identical results.
Here are the benchmarks:
Rust (with 8 threads; with 100 threads the numbers are closer to Go's):
❯ ab -c 100 -n 1000 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 176 bytes
Concurrency Level: 100
Time taken for tests: 26.027 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 195000 bytes
HTML transferred: 176000 bytes
Requests per second: 38.42 [#/sec] (mean)
Time per request: 2602.703 [ms] (mean)
Time per request: 26.027 [ms] (mean, across all concurrent requests)
Transfer rate: 7.32 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 2.9 1 16
Processing: 4 2304 1082.5 2001 5996
Waiting: 0 2303 1082.7 2001 5996
Total: 4 2307 1082.1 2002 5997
Percentage of the requests served within a certain time (ms)
50% 2002
66% 2008
75% 2018
80% 3984
90% 3997
95% 4002
98% 4005
99% 5983
100% 5997 (longest request)
Go:
ab -c 100 -n 1000 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 174 bytes
Concurrency Level: 100
Time taken for tests: 2.102 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 291000 bytes
HTML transferred: 174000 bytes
Requests per second: 475.84 [#/sec] (mean)
Time per request: 210.156 [ms] (mean)
Time per request: 2.102 [ms] (mean, across all concurrent requests)
Transfer rate: 135.22 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 1.4 2 5
Processing: 0 203 599.8 3 2008
Waiting: 0 202 600.0 2 2008
Total: 0 205 599.8 5 2013
Percentage of the requests served within a certain time (ms)
50% 5
66% 7
75% 8
80% 8
90% 2000
95% 2003
98% 2005
99% 2010
100% 2013 (longest request)
Solution
I only compared your "rustws" version with the Go version. In Go you have unlimited goroutines (even though you pin them all to a single CPU core), whereas in rustws you create a thread pool with 8 threads.
Since your request handler sleeps 2 seconds on every 10th request, you cap the rustws version at 40 requests per second: 8 threads × 10 requests per 2-second sleep cycle = 80 requests every 2 seconds, which is exactly what you see in the ab results. Go doesn't suffer from this artificial bottleneck, so it shows you the maximum throughput on a single CPU core.
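The effect can be reproduced with a small std-only sketch (not the actual rustws code; the pool size, job count, and 50 ms sleep are illustrative stand-ins for the 2-second handler sleep): a fixed pool of 8 threads draining a shared queue of sleeping jobs finishes in rounds of pool-size, while spawning one thread per job, which is roughly how a goroutine per request behaves, overlaps all the sleeps.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

// Time how long `jobs` sleeps of `ms` milliseconds take on a fixed pool of `workers` threads.
fn pool_elapsed(workers: usize, jobs: usize, ms: u64) -> Duration {
    let (tx, rx) = mpsc::channel::<()>();
    let rx = Arc::new(Mutex::new(rx));
    let start = Instant::now();
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || {
                // Each worker pulls jobs until the queue is closed; the lock guard
                // is a temporary in the condition, so it is released before sleeping.
                while rx.lock().unwrap().recv().is_ok() {
                    thread::sleep(Duration::from_millis(ms));
                }
            })
        })
        .collect();
    for _ in 0..jobs {
        tx.send(()).unwrap();
    }
    drop(tx); // close the queue so the workers exit
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}

// Time the same jobs with one thread each, analogous to a goroutine per request.
fn spawn_elapsed(jobs: usize, ms: u64) -> Duration {
    let start = Instant::now();
    let handles: Vec<_> = (0..jobs)
        .map(|_| thread::spawn(move || thread::sleep(Duration::from_millis(ms))))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    println!("pool of 8:      {:?}", pool_elapsed(8, 80, 50)); // roughly (80 / 8) * 50 ms
    println!("thread per job: {:?}", spawn_elapsed(80, 50));   // roughly 50 ms
}
```

With 80 jobs on 8 workers, the pool needs 10 sequential 50 ms rounds (~500 ms), while the thread-per-job run finishes in roughly one sleep, which mirrors why rustws is capped while Go is not.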