### Window sizes for large bandwidth-delay links
The TCP window limits throughput over high-latency links (throughput <= window size / RTT). In the Linux kernel, the window sizes are auto-tuned by the congestion control algorithm and limited by the maximum window sizes, which are calculated from `tcp_rmem`, `tcp_wmem`, and `tcp_adv_win_scale`. Under the default effect of `tcp_adv_win_scale`, receive window = `tcp_rmem` (max) / 2.

The default values of `tcp_rmem` (max) and `tcp_wmem` (max) are calculated from the RAM size and are always smaller than 6MB and 4MB, respectively. Thus the default maximum receive and send windows are around 4MB and 2MB, which are enough for most cases but too small for fat links with e.g. 1Gbps bandwidth and >16ms RTT.
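
As a rough illustration of why the defaults are too small for such links, here is a back-of-the-envelope check (a sketch only; the ~4MB window is the default figure above and the 256ms RTT is the example value used below):

```sh
# Throughput ceiling imposed by the default receive window: throughput <= window / RTT.
WINDOW=$(( 4 * 1024 * 1024 ))             # assumed default receive window, bytes
RTT_MS=256                                # assumed round-trip time, ms
echo $(( WINDOW * 8 * 1000 / RTT_MS ))    # 131072000 bit/s, i.e. ~131 Mbit/s on a 1Gbps link
```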
The window sizes should be tuned to the actual BDP: BDP (byte) = link speed (bit/s) * RTT (s) / 8 (bit/byte), and under the default effect of `tcp_adv_win_scale` the maximum receive buffer size should be about BDP * 2. Example: assuming a 1Gbps link with 256ms RTT, that is a 32MiB maximum window size requiring a 64MiB maximum buffer size.
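
As a quick check of that arithmetic (a sketch; the link speed and RTT are just the example values above):

```sh
# BDP and suggested maximum buffer size for the example: 1Gbps link, 256ms RTT.
LINK_BPS=1000000000                        # link speed, bit/s
RTT_MS=256                                 # round-trip time, ms
BDP=$(( LINK_BPS * RTT_MS / 1000 / 8 ))    # 32000000 bytes, roughly the 32MiB window above
echo $(( BDP * 2 ))                        # 64000000 bytes, rounded up to 67108864 (64MiB) below
```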

Add to sysctl.conf:
* (Client only) `net.ipv4.tcp_rmem = 4096 131072 67108864`
* (Server only) `net.ipv4.tcp_wmem = 4096 131072 67108864`
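
One way to apply and verify the client-side setting (a sketch; the drop-in file name is arbitrary, and a server would use the `tcp_wmem` line instead):

```sh
# Persist the setting in a sysctl drop-in file, then reload and verify.
echo 'net.ipv4.tcp_rmem = 4096 131072 67108864' | sudo tee /etc/sysctl.d/90-tcp-window.conf
sudo sysctl --system          # reload settings from all sysctl configuration files
sysctl net.ipv4.tcp_rmem      # print the running values
```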

BBR should be able to auto-tune the window size to reduce bufferbloat. Assuming large downloads and small uploads, client-side `net.ipv4.tcp_wmem` and server-side `net.ipv4.tcp_rmem` can be left as default. `net.core.rmem_max` and `net.core.wmem_max` only limit manually set buffer sizes; they are not used in window size auto-tuning and can also be left as default.
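
To inspect the current values of the knobs discussed here (read-only; nothing is changed):

```sh
sysctl net.ipv4.tcp_congestion_control        # reports bbr if BBR is in use
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem    # auto-tuning buffer limits
sysctl net.core.rmem_max net.core.wmem_max    # caps for manually set socket buffers
```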

* https://blog.cloudflare.com/the-story-of-one-latency-spike/ shows that large buffer sizes are not always better.
* https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
* https://blog.cloudflare.com/optimizing-tcp-for-high-throughput-and-low-latency
### Use BBR congestion control