From 62ce6e0c6acfe7be64754e12c63d980d3416c438 Mon Sep 17 00:00:00 2001
From: klzgrad
Date: Sun, 18 Aug 2024 10:53:20 +0800
Subject: [PATCH] Updated Performance Tuning (markdown)

---
 Performance-Tuning.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Performance-Tuning.md b/Performance-Tuning.md
index 66f5fd4..3f8854a 100644
--- a/Performance-Tuning.md
+++ b/Performance-Tuning.md
@@ -4,7 +4,9 @@ BDP (byte) = Link speed (bit/s) * RTT (s) / 8 (bit/byte).
 
 Under the default effect of `net.ipv4.tcp_adv_win_scale`, BDP * 2 should be a good setting for maximum receive buffer size.
 
-Example: Assuming 1Gbps link with 256ms RTT, it's a 32MiB window size requiring 64MiB buffer size. Add `net.ipv4.tcp_rmem = 4096 131072 67108864` to client-side sysctl.conf, `net.ipv4.tcp_wmem = 4096 131072 67108864` to server-side sysctl.conf.
+Example: Assuming 1Gbps link with 256ms RTT, it's a 32MiB maximum window size requiring 64MiB maximum buffer size. Add `net.ipv4.tcp_rmem = 4096 131072 67108864` to client-side sysctl.conf, `net.ipv4.tcp_wmem = 4096 131072 67108864` to server-side sysctl.conf.
+
+BBR should be able to reduce the buffer size to reduce bufferbloat.
 
 * https://blog.cloudflare.com/the-story-of-one-latency-spike/
 * https://blog.cloudflare.com/optimizing-tcp-for-high-throughput-and-low-latency
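
For reference, a minimal Python sketch of the BDP arithmetic the patch describes. The helper names `bdp_bytes` and `suggested_max_buffer_bytes` are illustrative only, not part of the wiki page or any tool:

```python
def bdp_bytes(link_speed_bps: float, rtt_s: float) -> int:
    """BDP (byte) = Link speed (bit/s) * RTT (s) / 8 (bit/byte)."""
    return int(link_speed_bps * rtt_s / 8)


def suggested_max_buffer_bytes(link_speed_bps: float, rtt_s: float) -> int:
    """BDP * 2, per the note on the default net.ipv4.tcp_adv_win_scale effect."""
    return 2 * bdp_bytes(link_speed_bps, rtt_s)


if __name__ == "__main__":
    # The patch's example: 1 Gbps link with 256 ms RTT.
    bdp = bdp_bytes(1e9, 0.256)                   # 32,000,000 bytes (~32 MB window)
    buf = suggested_max_buffer_bytes(1e9, 0.256)  # 64,000,000 bytes (~64 MB buffer)
    print(f"BDP: {bdp} bytes, suggested tcp_rmem/tcp_wmem maximum: {buf} bytes")
```

The sysctl maximum in the patch, 67108864, is exactly 64 MiB, i.e. the computed ~64 MB figure rounded up to a power-of-two boundary.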