Posts: 7
Threads: 2
Joined: Jan 2016
Reputation:
0
I switched to tcp_bbr (developed by Google) and FQ with pacing:
modprobe sch_fq
tc qdisc add dev (interface) root fq
# or, to cap the pacing rate:
tc qdisc replace dev (interface) root fq maxrate Ngbit
Runs even more stably...
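If the combo proves stable, it can be made persistent across reboots. A sketch, assuming a sysctl drop-in file (the file name is just a common convention; the two keys mirror the commands above):

```shell
# /etc/sysctl.d/90-bbr.conf -- example file name; load with `sysctl --system`
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
```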
It's nice to see that I'm not alone. I like how BBR works, and with a bit of tuning it can make a connection really slick.
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
20-02-2017, 12:41 PM
(This post was last modified: 14-01-2018, 02:29 AM by tropic. Edit Reason: important typo fixing)
TCP_BBR seems faster and more stable than Westwood+ in bottleneck scenarios, but it has three main disadvantages imho: the first is the aggressiveness of its congestion method, the second is the increased latency measurements, and the third is the FQ qdisc 'requirement' helping in the background (probably better if ECN/ACK are enabled at the same time). Westwood+ usually works fine with or without fq_codel/ECN/ACK, and it is more robust to any kind of 'network' sysctl.conf misconfiguration... Anyway, beyond these short considerations, if you are looking for network latency and gentleness control, the choice should be Westwood+. On the other side, if you just want speed at all costs, TCP_BBR sounds pretty good. I added this idea for development for Xan's evaluation. Thank you both!
http://blog.cerowrt.org/post/bbrs_basic_beauty/
http://intronetworks.cs.luc.edu/current/...wtcps.html
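For anyone wanting to reproduce this comparison, a minimal sketch of switching algorithms and toggling ECN at runtime (configuration commands, root required; which algorithms are available depends on the kernel config):

```shell
# pick the congestion control for one test run
sysctl -w net.ipv4.tcp_congestion_control=westwood
# ... run the benchmark, then switch:
sysctl -w net.ipv4.tcp_congestion_control=bbr
# enable ECN for the BBR+ECN case (0 = off, 1 = on, 2 = only when requested)
sysctl -w net.ipv4.tcp_ecn=1
```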
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
20-02-2017, 02:00 PM
(This post was last modified: 14-01-2018, 02:28 AM by tropic. Edit Reason: important typo fixing)
(20-02-2017, 12:41 PM)tropic Wrote: TCP_BBR seems faster and more stable than Westwood+ in bottleneck scenarios, but it has three main disadvantages imho: the first is the aggressiveness of its congestion method, the second is the increased latency measurements, and the third is the FQ qdisc 'requirement' helping in the background (probably better if ECN/ACK are enabled at the same time). Westwood+ usually works fine with or without fq_codel/ECN/ACK, and it is more robust to any kind of 'network' sysctl.conf misconfiguration. Anyway, beyond these short considerations, if you are looking for network latency and gentleness control, the choice should be Westwood+. On the other side, if you just want speed at all costs, TCP_BBR sounds pretty good. I added this idea for development for Xan's evaluation. Thank you both!
http://blog.cerowrt.org/post/bbrs_basic_beauty/
http://intronetworks.cs.luc.edu/current/...wtcps.html
Any "hands-on" experiences, or only the given URLs?
Posts: 178
Threads: 6
Joined: Jul 2016
Reputation:
2
Thanks to fnord and the guest for the suggestions. BBR really shows less oscillation at high transfer rates. But Westwood+ has lower latencies and more aggressiveness, just as tropic also points out. Well, I'm testing TCP_BBR in day-to-day use now.
Very grateful!
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
20-02-2017, 07:59 PM
(This post was last modified: 14-01-2018, 09:48 PM by tropic. Edit Reason: edited important typo)
(20-02-2017, 02:00 PM)Guest Wrote: (20-02-2017, 12:41 PM)tropic Wrote: TCP_BBR seems faster and more stable than Westwood+ (...)
Any "hands-on" experiences, or only the given URLs?
Please note that I said "seems", because current development research from Google only claims that TCP_BBR is faster and more stable than CUBIC. However, I don't really care about this 'main achievement', because Westwood+, YeAH and Vegas are faster and more stable than CUBIC too, according to my long experience testing them in the past. IMHO, mainly for online gaming and IRC, Westwood+ still rules for me because I just need the best latency and balanced fairness. Also, you can see how "strong" Westwood+ is:
Code:
* downloading file size = 2500 Mb.
- Plain Westwood maximum peak observed in the first downloading minute: 22.56 Mb/s.
- BBR+FQ+ECN+ACK max. peak observed in the first downloading minute: 17.05 Mb/s.
Edit: online gaming rarely relies on ACKs -- also I bet that tcp_bbr is aimed mainly at Android and servers.
(20-02-2017, 03:38 PM)Xan Wrote: Thanked by fnord and guest suggestions. BBR really shows less oscillation under high transfer rates. But Westwood+ has lower latencies and more aggressiveness, just as tropic also point out. Well, I'm testing TCP_BBR on day-to-day use now.
Very grateful! 
Thank you very much, Xan! I don't have much free time now to help you more!
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
(19-02-2017, 01:34 PM)Guest Wrote: It's nice to see that I'm not alone. I like how BBR works, and with a bit of tuning it can make a connection really slick.
Please, could you explain the "bit of tuning" required? It will probably help with future testing!
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
Can we at least add BBR as an option, please? I currently only see two options with XanMod, which are Westwood and Reno. Google Cloud is now using it. https://cloudplatform.googleblog.com/201...aster.html
Posts: 178
Threads: 6
Joined: Jul 2016
Reputation:
2
Available TCP congestion control algorithms:
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=m
CONFIG_TCP_CONG_WESTWOOD=y
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_NV=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
CONFIG_TCP_CONG_CDG=m
CONFIG_TCP_CONG_BBR=m
CONFIG_DEFAULT_WESTWOOD=y
For BBR:
Code:
echo bbr > /proc/sys/net/ipv4/tcp_congestion_control
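A read-only sketch to confirm what the kernel is actually using after that echo (standard Linux procfs paths; no root needed):

```shell
#!/bin/sh
# Show the active congestion control and everything currently loadable.
current=$(cat /proc/sys/net/ipv4/tcp_congestion_control)
available=$(cat /proc/sys/net/ipv4/tcp_available_congestion_control)
echo "current:   $current"
echo "available: $available"
```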
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
25-08-2017, 06:26 PM
(This post was last modified: 25-08-2017, 06:27 PM by tropic.)
@guest
At least for me, if multiple downloading/uploading tasks are required, Westwood+ is still the best TCP congestion control ever.
No matter what Google says.
@Xan
+1
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
Try increasing:
net.ipv4.tcp_rmem = 2147483647 2147483647 2147483647
net.ipv4.tcp_wmem = 4194304 10000000 2147483647
and redo your test; BBR will blow anything out of the water.
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
12-01-2018, 10:17 PM
(This post was last modified: 15-01-2018, 05:11 AM by tropic.)
(12-01-2018, 08:09 PM)Guest Wrote: Try to increase:
net.ipv4.tcp_rmem = 2147483647 2147483647 2147483647
net.ipv4.tcp_wmem = 4194304 10000000 2147483647
and redo your test BBR will blow anything out of the water.
LOL, up to 2 GB buffers? Not for average users, IMHO. Anyway, with values that high everything should fly regardless of the rest of the sysctl.conf settings. After minimal testing with the above values, results were mostly equal for htcp = bbr = westwood, and some TCP out-of-memory errors were observed. I tested all the available net.ipv4 knobs extensively in the past; at present I prefer not to tweak certain values due to the wide range of machines and configurations out there. There is no single magic configuration for everyone.
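Instead of 2 GB buffers, buffer maxima are usually sized from the bandwidth-delay product. A rough sketch (the 1 Gbit/s rate and 50 ms RTT are example figures, not measurements from this thread):

```shell
#!/bin/sh
# BDP (bytes) = link rate in bytes/s * round-trip time in seconds.
RATE_BITS=1000000000   # assumed link rate: 1 Gbit/s
RTT_MS=50              # assumed round-trip time: 50 ms
BDP=$(( RATE_BITS / 8 * RTT_MS / 1000 ))
echo "BDP: $BDP bytes"   # a sane tcp_rmem/tcp_wmem max is a small multiple of this
```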
Westwood still rules for the average user/machine. However, some people recommend bbr+fq (still not tested by me):
Code:
# to set permanently, add these lines to the sysctl.conf file, then reboot:
net.ipv4.tcp_congestion_control=bbr
net.core.default_qdisc=fq
# the settings above are examples; testing is still in progress
https://forum.xanmod.org/thread-152-post...ml#pid2068
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
Posts: 178
Threads: 6
Joined: Jul 2016
Reputation:
2
Tropic and all,
I need more benchmarks of other combinations like:
queueing/scheduling + TCP congestion algorithms
fq_codel and fq + westwood, bbr, illinois, ...
Thanks
Posts: 178
Threads: 6
Joined: Jul 2016
Reputation:
2
All available net schedulers (qdiscs):
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_FQ=m
CONFIG_NET_SCH_HHF=m
CONFIG_NET_SCH_PIE=m
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
13-01-2018, 02:41 AM
(This post was last modified: 15-01-2018, 05:11 AM by tropic.)
LOL, that's a huge number of combinations!
Alexander, I have very little free time due to major responsibilities; however, after doing all my common tasks online (e.g. email, browsing, uploading and downloading files, updating Ubuntu, gaming, all in one session), I must admit that BBR+FQ seems to be an alternative choice for the recent XanMod PDS kernels. I hope @CybDex will perform some kind of test on a good machine that sheds some light on this point.
Code:
# list the congestion control modules shipped with the kernel
$ ls /lib/modules/`uname -r`/kernel/net/ipv4/
# show the congestion control currently in use
$ cat /proc/sys/net/ipv4/tcp_congestion_control
# show the qdiscs attached to each interface
$ tc qdisc show
Alexander, have a look at this; some TCP tunings and comments about XanMod:
https://www.reddit.com/r/seedboxes/comme...ce_tweaks/
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
Posts: 20
Threads: 2
Joined: Jan 2018
Reputation:
0
14-01-2018, 02:48 AM
(This post was last modified: 14-01-2018, 02:58 AM by neoark.)
CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_FQ=m
What is the difference between fq and fq_codel?
It seems like fq_codel can't be used with BBR on kernels below 4.13.
https://groups.google.com/forum/#!topic/...jL4ropdOV8
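The 4.13 cutoff can be checked mechanically. A small sketch (the `needs_fq` helper is hypothetical; the threshold comes from the BBR quick-start notes linked later in this thread):

```shell
#!/bin/sh
# Decide whether BBR still needs the fq qdisc for a given kernel version.
# From 4.13 on, BBR paces internally and works with any qdisc.
needs_fq() {
    major=$(echo "$1" | cut -d. -f1)
    minor=$(echo "$1" | cut -d. -f2)
    if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 13 ]; }; then
        echo no
    else
        echo yes
    fi
}
needs_fq 4.9     # prints: yes
needs_fq 4.14    # prints: no
needs_fq "$(uname -r | cut -d- -f1)"
```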
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
14-01-2018, 03:22 AM
(This post was last modified: 15-01-2018, 01:48 AM by tropic. Edit Reason: edited as requested by my friends)
I have found some free time and have done a few 'testing' tasks, with these conclusions:
- If you really need the best latency control, you should probably stay with Westwood+fq_codel (still testing).
- If you need sustained speed at all costs on a very good connection, you should probably use BBR+FQ (still testing).
- If your connection is wireless, just try Westwood+fq_codel first to see if it's enough, then BBR+FQ (still testing).
I don't know what level of placebo effect can be experienced while testing TCP congestion control and qdiscs... However, after a little testing, the differences for uploading/downloading were only around 1% in favour of BBR+FQ, and not in all scenarios. IMHO, at least for my machine and the speed available, I still prefer Westwood+fq_codel (still testing). Why? Easy: while I was playing billiards online I experienced two 'micro freezes' with BBR+FQ. I don't care about 1, 2, 3 or whatever percent of improvement if I can't play a simple game like that fluidly. By the way, if you are not sharing your WiFi connection you will probably feel better with BBR+FQ (remember the evil placebo effect), but if you are sharing your WiFi hotspot, or you are on a free WiFi with people eating the bandwidth at your side, then you will probably prefer Westwood+fq_codel, because fq_codel is better known for handling that situation than the FQ qdisc (to be tested by me as soon as possible).
That's all. Google can say whatever it wants, but if you have an average connection, just try Westwood+fq_codel first, then try BBR+FQ (still testing).
(14-01-2018, 02:48 AM)neoark Wrote: CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_FQ=m
What is the difference between fq and fq_codel?
It seems like fq_codel can't be used with BBR on kernels below 4.13.
https://groups.google.com/forum/#!topic/...jL4ropdOV8
In a few words: CoDel limits bufferbloat and minimizes latency, while fq_codel adds fair queuing on top for a better distribution of the bandwidth. In other words, IMHO fq_codel is the best of the classless qdiscs for the average user. I haven't seen you before, I think, so please have a warm welcome to the forum!
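In practice the two are swapped with a single tc command each; a sketch of the comparison (the interface name eth0 is an assumption; these are configuration commands requiring root):

```shell
# fq: per-flow pacing, the scheduler BBR was originally paired with
tc qdisc replace dev eth0 root fq
# fq_codel: fair queuing plus the CoDel AQM that fights bufferbloat
tc qdisc replace dev eth0 root fq_codel
# inspect what is attached now
tc qdisc show dev eth0
```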
Edited as requested by my friends in order to not create unnecessary controversy against BBR tcp congestion control protocol.
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
Posts: 20
Threads: 2
Joined: Jan 2018
Reputation:
0
14-01-2018, 05:07 PM
(This post was last modified: 14-01-2018, 05:16 PM by neoark.)
Thanks! I agree Westwood and Illinois are good. But BBR seems to give consistent bandwidth; I tested it EU -> US. I have also noticed some micro freezes with BBR, not sure what the issue is. I set ECN=2, TSO=off and increased rx_users=30, which seems to help a bit. But it's not suited for all cases. fq_codel + BBR on the 4.14 kernel seems faster than fq. I will have to run some more tests.
Just came across this: for kernels above 4.13 there is no need for fq with bbr.
Source:
https://git.kernel.org/pub/scm/linux/ker...4dfe5e4123
https://github.com/google/bbr/blob/maste...k-start.md
Will have to test it without fq.
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
14-01-2018, 09:45 PM
(This post was last modified: 14-01-2018, 10:02 PM by tropic.)
(14-01-2018, 05:07 PM)neoark Wrote: Thanks! I agree Westwood and Illinois are good. But BBR seems to give consistent bandwidth; I tested it EU -> US. I have also noticed some micro freezes with BBR, not sure what the issue is. I set ECN=2, TSO=off and increased rx_users=30, which seems to help a bit. But it's not suited for all cases. fq_codel + BBR on the 4.14 kernel seems faster than fq. I will have to run some more tests.
Just came across this: for kernels above 4.13 there is no need for fq with bbr.
Source:
https://git.kernel.org/pub/scm/linux/ker...4dfe5e4123
https://github.com/google/bbr/blob/maste...k-start.md
Will have to test it without fq.
I am going mad with the BBR+FQ micro freezes while playing billiards, trying to find the cause.
The rest of the internet experience seems better; however, this "minor issue" is really annoying.
I added your suggested settings but the issue still persists. I will test fq_codel tonight if I have free time.
Thanks for your help and ideas.
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.
Posts: 1,490
Threads: 43
Joined: Dec 2015
Reputation:
4
15-01-2018, 01:50 AM
(This post was last modified: 17-01-2018, 05:28 AM by tropic.)
Well, I finally solved the weird behaviour with the BBR TCP congestion protocol and the micro freezes:
- the key was the erratic performance of the FQ qdisc; everything seems to be fixed now with BBR+fq_codel
"TCP BBR is in Linux v4.9 and beyond. However, we recommend compiling from the latest sources, from the networking development branch. In particular, in the davem/net-next networking development branch (and Linux v4.13-rc1 and beyond) there is new support for https://git.kernel.org/pub/scm/linux/ker...4dfe5e4123. This means that there is no longer a requirement to install the "fq" qdisc to use BBR. Any qdisc will do."
https://github.com/google/bbr/blob/maste...k-start.md
So there shouldn't be any further problems using any qdisc with BBR, as FQ is no longer required.
Some testing should be done with BBR+FQ_CODEL on other machines where applicable.
Thank you @neoark for your help!
Code:
# list of tcp congestion control protocols added to xanmod:
$ ls /lib/modules/`uname -r`/kernel/net/ipv4/
# to set permanently, add these lines to the sysctl.conf file, then reboot;
# they are examples of how to change whatever tcp algorithm or qdisc:
net.ipv4.tcp_congestion_control=bbr
net.core.default_qdisc=fq_codel
# verify after reboot:
$ cat /proc/sys/net/ipv4/tcp_congestion_control
$ tc qdisc show
"(...) the grandest occasion the past or present has seen, or the future can hope to see." -- Cervantes.