Lately I'm noticing pretty slow transfer speeds from fedoraproject.org. Problem shouldn't be on my end, I can download very fast from other servers - my rated connection speed is 300mbit/sec, it tests out at 320mbit/sec on my provider's speedtest, and I can definitely get upwards of 20mbyte/sec downloads from many other sites.
From fedoraproject lately I'm getting 1mbyte/sec or slower, trying to download ISOs from kojipkgs.fedoraproject.org. I tried getting them from the openqa server (openqa01.iad2.fedoraproject.org) as a dodge, but speeds are equally slow from there.
I used to get upwards of 5mbyte/sec at least, sometimes better, until quite recently. Not sure what changed.
We need a lot more info to try and debug this:
1. Can you supply an mtr from your place to the servers, to see if there is something there?
2. Can you supply an exact curl or wget to duplicate, so we aren't misdiagnosing the problem?
Thanks.
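(For reference, an exact curl that reports the average transfer speed could look like the sketch below; the ISO URL is just the one from the wget further down, and -w simply prints the measured size and speed once the download finishes.)

curl -o /dev/null \
  -w 'downloaded %{size_download} bytes at %{speed_download} bytes/sec\n' \
  https://kojipkgs.fedoraproject.org/compose/rawhide/Fedora-Rawhide-20200831.n.0/compose/Server/x86_64/iso/Fedora-Server-dvd-x86_64-Rawhide-20200831.n.0.iso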
[adamw@adam rpm (master)]$ mtr -c50 -r kojipkgs.fedoraproject.org
Start: 2020-09-01T13:49:00-0700
HOST: adam.happyassassin.net       Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- OpenWrt.happyassassin.net  0.0%    50    0.3   0.3   0.2   0.4   0.0
  2.|-- 50.64.64.1                 0.0%    50   63.9  24.5   8.2 110.9  24.9
  3.|-- rc1bb-be127-1.vc.shawcabl  0.0%    50   36.6  25.7   9.7 126.3  28.9
  4.|-- rc3so-be6-1.cg.shawcable.  0.0%    50   23.2  34.0  17.3 168.7  28.4
  5.|-- rc4ec-be13.il.shawcable.n  0.0%    50   57.0  60.5  49.7 186.4  23.9
  6.|-- be4225.ccr41.ord03.atlas.  0.0%    50   59.7  74.6  58.2 197.2  30.2
  7.|-- be2766.ccr42.ord01.atlas.  2.0%    50   61.7  71.4  58.1 215.8  29.4
  8.|-- be2718.ccr22.cle04.atlas.  0.0%    50   68.9  80.0  65.1 218.3  29.0
  9.|-- be2892.ccr42.dca01.atlas.  0.0%    50   92.0  93.8  76.6 188.6  25.4
 10.|-- be3084.ccr41.iad02.atlas.  0.0%    50  177.4  99.6  78.3 225.5  37.1
 11.|-- 38.32.106.90               0.0%    50  121.7 106.7  77.7 234.6  44.7
 12.|-- vrrp2                      0.0%    50   81.6 110.9  79.2 246.0  43.5
 13.|-- proxy-iad02.fedoraproject  0.0%    50  226.8 113.2  79.5 226.8  43.1
[adamw@adam rpm (master)]$
and:
[adamw@adam nightlies]$ wget https://kojipkgs.fedoraproject.org/compose/rawhide/Fedora-Rawhide-20200831.n.0/compose/Server/x86_64/iso/Fedora-Server-dvd-x86_64-Rawhide-20200831.n.0.iso
is currently churning away at 500KB/sec.
here are the cut-off server names:
rc1bb-be127-1.vc.shawcable.net
rc3so-be6-1.cg.shawcable.net
rc4ec-be13.il.shawcable.net
be4225.ccr41.ord03.atlas.cogentco.com
be2766.ccr42.ord01.atlas.cogentco.com
be2718.ccr22.cle04.atlas.cogentco.com
be2892.ccr42.dca01.atlas.cogentco.com
be3084.ccr41.iad02.atlas.cogentco.com
proxy-iad02.fedoraproject.org
Ok thanks. Going from the interface data and iftop, the proxies are each averaging 600 Mbit/second, though it is a bit bursty. Most of the traffic is internal, to and from koji and other systems in the dc.
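As a rough sketch of the kind of check being described (not the exact commands run on the proxies; the interface name eth0 is a placeholder):

# live per-host throughput on the proxy, shown in bytes/sec
iftop -i eth0 -B
# cumulative RX/TX byte counters, i.e. the interface data
ip -s link show eth0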
Using that wget, I am getting 2 MB/s from home
2020-09-01 17:12:24 (2.05 MB/s) - ‘Fedora-Server-dvd-x86_64-Rawhide-20200831.n.0.iso’ saved [2131755008/2131755008]
Doing the same pull from a west coast location, I am getting 900 KB/sec, and it looks like it is the same route as you have, with the same cogentco links. Going to a datacentre in Ohio I am getting 10 MB/s, and from a datacentre in Germany I am getting 4 MB/s.
So I am not sure what is going on. My guess, because the cogentco links seem to be the same, is that it is somewhere in there. However, how or why I don't know. I think I need input/review from others on this.
Is 2MB/sec your home connection's limit? If not, even that seems slow, if we can manage 4MB/sec trans-Atlantic...
It is normal for what I get from various datacenters for long downloads. Yes, I have 100 mbit to the house, but that is a burst on a cable line.. get people in the neighborhood watching movies, doing work from home, etc., and it gets limited. Datacenter connections don't usually have to share at the whims of whoever is bittorrenting the complete Game of Thrones in my area.
ah, OK. Mine is cable too, but for instance wget http://mirror.arizona.edu/fedora/linux/releases/32/Server/x86_64/iso/Fedora-Server-dvd-x86_64-32-1.6.iso gets me sustained 10-20MB/sec.
Is the traceroute the same, or if not, where does it differ in the path to arizona.edu? The connections you are going through are going to be as big a bottleneck as local network links. Looking at the traceroutes from the datacentres with high bandwidth, they do not go over cogent networks... while yours and mine to the DC go through cogent.
Side note: I am contacting the IAD site network admins to see if there is anything known on their side or known with cogent.
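(A quick sketch of one way to answer that, using the same hosts as elsewhere in this ticket; report mode so the two paths can be compared side by side:)

mtr -c50 -r kojipkgs.fedoraproject.org > path-kojipkgs.txt
mtr -c50 -r mirror.arizona.edu > path-arizona.txt
# eyeball where the hop lists start to diverge
diff -y path-kojipkgs.txt path-arizona.txt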
yeah, indeed arizona doesn't go via cogent. It goes via Vancouver and Calgary on Shaw, then six.tr-cps.internet2.edu, then a bunch of ???, then et-8-0-0.1020.rtsw.tucs.net.internet2.edu, 198.71.47.177, Crocubot-100G-0-2-0-1-20.telcom.arizona.edu and another ???.
IT looked, and the cogent link is heavily utilized at near maximum traffic, so we are filling the tubes with kittens (I mean packages). The other network is not being utilized as much, which is why the downloads not going over cogent go faster.
Part of the upcoming projects for Q4 is to put in better metering, so we can see what services on our part are using the 1.5Gbps we have.
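(Not the actual Q4 project, just a rough illustration of the sort of off-the-shelf metering that helps here; the interface name is a placeholder:)

# per-interface totals over time
vnstat -i eth0
# live view broken down by port, to get a feel for which services dominate
iftop -i eth0 -P -B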
ok so now I have to figure out how to HACK THE INTERNET so my kittens don't go via cogent?
brb getting my hacker hoodie
Metadata Update from @smooge:
- Issue assigned to smooge
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: high-trouble, medium-gain, ops
Currently Fedora has 2 BGP-balanced 2 gbit/s links to the Internet from IAD2. One goes through cogent and the other goes through another provider. If your ISP's BGP route decides that cogent is the way you get to Fedora, that is pretty much how you, and it looks like every other home internet user, will go. This means that link is fairly saturated, and transfers are going to be slower because of that.
At the moment, there is nothing we can do about this other than start finding out what services are using the most bandwidth and making sure none of them are overwhelming the others. This will probably roll out as a long term statistics project.
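(For anyone wondering whether their own route lands on the congested link, one way to check is a sketch like the following; mtr's -z flag looks up the AS number for each hop, and AS174 is Cogent's ASN:)

mtr -z -c20 -r kojipkgs.fedoraproject.org
# a run of AS174 hops means the path goes over the saturated cogent link,
# and there is nothing to tune on the client side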
Metadata Update from @smooge:
- Issue close_status updated to: Will Not/Can Not fix
- Issue status updated to: Closed (was: Open)