> 2.5 Gbit/s PoE to upstream switch
Can anybody explain to me why these supposedly premier networking devices are lacking so much in bandwidth? I get it that mmWave is really only ever realistically going to hit 2.5G over the air, but is there any reason why they're not willing to provide at least 10G copper, or an actual SFP port? Hell, even Macs support 10G these days. I never understood this. Do they mean 2 Gbps downlink per client, or per device in total? If it's the former, 2.5G wired seems like a major bottleneck to any serious consumption.
If a single client at 2 Gbps is all the promise of 5G amounted to, well, it would be disappointing to say the least.
Portability and heat. You can get a small USB 2.5G adapter that produces negligible heat, but a Thunderbolt 10G adapter is large and produces a substantial amount of heat.
I use 10G at home, but the adapter I throw into my laptop bag is a tiny 2.5G adapter.
The better reason to put a 10G transceiver in this would be that some (cheap, honestly garbage) SFP+ transceivers can’t negotiate anything between 1G and 10G. But I’ve only seen that on bargain-bin hardware so I don’t know that they should be designing products around it.
For 5 Gbps and higher, you'll need more PCIe lanes - and SOHO motherboards are usually already pretty tight on PCIe lanes.
A typical 10GbE NIC wants 4x PCIe 3.0 lanes (bandwidth-wise x2 would do, but common cards are keyed x4).
10Gb interfaces also tend to run quite hot and be a bit power hungry.
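A back-of-envelope sketch of the lane math above (the per-lane rates and encoding efficiencies are the commonly quoted figures; treat the exact numbers as approximate):

```python
import math

# Per-lane raw rate (GT/s) and encoding efficiency by PCIe generation.
# PCIe 2.0 uses 8b/10b encoding; 3.0+ use 128b/130b.
GENERATIONS = {
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def usable_gbps_per_lane(gen: str) -> float:
    rate, eff = GENERATIONS[gen]
    return rate * eff

def lanes_needed(link_gbps: float, gen: str) -> int:
    # Minimum lanes whose combined usable bandwidth covers the link rate.
    return math.ceil(link_gbps / usable_gbps_per_lane(gen))

for gen in GENERATIONS:
    print(f"PCIe {gen}: {usable_gbps_per_lane(gen):.2f} Gb/s/lane, "
          f"10GbE needs x{lanes_needed(10, gen)}")
```

So 10GbE already fits in x2 on PCIe 3.0 and a single 4.0 lane, which is part of why the "needs x4" rule of thumb dates the NIC designs more than the standard.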
This is a device that needs to be in a location with good 5G reception, so it makes sense to be PoE powered so you can put it near a window or in the location that gets the best reception, and only run a long ethernet cable. And, although I don't like it too much, 2.5G or 5G NBASE-T is the nearest thing that covers 5G speeds.
The 2 Gbps downlink figure is the max for the whole 5G connection, not per client, so 2.5Gb Ethernet is enough for that.
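A quick sanity check that 2.5GBASE-T really covers a 2 Gbps downlink once Ethernet framing overhead is subtracted (assuming a standard 1500-byte MTU and untagged frames):

```python
# Per-frame overhead: preamble+SFD (8 B) + MAC header+FCS (18 B)
# + inter-frame gap (12 B equivalent).
MTU = 1500
OVERHEAD = 8 + 18 + 12

efficiency = MTU / (MTU + OVERHEAD)   # ~0.975 payload efficiency
usable = 2.5 * efficiency             # Gb/s of payload on 2.5GBASE-T

print(f"~{usable:.2f} Gb/s usable payload")  # comfortably above 2 Gb/s
```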
(But PCIe 3.0 of course is from 2010 and isn't too relevant today - 4.0, 5.0, 6.0 and 7.0 have 16/32/64/128 Gbps per lane respectively)
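The per-lane figures quoted above follow from each generation doubling the previous one's raw rate (a rough sketch; 6.0/7.0 switch to PAM4 signaling but keep the doubling):

```python
# Nominal raw per-lane rates, GT/s: PCIe 3.0 starts at 8 and doubles each gen.
rates = {f"{3 + i}.0": 8 * 2 ** i for i in range(5)}  # 3.0 .. 7.0

for gen, gt in rates.items():
    print(f"PCIe {gen}: {gt} GT/s per lane")
```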
I knew it ran hot before I deployed it, but I wasn't aware that you have to wait for it to cool down before unplugging it, or you get burnt.
PCIe 3.0 is irrelevant today when it comes to devices you'd want on 10G. I'm pretty sure the real reason is that 2.5G can comfortably run on the cable you used for 1G[1], while 10G gets silly hot or requires a transceiver and a user who understands a hundred 2-3 letter acronyms.
Combine that with ISP speeds lagging behind, and 2.5G, while it feels odd to some, makes total sense in the consumer market.
[1]: at short distances; I had to replace one run with shielded cable to get 2.5G, but it carried PoE, so that might have contributed to the noise.
There's of course fiber too...