20 Gig is nowhere near what most current cloud data centers are using. Most existing infra has at least 100Gbps NICs, and the state of the art right now is 800Gbps. Your 20Gbps enterprise server might be enough for bare-metal AD, but once you add µs-tail-latency network storage and all the other fancy stuff you'll need way more than that. That goes double for HPC, ML and other data-heavy workloads. Existing links can already see multi-terabit aggregate throughput, and it wouldn't be surprising if someone stuck a bunch of HD cameras, streaming, torrenting, etc. at their house and generated traffic 24/7 just because they thought it was a fun thing to do.
For the gateway switch power draw, I can think of an off-the-shelf software switching solution that runs at 75W, and that's for 100Gbps. A 20Gbps ASIC switch would be a lot less power hungry than that. If you're willing to go experimental, here's a theoretical 400Gbps SmartNIC design that runs at 7W; all you need to do is write a basic L3 switching program with NAT on top and it should all work (rough sketch of what that data path looks like below).
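To give a sense of how small that "basic L3 switching program with NAT" really is, here's a rough sketch of the outbound source-NAT half written as a Linux XDP program in C. That's an assumption on my part about the programming model; the actual SmartNIC might want P4 or a vendor SDK instead, and WAN_IP / LAN_NET below are made-up placeholder values, not anything from a real config.

```c
// Minimal sketch of "L3 forwarding with NAT" as a Linux XDP program in C.
// Not the SmartNIC vendor's toolchain, just an illustration of how little
// logic the data path actually needs. WAN_IP and LAN_NET/LAN_MASK are
// placeholder values. A real NAT also needs a per-flow connection table,
// port rewriting, and TCP/UDP checksum fixups.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define WAN_IP   bpf_htonl(0xC6336401) /* 198.51.100.1 - placeholder public IP */
#define LAN_NET  bpf_htonl(0xC0A80000) /* 192.168.0.0/16 */
#define LAN_MASK bpf_htonl(0xFFFF0000)

/* Incremental IP-header checksum update (RFC 1624 style) when one 32-bit
 * field changes from `from` to `to`. Values are used exactly as they sit
 * in the packet (network byte order), which keeps the math byte-order safe. */
static __always_inline void csum_replace4(__u16 *csum, __u32 from, __u32 to)
{
    __u32 sum = (__u16)~*csum;
    sum += (__u16)~(from & 0xFFFF);
    sum += (__u16)~(from >> 16);
    sum += (__u16)(to & 0xFFFF);
    sum += (__u16)(to >> 16);
    sum = (sum & 0xFFFF) + (sum >> 16);   /* fold carries twice */
    sum = (sum & 0xFFFF) + (sum >> 16);
    *csum = ~(__u16)sum;
}

SEC("xdp")
int nat_out(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds checks keep the eBPF verifier happy. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;               /* only IPv4 in this sketch */

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* Only rewrite packets that originated on the LAN side. */
    if ((ip->saddr & LAN_MASK) != LAN_NET)
        return XDP_PASS;

    __u32 old_saddr = ip->saddr;
    ip->saddr = WAN_IP;                /* source-NAT to the public address */
    csum_replace4(&ip->check, old_saddr, WAN_IP);

    /* A real gateway would now pick the next hop and XDP_REDIRECT it out the
     * WAN port; passing it to the kernel stack keeps the sketch short. */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

On a regular Linux box you'd compile it with something like `clang -O2 -g -target bpf -c nat.c -o nat.o` and attach it to the LAN-facing interface with `ip link set dev <lan-if> xdp obj nat.o sec xdp`; on the SmartNIC you'd presumably feed the equivalent program into whatever offload toolchain it ships with.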
So you're justifying 20 gigs at home with full-blown cloud data centers? There's literally no feasible use case for regular customers. Prosumers, maybe, but even they would probably prefer not to pay more than consumer prices for the hardware and power.