Because this wouldn’t be targeted towards a single device/connection. This is for a household of 5+ people streaming 4K, running servers, and keeping cloud-connected (yada yada) IoT devices running simultaneously.
It’s the hobbyist tier. It’s like asking someone “why would you ever need more than one cast iron pan” when they’re into cast iron pan collecting.
Is that household currently in this room?
20 gig is more than a typical enterprise server currently has. No household needs that much bandwidth. How many households even have more than three 4K devices at all, let alone use them all simultaneously to stream 4K content?
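For scale: assuming roughly 25 Mbps per 4K stream (about what the big streaming services recommend; the exact figure is a guess), the napkin math looks like this:

```python
# Back-of-the-envelope: how many simultaneous 4K streams fit in a 20 Gbps link,
# assuming ~25 Mbps per stream (an assumed ballpark for 4K HDR video).
link_mbps = 20_000
stream_mbps = 25
print(link_mbps / stream_mbps)  # -> 800.0 simultaneous streams
```

That’s hundreds of simultaneous streams before the pipe even starts to fill.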
This is aimed at people who pay 500€ for a pan because it has been treated with hand-squeezed mink oil. Stupid enthusiasts with too much money. They will find those buyers, no doubt, but it’s still stupid.
PS: just think about how much power the modem alone will draw.
20 Gig is nowhere near what most current cloud data centers are using. Most existing infra has at least 100 Gbps NICs; state of the art right now is 800 Gbps. Your 20 Gbps enterprise server might be enough for bare-metal AD, but once you add µs-tail-latency network storage and all the other fancy stuff, you’ll need way more than that. Even more so for HPC, ML, and other data-heavy workloads. Existing links can already see multi-terabit aggregate throughput, so it wouldn’t be surprising if someone decided to run a bunch of HD cameras, streaming, torrenting, etc. at their house generating traffic 24/7, just because they thought it was a fun thing to do.
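Just to put 24/7 in perspective, even a fairly tame always-on mix adds up fast (the numbers below are made up for illustration):

```python
# Hypothetical always-on household load; all bitrates are assumptions,
# not measurements.
cameras, camera_mbps = 8, 20      # eight 4K cameras streaming off-site
other_mbps = 500                  # torrents, backups, video streams, etc.
total_mbps = cameras * camera_mbps + other_mbps

print(total_mbps)                           # 660 Mbps sustained
print(total_mbps / 8_000 * 86_400 / 1_000)  # ~7.1 TB moved per day
```

That’s terabytes a day from a setup that isn’t even trying hard.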
For the gateway switch power draw, I can think of an off-the-shelf software switching solution at 75 W, and that’s for 100 Gbps. A 20 Gbps ASIC switch would be a lot less power-hungry than that. If you’re willing to go experimental, here’s a theoretical 400 Gbps SmartNIC design that runs at 7 W; all you need to do is write a basic L3 switching program with NAT and it should all work.
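To be concrete about what “a basic L3 switching program with NAT” means, here’s a minimal sketch of the per-packet logic. It’s in Python purely for readability; on an actual SmartNIC this would be P4, eBPF/XDP, or vendor C code, and the prefix, addresses, and port IDs below are made-up placeholders:

```python
# Sketch of L3 forwarding with source NAT for IPv4. Illustrative only:
# a real implementation also needs connection tracking, L4 port rewriting,
# and L4 checksum updates, all omitted here.
import ipaddress
import struct

LAN_NET = ipaddress.ip_network("192.168.1.0/24")  # assumed LAN prefix
WAN_ADDR = ipaddress.ip_address("203.0.113.7")    # assumed public address
LAN_PORT, WAN_PORT = 0, 1                         # switch egress port ids


def ipv4_checksum(header: bytes) -> int:
    """One's-complement checksum over the IPv4 header."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF


def forward(ip_packet: bytes):
    """Pick an egress port and masquerade LAN sources behind the WAN address."""
    ihl = (ip_packet[0] & 0x0F) * 4
    src = ipaddress.ip_address(ip_packet[12:16])
    dst = ipaddress.ip_address(ip_packet[16:20])

    header = bytearray(ip_packet[:ihl])
    if src in LAN_NET and dst not in LAN_NET:
        header[12:16] = WAN_ADDR.packed        # outbound: rewrite source IP
        egress = WAN_PORT
    else:
        egress = LAN_PORT if dst in LAN_NET else WAN_PORT

    header[10:12] = b"\x00\x00"                # zero, then recompute checksum
    header[10:12] = struct.pack("!H", ipv4_checksum(bytes(header)))
    return egress, bytes(header) + ip_packet[ihl:]
```

The point is just that the fast path is a handful of header compares and rewrites per packet, which is exactly the kind of thing ASIC and SmartNIC pipelines do cheaply.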
So you’re justifying 20 gigs at home with full-blown cloud data centers? There’s literally no feasible use case for regular customers. Prosumers, maybe, but even those would probably prefer not to pay consumer prices for the hardware and power.