Basically what the title says. Here’s the thing: address exhaustion is a solved problem. NAT already took care of this via RFC 1631. It was initially presented as a temporary fix, but anyone who thinks it’s going anywhere at this point is simply wrong. Something might replace IPv4 as the default at some point, but it’s not going to be IPv6.
And then there are the downsides of IPv6:
- Not all legacy equipment likes IPv6. Yes, there’s a lot of it out there.
- “Nobody” remembers an IPv6 address. I know my IPv4 address, and I’m sure many others do too. Do you know your IPv6 address, though?
- Everything already supports IPv4.
- For IPv6 to fully replace IPv4, practically everything needs to move over. De facto standards don’t change very easily. There’s a reason why QWERTY keyboards, ASCII character tables, and E-mail are still around, despite alternatives technically being “better”.
- Dealing with dual network stacks in the interim is annoying.
Sure, IPv6 is nice and all, but as an addition rather than a replacement. I’ve disabled it by default for the past 10 years, as it tends to clutter up my ifconfig overview, and I’ve had no ill effects.
Source: Network engineer.
This is the worst math that ever mathed. IPv4 is 32 bits of address space; IPv6 is 128. That’s 2^32 vs 2^128. Not 2^52, which isn’t even wrong, it’s just weird; hopefully it’s some kind of performance joke. There are enough addresses in IPv6 to address every known atom on Earth. We aren’t running out anytime soon. 96 doublings of IPv4’s address space is a number you can’t fathom.
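If you want to sanity-check the raw sizes yourself, here’s a quick back-of-the-envelope sketch (nothing fancy, just the exponents):

```python
# Back-of-the-envelope comparison of the two address spaces.
ipv4_addresses = 2 ** 32     # 4,294,967,296
ipv6_addresses = 2 ** 128    # ~3.4 * 10**38

print(f"IPv4: {ipv4_addresses:,}")
print(f"IPv6: {ipv6_addresses:,}")
# The ratio is 2**96, i.e. 96 doublings of the IPv4 space.
print(f"Ratio: {ipv6_addresses // ipv4_addresses:,}")
```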
That wasn’t what I said. 2^56 was NOT a reference to bits, but to how many IPs we could assign to every visible star, if it weren’t for subnet limitations. IPv6 isn’t classless like IPv4 is. There will be a lot of wasted/unused/unroutable addresses due to the reserved 64 bits.
The problem isn’t the number of addresses, but the number of allocations. Our smallest allocation today, out of a 128-bit address, is only 48 bits. Allocation-wise, we effectively have 48 bits of allocations, not 128. To run out of IPv6, we only need to assign 48 bits’ worth of networks, rather than the 24 bits’ worth it took for IPv4. Go read up on how ARIN/RIPE/APNIC allocate IPs. It’s pretty wasteful.
I’m fully aware of how the RIRs allocate IPv6. The smallest assignment is a /64, and a /48 holds 65,536 of them. There are 2^32 /32s available, and a /20 is the minimum allocatable now. These aren’t IPv4 /8s. Look at it from the /56 level: there are roughly 10^16 /56 networks, about 17 million times more network ranges than there are IPv4 addresses.
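A rough sketch of that /56 arithmetic, in case anyone wants to check it (just powers of two, nothing region-specific):

```python
# How many /56 networks the full 128-bit space contains, and how that
# compares to the total number of IPv4 addresses.
slash56_networks = 2 ** 56   # 72,057,594,037,927,936 (on the order of 10**16)
ipv4_addresses = 2 ** 32     # 4,294,967,296

print(f"/56 networks: {slash56_networks:,}")
# 2**56 / 2**32 = 2**24, roughly 16.8 million
print(f"Ratio to IPv4 addresses: {slash56_networks // ipv4_addresses:,}")
```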
/48s are basically PoP-level allocations; few end users will be getting them. In fact, Comcast, which used to give me /48s, is down to a /60 now.
I’ll repeat: we aren’t running out any time soon, even with the default allocation sizes inside the single /3 that currently exists for IPv6.
Sorry, but your reply suggests otherwise.
The RIRs (currently) never allocate a /64 or a /56; a /48 is their smallest allocation. For example, of the ~800,000 /32s ARIN has, only ~47k are “fragmented” (smaller than /32) and fewer than 4,000 are /48s. If /32s were the average, we’d be fine, but in our infinite wisdom we assign larger subnets (like Comcast’s 2601::/20 and 2603:2000::/20).
Taking into account the RIR allocations noted above, the closer equivalent to the /8 is the 1.048M /20s available. Yes, that’s more than the 256 possible 8-bit class A blocks, but does one million really sound like the scale you were talking about, i.e. “enough addresses in IPv6 to address every known atom on Earth”?
The situation for /48s is better, but still not as significant as one would think. Take Cloudflare as an extreme example: they have 6,639 IPv4 /24 blocks, but 670,550 IPv6 /48 blocks. They serve the same role in theory, but we’ve grown from needing about 13 bits of network numbering in IPv4 to about 20 bits in IPv6: roughly 7 extra bits of usage from availability alone.
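Working those Cloudflare numbers through (a rough sketch; the block counts are just the figures quoted above):

```python
import math

# Block counts quoted above for Cloudflare.
ipv4_slash24_blocks = 6_639
ipv6_slash48_blocks = 670_550

# Bits of network numbering needed to enumerate that many blocks.
ipv4_bits = math.ceil(math.log2(ipv4_slash24_blocks))   # 13
ipv6_bits = math.ceil(math.log2(ipv6_slash48_blocks))   # 20

print(f"IPv4: ~{ipv4_bits} bits of network numbering")
print(f"IPv6: ~{ipv6_bits} bits of network numbering")
print(f"Extra bits consumed just from availability: {ipv6_bits - ipv4_bits}")
```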
That sort of increase in network count is likely, especially in high-density data centers where a single server is likely to have multiple IPv6 networks assigned to it. What do you think the assignments will look like as we expand to extraterrestrial objects like satellites, moons, planets, and other spacecraft?
Soon vs. never. The OP I replied to said “never”, and your post implied the same: that these numbers are far too big for humans to imagine or ever reach. The IPv6 address space is large enough for that, yes, but our allocations still aren’t. The number of bits we’re actually allocating (which is the metric that determines running out) is significantly smaller than most people think. In the post above you’re suggesting 56-64 bits, but the reality is currently 20-32 bits, i.e. 1M-4B allocations.
If everyone keeps treating IPv6 as infinite, the current allocation sizes would take longer than IPv4 did to run out, but it isn’t really an unfathomable number like the number of atoms on Earth. 281T /48s works out more sanely, and is likely enough for our planet, but the RIRs seem to avoid allocating subnets that small.
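For a sense of scale, here’s a rough sketch of the raw prefix counts at the sizes discussed here, counted across the whole 128-bit space and ignoring reserved ranges like 2000::/3 (so these are upper bounds, not what’s actually allocatable):

```python
# Total number of prefixes of a given length in the full 128-bit space.
# Ignores reserved ranges (only 2000::/3 is global unicast today), so these
# are upper bounds rather than practically allocatable counts.
def prefix_count(prefix_len: int) -> int:
    return 2 ** prefix_len

for plen in (20, 32, 48):
    print(f"/{plen}: {prefix_count(plen):,} possible prefixes")

# /20:             1,048,576   (~1M, the "/8 equivalent" above)
# /32:         4,294,967,296   (~4.3B)
# /48:   281,474,976,710,656   (~281T)
```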
IPv4-style policy shifts could happen: requirements for address blocks rise, allocation sizes shrink, older holders keep their /20 blocks (instead of 8-bit class A blocks), and newer organizations are limited to /48 blocks or smaller with proper justification. The longer we keep giving away /20s and /32s like candy, the more likely we are to see the allocations run out sooner (especially compared to never). My initial message was trying to say that it depends on how fast we grow and how aggressively we pursue network growth goals.
I’m at work; I’m not going to write a thesis on IP allocation.
Correct, it’s all noted here: https://www.iana.org/numbers/allocations/arin/asn/
If you’re going to conflate “2^128 is larger than the number of atoms on Earth” with a prefix-assignment scheme, I’m just going to assume this is a bad-faith argument.
Have a good one; I’m not wasting more time on this. The best projections for “exhausting” our IPv6 allocations put it around 10 million years from now. I think by then we can change the default CIDR allocations.
https://samsclass.info/ipv6/exhaustion-2016.htm
It’s old, sure, but not worth arguing further.