Basically what the title says. Here’s the thing: address exhaustion is a solved problem. NAT already took care of it via RFC 1631. It was initially presented as a temporary fix, but anyone who thinks it’s going anywhere at this point is simply wrong. Something might replace IPv4 as the default at some point, but it’s not going to be IPv6.
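For anyone fuzzy on what RFC 1631-style NAT (really NAPT, as deployed today) actually buys you, here’s a toy sketch of the port-mapping idea in Python; the addresses and the table logic are purely illustrative, not how any particular router implements it:

```python
# Toy model of NAPT: many private hosts share one public IPv4 address
# because the router rewrites source ports and remembers the mapping.
PUBLIC_IP = "203.0.113.7"      # documentation address standing in for the WAN side

nat_table: dict[tuple[str, int], int] = {}   # (private ip, private port) -> public port
next_port = 40000

def translate(private_ip: str, private_port: int) -> tuple[str, int]:
    """Map an outbound (private ip, port) pair to the shared (public ip, port)."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

# Two LAN hosts using the same source port still leave the network
# looking like distinct flows from 203.0.113.7:
print(translate("192.168.1.10", 51515))   # ('203.0.113.7', 40000)
print(translate("192.168.1.11", 51515))   # ('203.0.113.7', 40001)
print(translate("192.168.1.10", 51515))   # ('203.0.113.7', 40000) -- same flow, same mapping
```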
And then there are the downsides of IPv6:
- Not all legacy equipment likes IPv6. Yes, there’s a lot of it out there.
- “Nobody” remembers an IPv6 address. I know my IPv4 address, and I’m sure many others do too. Do you know your IPv6 address, though?
- Everything already supports IPv4.
- For IPv6 to fully replace IPv4, practically everything needs to move over. De facto standards don’t change very easily. There’s a reason why QWERTY keyboards, ASCII character tables, and E-mail are still around, despite alternatives technically being “better”.
- Dealing with dual network stacks in the interim is annoying.
Sure, IPv6 is nice and all. But as an addition rather than as a replacement. I’ve disabled it by default for the past 10 years, as it tends to clutter up my ifconfig overview, and I’ve had no ill effects.
Source: Network engineer.
43% of Google traffic is now IPv6, and it’s steadily growing:
https://www.google.com/intl/en/ipv6/statistics.html
CGNAT is only a temporary band-aid for reaching services that have yet to show up on IPv6, and it’s relatively expensive to operate.
IPv6 might be largely pointless on a LAN, and sure, NAT is fine there, but IPv6 is already running large chunks of the world’s mobile infrastructure. It’s not going anywhere.
Upvote for semi-unpopular opinion.
I think you’re wrong about the shortage being ‘solved’ by NAT. NAT is great for LAN and WAN in the developed world, but there are billions of people in remote developing areas where it’s not much help. It also severely limits the big chunks of address space that can be allocated to businesses, universities, governments, etc. This is not a trivial problem that gets waved away by NAT.
I think it will continue to be a very gradual but relentless rollout of IPv6. Not saying it will be fast. But 30 years from now, if we haven’t destroyed civilization, I suspect IPv4 will be a quaint relic. And IPv6 will never run out of addresses.
There’s a large possibility we’ll run out of IPv6 addresses sooner than we think.
Theoretically, 128 bits should be enough for anything. IPv6 could hand roughly 2^52 addresses to every star in the observable universe: that works out to a /76 per star, smaller than the /64 that’s treated as the minimum subnet size. Yet today, registries like ARIN hand out IPv6 allocations in the /32 to /48 range.
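A quick sanity check of that per-star figure (the star count here is my own rough assumption of about 10^23; estimates vary by a couple of orders of magnitude):

```python
import math

total_ipv6 = 2 ** 128      # total IPv6 address space
stars = 10 ** 23           # assumed number of stars in the observable universe

per_star = total_ipv6 / stars
print(f"addresses per star ~ 2^{math.log2(per_star):.1f}")   # ~2^51.6, i.e. roughly 2^52

# Giving each star ~2^52 addresses means a /76 prefix per star, which has
# fewer host bits than the /64 minimum subnet IPv6 expects.
print(f"prefix per star: /{128 - round(math.log2(per_star))}")   # /76
```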
With IPv4, allocations ranged from /8 to /24, so IPv6 gives us only 24 extra bits of allocation space, which may last longer than IPv4 did. We basically filled up IPv4’s 24 bits of allocations in 30 years, and 281 trillion (2^48) allocations is fairly reachable. There’s no sign of Internet or IP growth slowing down, and Docker and the like keep creating new reasons to allocate addresses (one per container). We’re also still in the early years of interstellar communication. With IPv4 we were able to adopt classless subnetting early enough to delay the problem; IPv6’s slow adoption probably makes a similar shift in subnetting unlikely.
If the current allocation trend continues, can we run out of those 281 trillion allocations in 30 years? Optimistically, counting interstellar networks and continued exponential growth in IP-connected devices: yes. Realistically it’s probably more than 100 years away, maybe beyond our lifetimes, but that still sounds low for a protocol that supposedly has enough addresses to give one to every blade of grass, assuming every visible star has an Earth. In practice we’re handing a /32-to-/48 block to each of those blades of grass, and there aren’t enough allocations for that.
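To put that 281 trillion figure in perspective, here’s the same back-of-envelope in code; the 100-year horizon and the per-year rate are illustrative assumptions of mine, not claims from anyone in this thread:

```python
ipv4_allocations = 2 ** 24     # number of /24s in IPv4, roughly what we burned in ~30 years
ipv6_allocations = 2 ** 48     # number of /48s in IPv6: the "281 trillion" above

print(f"IPv4 /24s: {ipv4_allocations:,}")          # 16,777,216
print(f"IPv6 /48s: {ipv6_allocations:,}")          # 281,474,976,710,656

# To exhaust the /48 pool in 100 years you would need to hand out roughly
# this many allocations per year -- far beyond any current registry rate,
# which is why "more than 100 years away" is the realistic reading.
years = 100
print(f"/48s per year to run out in {years} years: {ipv6_allocations // years:,}")
# ~2.8 trillion per year
```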
This is the worst math that ever mathed. IPv4 is 32 bits of address space; IPv6 is 128. That’s 2^32 vs 2^128, not 2^52, which isn’t even wrong, it’s just weird; hopefully this is some kind of performance-art joke. There are enough IPv6 addresses to give one to every atom on the surface of the Earth. We aren’t running out any time soon. Ninety-six doublings of IPv4’s address space is a number you can’t fathom.
That wasn’t what I said. 2^52 was NOT a count of bits; it’s how many IPs we could assign to every visible star, if it weren’t for subnet limitations. IPv6 isn’t classless the way IPv4 is: a lot of addresses will end up wasted, unused, or unroutable because of the reserved 64-bit interface identifier.
The problem isn’t the number of addresses, it’s the number of allocations. The smallest allocation we make today, out of a 128-bit address, is a /48. Allocation-wise, we effectively have 48 bits to work with, not 128. To run out the way we did with IPv4, we only have to burn through 48 bits’ worth of networks instead of 24. Go read up on how ARIN/RIPE/APNIC allocate addresses; it’s pretty wasteful.
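For the curious, here’s roughly where the bits go under today’s allocation practice, using the 2001:db8::/32 documentation prefix as a stand-in, not real registry data:

```python
import ipaddress

isp_block = ipaddress.ip_network("2001:db8::/32")   # typical RIR allocation to an ISP

sites_per_isp    = 2 ** (48 - 32)    # /48 end-site assignments inside a /32
subnets_per_site = 2 ** (64 - 48)    # /64 subnets inside each /48
hosts_per_subnet = 2 ** (128 - 64)   # interface IDs inside each /64

print(f"{isp_block}: {sites_per_isp:,} sites, {subnets_per_site:,} subnets per site, "
      f"{hosts_per_subnet:,} addresses per subnet")

# Only the first 48 bits distinguish allocations; the other 80 bits are
# spent inside a single site, which is the point about allocation waste.
```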
This whole debate is so tired. Just use IPv6, it’s 2024 and it’s so fucking easy these days.
I’ve actually found IPv6 simpler to set up and manage than I thought it would be. I run one or two internet-facing services from my home network, which I can’t do over IPv4 because my ISP is fully behind CGNAT. I even set up a static IPv6 address on my server, so I can point my domain name at it and anything I need just resolves through the domain instead of me having to remember the address.
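For anyone wanting to try the same thing, this is roughly what it looks like at the socket level; the port and the example.com lookup are placeholders, not my actual setup:

```python
import socket

# Server side: bind to the box's IPv6 addresses ("::" = all of them).
# A static address plus an AAAA record is all "point my domain at it" means.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("::", 8080))
srv.listen()
print("listening on [::]:8080")

# Client side: resolving the domain is just an AAAA lookup.
# Swap in a hostname that actually has an AAAA record.
for *_, sockaddr in socket.getaddrinfo("example.com", 8080, family=socket.AF_INET6):
    print("AAAA ->", sockaddr[0])
```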
Granted, I have very simple requirements, so it does seem pretty easy, except:
- there are still too many devices that don’t support it
- too many ISPs don’t support it, including mine
So switching to IPv6 means running dual stack and setting up a tunnel, and I’d probably need to relearn firewalls. I’m not sure any of those is very difficult, but it adds up, especially since there’s no clear win here.
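To be fair, the application side of “dual stack” can be a single socket; this sketch doesn’t touch the tunnel or firewall work, it just shows one listener covering both address families (the port number is arbitrary):

```python
import socket

s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# 0 = also accept IPv4 clients (they show up as ::ffff:a.b.c.d mapped addresses);
# 1 would make the socket IPv6-only and force a second IPv4 listener.
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.bind(("::", 9090))
s.listen()
print("one socket serving both IPv4 and IPv6 on port 9090")
```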
If Matter-Thread ever gets off the ground, that would help: most of my newer IPv4-only devices are home automation, so switching to an IPv6-based protocol should finally make that happen.