There is a recurring phenomenon in networking that shows up every few years with remarkable consistency. Someone discovers that the Internet exists, spends a short amount of time looking at it from a comfortable distance, and then concludes that the entire system would obviously be better if it were redesigned from scratch. The latest version of this ritual is currently circulating under the label “IPv8”, usually wrapped in familiar language like “secure by design”, “clean slate architecture”, and the equally timeless claim that IPv6 somehow “didn’t really solve anything”.
If you have worked in operational networking for more than a few years, none of this feels new. It is the same architectural déjà vu, just with a different protocol number and a slightly more confident tone. The phrase “secure by design” is usually where things start to drift away from engineering and into marketing. In real protocol design, security is not a label. It is the result of explicit threat models, clearly defined trust boundaries, and assumptions that still hold when someone actively tries to break them. It also includes the less glamorous part: behaviour under partial failure, misconfiguration, and hostile environments that were not part of the original design meeting. This is exactly why RFC 7258, which formally treats pervasive monitoring as an attack, and RFC 6973, which documents privacy considerations at the protocol level, exist at all. These documents are not theoretical decoration. They exist because the Internet has repeatedly demonstrated that “more visibility” and “more central control” tend to scale very well — unfortunately for the wrong side of the trust boundary.
Which brings us neatly to the next familiar pattern: IPv6 criticism as a justification for something new. One of the more persistent claims is that IPv6 “did not solve fragmentation properly” or that it simply inherited too many constraints from IPv4. This usually sounds plausible until you remember what IPv4 fragmentation actually was in practice: not a carefully designed mechanism, but a long-running compatibility workaround that became increasingly fragile as the Internet scaled and link MTUs diversified. IPv6 did not “fail” to handle fragmentation. It removed routers from the fragmentation business entirely, which is what happens when you have spent enough time debugging path MTU issues in production networks and decide that routers should stop making creative decisions about packet structure. Fragmentation in IPv6 is an endpoint responsibility: the source either discovers the usable packet size via Path MTU Discovery or fragments locally using the Fragment extension header, and every IPv6 link is required to support an MTU of at least 1280 bytes. That is not an omission. It is a decision based on operational experience, which is usually the part of protocol design that does not fit well into slide decks. Reintroducing scarcity-driven behaviour into a 128-bit address space does not fix anything. It simply reopens a category of problems that IPv6 explicitly removed from the core routing domain because they do not scale well, do not fail gracefully, and tend to produce interesting incidents at exactly the wrong time.
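The endpoint-only model is easy to see in the arithmetic of RFC 8200: a source that must send more than the path MTU allows splits the payload itself, with fragment offsets expressed in 8-octet units and a fixed 40-byte IPv6 header plus an 8-byte Fragment extension header per packet. A toy sketch of that sender-side arithmetic (it only computes how a payload would be split; the function name and interface are illustrative, not from any real stack):

```python
# Toy sketch of IPv6 source-side fragmentation arithmetic (RFC 8200).
# Not a packet builder: it only computes how a payload would be split.

IPV6_HEADER = 40   # fixed IPv6 header size in bytes
FRAG_HEADER = 8    # Fragment extension header size in bytes

def fragment_plan(payload_len: int, path_mtu: int):
    """Return (offset_in_8_octet_units, fragment_length, more_fragments) tuples."""
    if path_mtu < 1280:
        raise ValueError("IPv6 requires a minimum link MTU of 1280 bytes")
    # Room left for fragmentable data per packet, rounded down to a
    # multiple of 8 because fragment offsets count in 8-octet units.
    per_fragment = (path_mtu - IPV6_HEADER - FRAG_HEADER) & ~7
    plan, offset = [], 0
    while offset < payload_len:
        length = min(per_fragment, payload_len - offset)
        more = (offset + length) < payload_len
        plan.append((offset // 8, length, more))
        offset += length
    return plan

# Example: a 3000-byte payload over the minimum 1280-byte path MTU
for off, length, more in fragment_plan(3000, 1280):
    print(off, length, more)
```

Note that the minimum-MTU check is exactly the guarantee that lets endpoints fall back to 1280-byte packets when Path MTU Discovery is blocked; routers never touch the split.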
At this point it is worth restating something that clean-slate proposals tend to rediscover every decade or so. The Internet is not a design exercise. It is not a controlled environment where all participants agree to upgrade on a schedule and validate assumptions in a clean testbed. It is a layered system of routing infrastructure, hardware forwarding pipelines, operating system stacks, middleboxes, and application behaviour that has accumulated over several decades of incremental adaptation. Even IPv6, which is the actual next-generation Internet protocol in production terms, did not arrive through replacement. It arrived through coexistence, dual-stack deployment, translation mechanisms like NAT64 and DNS64, and operational compromises like Happy Eyeballs, which exist purely because the Internet does not behave like RFC diagrams. So when a proposal implicitly assumes coordinated global migration or breaks coexistence assumptions, it stops being an architectural proposal and becomes a thought experiment that has not yet met production traffic.
There is also a persistent misunderstanding about what an IETF draft represents. The IETF is intentionally open, which means it accepts everything from production-ready standards to ideas that are still in the “this seemed reasonable at the time” phase. The existence of a draft is not an indicator of viability. It is an indicator that someone has written something down in a format that other people are allowed to comment on. The path from draft to RFC, and from RFC to actual deployment, is not a formal checkbox process. It is an adversarial filtering process driven by interoperability, implementation reality, and operational survivability. Many proposals never make it past the first contact with independent implementations, which is usually where assumptions quietly collapse. And yes, one occasionally wishes there were a slightly stronger early filtering stage. Not to suppress ideas, but simply to save everyone else from reading architectural poetry that assumes the Internet behaves like a greenfield lab environment. But that is a different discussion.
What remains consistent is that most so-called next-generation Internet protocols do not fail because they are ignored. They fail because they do not survive contact with scale, heterogeneity, and operational reality. Which brings the discussion back to IPv6, the protocol that is often treated as incomplete simply because it is inconveniently already deployed. IPv6 is not missing as a technology. It is widely implemented, hardware-accelerated, and supported across all major operating systems. It routes, it scales, and it already carries a significant portion of production traffic globally. The gap is not technical capability. The gap is deployment discipline, and that is organisational, economic, and sometimes a matter of plain operational inertia. Migration work is still work, and production networks have a well-known preference for stability over architectural enthusiasm.
So when another “next-generation” proposal appears, the only useful question is not whether it sounds modern, or whether it fixes a selectively chosen problem from IPv6. The question is whether it actually improves something that cannot already be addressed within the existing architecture without breaking everything that currently works. Because at this point, the Internet does not suffer from a lack of new protocols. It suffers from a surplus of people who think the existing one just needs to be ignored for a while until the next idea feels more exciting. And IPv8, in its current form, sits very comfortably in that category.