[arin-ppml] IPv4 is depleted today - unrealistic statements about IPv6 inevitability

William Herrin bill at herrin.us
Wed Sep 3 12:15:01 EDT 2008

On Wed, Sep 3, 2008 at 8:00 AM, Eliot Lear <lear at cisco.com> wrote:
> Let us agree that there is no "one size fits all" approach, and
> then we can apply some assumptions that argue in favor of geographically
> assigned addresses (which is really where your proposal leads).  There
> is a reasonable argument that such addressing is scalable from a routing
> system standpoint, if you can get to that state.  To the best of my
> knowledge, nobody has gotten to that state.


We explored the geographic approaches very thoroughly on RRG. The
bottom line is that while origin-only ASes can get some mild benefit
from better geographic aggregation, transit ASes get bupkis. There's
no way to drop the long prefixes from a transit-AS RIB without
damaging its reliability, even when they're adjacent and apparently
aggregable.

Further geographic aggregation beyond the RIRs is a blind alley.

On Wed, Sep 3, 2008 at 10:11 AM, Paul Vixie <vixie at isc.org> wrote:
> does parallelism at the node level get us to the point where the speed of
> light isn't enough for route propagation among the number of routes and
> nodes we can have, and the whole thing gets to what reciprocating engine
> people call "valve float" where convergence never occurs?


I believe we've passed that point already. The table as a whole
doesn't converge, doesn't reach a state where everybody agrees where
each route should go. But the individual routes in the table do
converge.

The issue isn't whether the table as a whole converges; it's whether
we reach a point where we can't build a cost-effective machine that
can recover in a timely fashion from a nearby link failure that churns
a large number of routes. That's where parallelism gains us at least
an order of magnitude just by moving today's COTS server technology
into the routers.

Cost-effective. Timely. These are soft limits. There's no definable
point at which the system simply collapses; path outages just slowly
get a hair longer and routers slowly get more expensive. Ever so
slowly leading us up the garden path.

> note that if RRG has dissenting views, then someone who disagrees with tony
> li's views from denver, could ask for a speaking slot in los angeles.  the
> community ought to be open to well reasoned arguments from many perspectives.

I'd take you up on that but I plan to go to Minneapolis this year and
I can only do a limited amount of globe trotting.

On Wed, Sep 3, 2008 at 10:17 AM, Howard, W. Lee
<Lee.Howard at stanleyassociates.com> wrote:
>> [good work is being done on routing scalability] They don't
>> buy growth forever, but they buy a lot more than Tony thinks.
> Great!  Can I (if I were a major ISP) buy 10 routers using this
> architecture in time to design, test, and deploy before the
> inflation of the routing table presumed in this thread?


Given that we're two full product cycles away from having 2M routes in
the table even at the worst case, the shipping equipment already
handles 1M routes, and I don't think we need to handle more than 8M
IPv4 routes even at the endgame, I'm going to say yes: hardware will
be available on time. It will tend to disrupt IPv6 deployment,
however, because the time frame is short and IPv6 routes consume the
same kind of resources as IPv4 routes.

>> As long as the /24 lower boundary on
>> routability holds, IPv4 will stay on a logarithmic growth
>> curve that flattens out somewhere around 7M or 8M routes.
> Is that a fair assumption?  The argument I thought I was
> reading said that aggressive deaggregation was easier and
> cheaper than deploying IPv6.

Ubiquitous change is expensive. Making prefixes longer than /24
globally routable requires a ubiquitous change. Hence it'll happen
slowly if at all.

Adding /24 routes requires no change. It's the path of least resistance.
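A back-of-envelope calculation shows why the curve flattens where it does. The reserved-space count and aggregation ratio below are my own rough assumptions, not measured data:

```python
# Back-of-envelope sketch (rough assumptions, not measured data): if /24
# stays the longest globally routable prefix, the table is capped by the
# number of /24s in routable unicast space, shrunk by whatever
# aggregation survives.
RESERVED_SLASH8S = 35          # rough: 0/8, 10/8, 127/8, 224/4, 240/4, etc.
routable_slash8s = 256 - RESERVED_SLASH8S
max_slash24s = routable_slash8s * 2**16    # 2^(24-8) /24s per /8
print(max_slash24s)            # 14,483,456 fully disaggregated
print(max_slash24s // 2)       # ~7.2M at a 2:1 aggregation ratio
```

With those assumptions, even total disaggregation tops out under 15M routes, and a modest 2:1 aggregation ratio lands in the 7M-8M neighborhood.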

>> IPv6 migration has a cost
>> function and so does IPv4 growth. Where do the cost lines
>> cross? When does it become less expensive to deploy IPv6 and
>> exert the push that brings
>> IPv4 to a close?
>> The plain truth is that we don't have enough data to do
>> better than guess at the answer to that question.
> I agree with your question.  What data would we need?  Is it
> possible to get such data, so network operators will be able
> to make well-informed design and purchasing decisions?

We'll need to get to 1% to 2% usable and used deployment of IPv6
before we'll be able to get a realistic idea of its cost curve. The
IPv6-first, fall-back-to-IPv4 design error in IPv6 prevents an
accurate read until there's enough deployment for it to show its
impact. As I recall, current deployment is around two hundredths or
two thousandths of a percent.

For IPv4, we'd have to make some assumptions which won't be well
grounded until after free pool exhaustion. How much does an IP address
have to cost before it's worthwhile to sell it? Is it worthwhile to
sell addresses at $1 each? I doubt it. $10? Maybe. We won't know what
an IPv4 address is worth until a market tells us and it won't tell us
until the free pool is gone.
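The sell/don't-sell decision is a break-even between the market price and the cost of renumbering out of the block. As a toy model (the dollar figures are illustrative assumptions, not data from the thread):

```python
# Toy break-even model; all numbers are illustrative assumptions.
def worthwhile_to_sell(price_per_addr, renumbering_cost, block_size):
    """Selling pays off only when the proceeds beat the one-time
    cost of renumbering out of the block."""
    return price_per_addr * block_size > renumbering_cost

# Hypothetical: a /24 (256 addresses) with a $2,000 renumbering project.
print(worthwhile_to_sell(1, 2000, 256))    # False: $256 < $2,000
print(worthwhile_to_sell(10, 2000, 256))   # True: $2,560 > $2,000
```

The renumbering cost is the unknown, which is why we're stuck guessing until a market sets the price side of the inequality.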

>  I saw
> in another post you believe it could go from 8:1 to 2:1, which
> gain is largely removed by the address space overhead.  I still
> think deaggregation is a stupid way to do TE, and that BGP needs
> another knob for that purpose, but I understand the difficulty
> in getting that knob adopted

How do you do TE with a BGP knob? Attach a maximum distance so the
more-specific doesn't propagate far? Propagate only the aggregate but
with a stochastic function which distant routers are expected to use
when they have more than one path available? I'm sure if someone
actually comes up with a way to implement a BGP knob for the kind of
TE we do with disaggregation, it'll find itself on the implementation
fast track.
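The stochastic variant might look like this in miniature. This is a hypothetical sketch of the idea, not any real BGP feature; the names and weights are invented:

```python
import random

# Hypothetical sketch of the "stochastic function" knob described above:
# an aggregate carries per-path weights, and a distant router splits
# traffic across its available paths in proportion to those weights.
def pick_next_hop(paths, rng):
    """paths: list of (next_hop, weight) available for one aggregate."""
    hops = [h for h, _ in paths]
    weights = [w for _, w in paths]
    return rng.choices(hops, weights=weights, k=1)[0]

rng = random.Random(42)        # fixed seed so the demo is repeatable
paths = [("upstream-A", 70), ("upstream-B", 30)]
counts = {"upstream-A": 0, "upstream-B": 0}
for _ in range(10_000):
    counts[pick_next_hop(paths, rng)] += 1
# counts approximate the advertised 70/30 split
```

The hard part isn't the math; it's getting every distant router to honor the weights instead of its own local policy, which is exactly what disaggregation sidesteps today.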

Between TE and the equivalent of "/24 for multihoming" you're not
going to get better than 2:1. That IPv6 is currently at 1.2:1 only
reflects its lack of deployment.

> (unless somebody important decides
> to suppress more-specifics where an aggregate is announced and
> otherwise override deaggregation TE).

Can't do it in a transit AS. More-specifics are not necessarily TE;
they're also downstream multihoming. Dropping them lowers your
standard of reliability. If you're Cogent, maybe that's not a problem
but if you're Verizon it sinks your ship.

> You know, if all that happens with IPv6 is that it generates
> enough innovation in IPv4 to keep the Internet largely intact
> and stable, I'd be happy.
> I understand your points, and you're largely right, but where
> we have insufficient data it still looks to me that
> when comparing cost and risk, sticking with
> IPv4 is higher on both in the long run.

Call me a pessimist but I foresee a future in which folks choose the
immediate path of least resistance 99% of the time. The other 1% isn't
enough to shift the IPv6 cost line under the IPv4 cost line, and
without a lot of help it won't get there.

Bill Herrin

William D. Herrin ................ herrin at dirtside.com bill at herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
