Howard, W. Lee
L.Howard at stanleyassociates.com
Wed May 11 19:02:56 EDT 2005
> -----Original Message-----
> From: owner-ppml at arin.net [mailto:owner-ppml at arin.net] On
> Behalf Of Tony Hain
> Sent: Wednesday, May 11, 2005 6:23 PM
> To: 'Howard, W. Lee'
> Cc: ppml at arin.net
> Subject: RE: [ppml] IPv6>>32
> Howard, W. Lee wrote:
> > If
> > these clusters are going to be sliding gracefully from network to
> > network with their /48s, then the /32 aggregation goal is blown.
> No it is not. They are not moving the upper 48 bits, they are
> fitting gracefully into the new provider's existing /32
> aggregate. This is not about consumption without reuse, it is
> about creating a mechanism that prevents artificial lock-in
> due to the pain of rebuilding the entire network.
> Readdressing IPv6 hosts is trivial by design. Renumbering
> network components requires much more work in places that
> currently require manual intervention. At the end of the day
> though as long as the subnet topology stays the same we are
> talking about a string replace on configuration files from
> the old provider's /48 to the new one.
Oh, I see. You aren't asserting that people need to move their
/48s around, only that they need to preserve the hierarchy, and
renumbering a /48 network will be trivial.
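To make the stipulation concrete, here is a rough sketch (in Python, with made-up prefixes and config text, not anything from this thread) of the kind of string replace Tony describes; real renumbering would of course also touch DNS, ACLs, and anything else that embeds the prefix:

```python
# Hypothetical sketch of "string replace" renumbering: swap the old
# provider's /48 prefix for the new one across a device configuration,
# leaving the subnet topology (everything after the /48) unchanged.
# Prefixes and config text below are invented examples.
import re

OLD_PREFIX = "2001:db8:1"   # old provider's /48 (example)
NEW_PREFIX = "2001:db8:2"   # new provider's /48 (example)

config = """\
interface ge-0/0/0
 ipv6 address 2001:db8:1:10::1/64
interface ge-0/0/1
 ipv6 address 2001:db8:1:20::1/64
"""

# Require a ':' after the match so only the /48 portion is rewritten.
renumbered = re.sub(re.escape(OLD_PREFIX) + r"(?=:)", NEW_PREFIX, config)
print(renumbered)
```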
If I stipulate trivial renumbering, I will concede the usefulness
of a fixed assignment size for networks. I'm not ready to concede
that everything needs an assignment as if it were a router; how
about if we assign a /64 to "things," and some fixed size for
"routers?" i.e., a single subnet or host gets a /64, and when
a router pops into existence, we assign a /56.
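For scale, a quick sketch of what those assignment sizes mean in /64 subnets (the /56-for-routers figure is just my proposal above, not existing policy):

```python
# Back-of-the-envelope subnet counts for the proposed split:
# a "thing" gets a /64, a "router" gets a /56, versus today's /48.
def subnets_in(prefix_len: int, subnet_len: int = 64) -> int:
    """Number of /subnet_len subnets inside a /prefix_len block."""
    return 2 ** (subnet_len - prefix_len)

print(subnets_in(64))   # a single subnet or host: 1 /64
print(subnets_in(56))   # a router site: 256 /64 subnets
print(subnets_in(48))   # today's default end site: 65,536 /64 subnets
```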
Elsewhere in this thread you said:
> If this is going to be a serious discussion about what is
> reasonable then it needs to include a recognition that routing
> doesn't 'need' more than 45 bits if it is managed to the degree
> that we already understand.
I don't think there's consensus on this point. Neither is there
consensus that routing can handle anywhere near that number of
routes.
You also said, in response to Leo:
> > infrastructure demands. The fraction of address space they've been
> > allocated is not a useful metric to judge sufficiency (IMHO).
> It is not a useful metric to judge sufficiency, but like it or not
> it is the metric used to judge fairness. Comparing IPv6 allocation
> practice to IPv4 practice is not overly useful because one is
> allocating the demarc for a network while the other is allocating for
> a specific number of devices.
The metric used by whom? Is it an accurate metric?
Fraction of total space assigned is only useful if you can measure
fraction of total Internet consumed; then one might judge fairness.
It seems to me that IPv6 has a network part and a host part, and the
assertion is that hosts (interfaces) must be assigned large network
numbers in case they someday evolve into something they currently are
not.
> > The magic of a /64 is that it's a single routable entity. If I assume
> > that layer 3 networks connect layer 2 networks, I still haven't seen
> > any argument here about what a layer 2 network of 2^64 devices would
> > look like. It's not only inconceivable, it's inconceivable to (2^32)
> > power or more, and then we say that we have to assign this enormous
> > set of numbers in groups of 2^16.
> The point is consistency for the end site.
OK, if I stipulate and concede, can we debate the value of that
consistency?
> > Oh, for the record, Geoff Huston's model said we'd consume a /1 to a
> > /4 in 60 years; at the rate of one /1 every 60 years, we run out of
> > IPv6 space in 120 years.
> And also by his model if we simply change the HD from .8 to
> .94 that becomes 1200 years. By all means the right thing to
> do is have a reasonable HD metric.
I'm in favor of increasing the HD ratio required for additional
allocations.
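For anyone wanting the HD-ratio numbers worked out: RFC 3194 defines the ratio so that a pool of 2^bits prefixes is treated as full once 2^(bits x HD) of them are assigned. A sketch using Tony's 45-bit figure (note the 60-vs-1200-year lifetimes also depend on Geoff's growth model, which this does not reproduce):

```python
# HD-ratio arithmetic per RFC 3194: with HD ratio h, a pool of
# 2^bits prefixes is considered exhausted once 2^(bits * h) are
# assigned. The 45-bit figure is from Tony's message; the rest is
# just the definition worked out.
def usable_prefixes(bits: int, hd: float) -> float:
    return 2 ** (bits * hd)

lo = usable_prefixes(45, 0.80)   # 2^36, about 6.9e10 /48s
hi = usable_prefixes(45, 0.94)   # about 2^42.3, about 5.4e12 /48s
print(f"HD 0.80: {lo:.3g} usable /48s")
print(f"HD 0.94: {hi:.3g} usable /48s")
print(f"raising the ratio buys a factor of about {hi / lo:.0f}")
```

Under exponential growth, a ~79x capacity gain does not translate linearly into lifetime, which is why the model's jump is 60 to 1200 years rather than 60 x 79.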
> >  OK, technically it assumes an exponential progression, but a
> > predictable exponent. And I don't think most people see that.
> What most people seem to miss is that if 2^32 == 30, then
> with no other changes 2^45 == 245,760 years. Other than Geoff
> & David's presentation on why the HD metric is wrong, I have
> yet to see a valid argument on why 45 bits is not enough for
> the foreseeable lifetime of any protocol.
Well, people keep arguing that we can't predict the future, and
that smart people will come up with all kinds of incredible uses
for these numbers. We're not going to burn 4 billion every 30
years; we're going to burn them at ever-increasing rates. And based
on the science fiction I read in IPv6 debates, the rate will
increase, and the rate of increase will increase. But I can't
predict those rates, so I'm just trying to find the right balance
between easy, fair, and durable, and I don't agree with you on
where that balance is.
For those trying to follow the math, (2^(45-32))*30 is how
you get 245,760. You chose 45 bits because you assume /48
assignments, but only from the 2001/3 we're currently using?
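Or, spelled out:

```python
# Checking the arithmetic: 2^32 /48s consumed per 30 years scales
# a 45-bit pool (2^45 /48s) to (2^(45-32)) * 30 years of lifetime.
years = (2 ** (45 - 32)) * 30
print(years)  # 245760
```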