[ppml] IPv6>>32
Tony Hain
alh-ietf at tndh.net
Mon May 9 16:01:34 EDT 2005
David Conrad wrote:
> Michael,
>
> On May 9, 2005, at 9:39 AM, Michael.Dillon at radianz.com wrote:
> >> The net result is that we're poised to burn through a /1 to a /4
> >> of the IPv6 address space in the next 60 years based on our best
> >> current guesses. This makes me extremely nervous.
> > I'm sorry but I cannot understand this sentiment at all.
>
> For me, the sentiment derives from the discomfort of knowingly
> deploying something that is (arguably) broken. I suspect if you went
> back in time and asked Vint or Bob Kahn or any of the other original
> net geeks if they thought IPv4 would ever really be at risk of
> running out, they'd laugh at you.
Burning through a /4 in 50 years is consistent with original assumptions, as
is the point that we have 3/4 of the space set aside for different
allocation approaches if the first proved wrong. It is not that people
didn't learn anything from the first time around, it is just that some want
to be more conservative than others.
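The fractions involved are easy to sanity-check; a small illustrative sketch:

```python
# Illustration: what fraction of the 128-bit IPv6 space a given prefix
# length represents -- a /1 is half the space, a /4 is one sixteenth.
from fractions import Fraction

def space_fraction(prefix_len: int) -> Fraction:
    """Fraction of the total IPv6 space covered by one /prefix_len block."""
    return Fraction(1, 2 ** prefix_len)

for p in (1, 3, 4):
    print(f"a /{p} is {space_fraction(p)} of the space")
```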
>
> > ARIN should completely avoid this type of policymaking. It is
> > not the job of ARIN or any RIR to drive today's policy based upon
> > the hypothetical needs of people 60 years from now.
>
> Hmm. I would've thought this would be pretty close to the actual
> definition of "stewardship".
Yes, stewardship is about managing the resource so that it remains available
in the future, but it is also about allowing current use of the resource in
ways that don't artificially constrain innovation to past practice. We have
to get past the brain-dead concept that an ISP will micro-manage every
connected appliance on the customer network.
>
> > And our job is not to change IETF designs.
>
> No, market and operational realities change IETF designs as they also
> change RIR policies.
There is a place for RIR policy to feed back into the design process, and
that has already happened with the removal of the TLA/NLA designations. That
said, there does not appear to be a consistent mechanism for design goals to
be propagated across the RIRs. The IETF ends up putting a stake in the
ground, and the RIRs complain about moving the stake without appreciating
why a specific set of trade-offs was agreed on.
>
> > Sorry, but I am not going to run a DHCP server on my mobile
> > phone, on my fridge, on my TV or my stereo or my home lighting
> > system.
>
> Well, you might on your phone if it is the gateway for your personal
> area network, connecting all the biosensors and other gadgets
> attached to you. You probably wouldn't run a _server_ on end devices
> like your TV, however I suspect you might on your residential
> gateway(s).
It will undoubtedly take most of a generation to change the mindset, but
DHCP is not a requirement for operating a network. Many ISP and enterprise
operators have come to rely on that tool as part of their access control
infrastructure, but that does not turn it into a required protocol. The only
things that are required are that a node have an address in the range of the
local subnet and that there be a router which can get bits between that
subnet and others.
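As an illustration of how little a host actually needs, here is a rough
sketch of the stateless autoconfiguration approach (RFC 2462-era): the node
derives its own interface identifier from its MAC address and appends it to
the router-advertised /64 prefix. The `modified_eui64` and `slaac_address`
helper names are made up for this note, not anything an operator deploys:

```python
# Sketch of stateless address autoconfiguration: no DHCP server; the
# node builds its own address from the advertised /64 prefix plus a
# modified EUI-64 interface identifier.  Helper names are hypothetical.

def modified_eui64(mac: str) -> bytes:
    """Modified EUI-64 interface ID from a 48-bit MAC address:
    flip the universal/local bit and insert ff:fe in the middle."""
    o = bytes(int(part, 16) for part in mac.split(":"))
    return bytes([o[0] ^ 0x02]) + o[1:3] + b"\xff\xfe" + o[3:6]

def slaac_address(prefix: str, mac: str) -> str:
    """Join a /64 prefix (first four hex groups) with the interface ID."""
    iid = modified_eui64(mac)
    groups = [f"{(iid[i] << 8) | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix + ":" + ":".join(groups)

print(slaac_address("2001:db8:0:1", "00:0c:29:0c:47:d5"))
# -> 2001:db8:0:1:20c:29ff:fe0c:47d5
```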
>
> > Have you ever heard of something called "working code".
> ...
> > Why should the IETF listen to an idea that has no running code to
> > back it up?
>
> While I might argue the IETF long ago gave up on running code, I
Well, some would argue that past IESGs chose to ignore the running code, and
the practice of letting the market decide, in favor of dictating how
networks should be run, but I digress...
> think the issue here is one of perception. Some might argue that due
> to the fact there is very little actual operational experience with
> IPv6 and, in particular, essentially no operational experience with
> scaling IPv6 anywhere near what it is expected to be able to do, that
> the "working code" of address allocation for IPv6 has not yet been
> defined. What I might suggest we have is an evolving working group
> draft that we're just now getting to actually implementing (and have
> already found some warts)...
I don't hear anyone arguing that we need to keep the current H-D ratio
assumptions. In particular, it is RIR policy to use that measure, so it
could just as easily be RIR policy to use another value or approach. The
argument that this only gets back 3 bits ignores the impact: your 60-year
projection would become 480 years.
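To make the H-D ratio arithmetic concrete, here is a small sketch (RFC 3194
defines the ratio as log(utilized)/log(total addressable); the /20 and the
0.8 value below are illustrative, not policy):

```python
# Sketch of the H-D ratio math (RFC 3194): an allocation of /prefix_len,
# carved into /48 end-site units, is "full" once utilization reaches
# max_units ** hd.  Values here are illustrative only.
import math

def hd_threshold(prefix_len: int, unit_len: int = 48, hd: float = 0.8) -> float:
    """Number of /unit_len assignments at which a /prefix_len allocation
    meets an H-D ratio of hd."""
    max_units = 2 ** (unit_len - prefix_len)
    return max_units ** hd

# A /20 carved into /48s: the threshold is 2**22.4 of 2**28 units, ~2%.
t = hd_threshold(20)
print(f"{t:.3e} of {2**28} units ({100 * t / 2**28:.1f}% utilization)")

# The "3 bits" point: recovering 3 bits of efficiency is a factor of
# 2**3 = 8 in lifetime, which is how 60 years becomes 480.
print(60 * 2**3)
```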
I am not opposed to values other than /48 for customer connections:
http://www.ietf.org/internet-drafts/draft-hain-prefix-consider-00.txt
(comments?), but I am opposed to claiming there is a threat of running out
if we don't revert to the IPv4 practice of minimal, per-host allocations.
People need a way to switch providers without concern that they will have to
change their subnet plan. Some consistent policy buckets will allow them to
move to like service and avoid the pain of a redesign. Aligning those
buckets with PTR zone files will allow each customer's DDNS to manage the
local appliances and attach itself to the global tree as a single trusted
agent aggregating the morass behind it.
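As a sketch of the PTR alignment point: a /48 falls exactly on a nibble
boundary, so the customer's reverse zone is a single clean delegation under
ip6.arpa (Python's ipaddress module is used here purely for illustration):

```python
# Why nibble-aligned policy buckets matter for reverse DNS: a /48 is
# exactly 12 nibbles, so it maps to one clean ip6.arpa zone cut.
import ipaddress

def ptr_zone(prefix: str) -> str:
    """ip6.arpa zone name for a nibble-aligned IPv6 prefix."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen % 4 == 0, "zone cuts fall only on nibble boundaries"
    nibbles = net.network_address.exploded.replace(":", "")[: net.prefixlen // 4]
    return ".".join(reversed(nibbles)) + ".ip6.arpa"

print(ptr_zone("2001:db8:1234::/48"))
# -> 4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa
```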
In any case, it is wrong for an ISP to assume that the device at the end of
a particular link is an endpoint handset. The upstream radio link is just
that, a link with interesting characteristics, and the device at the end is
just as likely to be a router as an endpoint. It is likewise wrong to assume
that a single subnet is sufficient for a customer, since we know bridging
between dissimilar media is fundamentally broken and media types are
evolving all the time. These are issues that need to be addressed by any
allocation policy, but the FUD simply states that we are wasting space
because we are allocating more than we have in the past.

At the point of evaluation, 64 bits was sufficient to meet the design goals
of the IPv4 replacement, but at the front of the bubble there were concerns
about sufficient hierarchy, so a whole additional 64 bits was given to
routing. Now we find that the greedy routing side is jealous that the host
side gets just as much space and is looking for any reason to grab more.
There is no need to revisit the 64/64 split at this point. Even if we are
wrong, 3/4 of the space is still there for other approaches, and with 60
years to burn through a /4 we would still have a few decades to argue over a
different approach before the current /3 is consumed. If the H-D ratio
policies were changed to require more efficiency from larger organizations,
we would have a few centuries to argue, so we should make that change
quickly and buy ourselves some time to get a century or two of running code
before we get too hung up on micro-managing the customer end of the
allocation.
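The 64/64 split and the headroom numbers above can be sanity-checked with
simple arithmetic (sketch only; the example address is from the 2001:db8::
documentation range):

```python
# Sanity check of the 64/64 split and the headroom argument: the top 64
# bits carry the routing hierarchy, the bottom 64 identify the interface,
# and a /4 still holds an enormous number of /48 customer assignments.
import ipaddress

addr = ipaddress.IPv6Address("2001:db8:1234:5678:20c:29ff:fe0c:47d5")
raw = int(addr)
routing = raw >> 64                # top 64 bits: routing side
interface_id = raw & (2**64 - 1)   # bottom 64 bits: host side
print(hex(routing), hex(interface_id))

slash48s_in_a_slash4 = 2 ** (48 - 4)
print(f"{slash48s_in_a_slash4:.2e} /48s in a single /4")  # ~1.76e13
```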
Tony
>
> Rgds,
> -drc