[ppml] 2005-1:Business Need for PI Assignments

Tony Hain alh-ietf at tndh.net
Wed Apr 27 17:43:34 EDT 2005

Leo Bicknell wrote:
> In a message written on Wed, Apr 27, 2005 at 10:15:20AM -0700, Tony Hain
> wrote:
> > What is absurd is making generalizations about design choices without
> > acknowledging the trade-offs.
> It's equally absurd that those who designed IPv6 to 10 year old
> specifications turn a deaf ear to those who've been operating
> networks ever since.  We've learned a lot of things in the last 10
> years, and many of us don't think it's inappropriate to incorporate
> many of them into the protocol before it's deployed.

No, you haven't learned, but it is reasonable to make adjustments prior to
wide-scale deployment. 

> > long past time to get over it; auto-configuration requires numbers on the
> > order of 60 bits no matter if you choose random numbers or have a central
> > registry like the IEEE. In fact the RFID crowd wants to stuff their 96 bit
> It's statements like this that have operators tune you out completely.
> Autoconfiguration does not require 60 bits.  AppleTalk had
> autoconfiguration circa 1991, with a 8 bit host space.  Other
> failures of the protocol aside, the numbering side of things worked.
> In 8 bits.

For a small number of devices per segment, and with massive issues involving
collisions when segments partitioned and reconnected. I would not want to
reconstruct the number of hours I personally wasted dealing with AppleTalk
failures, and it was a third-order protocol used by a few admins at the time.
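
To put numbers on the 60-bit point, here is a rough birthday-bound sketch
(the host counts are illustrative assumptions, not measured data) of why a
small random ID space collides constantly while a 64-bit interface
identifier effectively never does:

```python
import math

def collision_probability(bits: int, hosts: int) -> float:
    """Birthday-bound estimate of the chance that at least two of
    `hosts` independently chosen random IDs collide in a 2**bits space."""
    space = 2.0 ** bits
    # P(collision) ~= 1 - exp(-n*(n-1) / (2*N)) for n IDs drawn from N values
    return 1.0 - math.exp(-hosts * (hosts - 1) / (2.0 * space))

# AppleTalk-style 8-bit node numbers: a few dozen devices rejoining a
# merged segment are very likely to collide.
print(f"8 bits,  30 hosts:    {collision_probability(8, 30):.3f}")

# 64-bit random interface identifiers: even a million hosts on one
# segment have a negligible chance of collision.
print(f"64 bits, 10**6 hosts: {collision_probability(64, 10**6):.2e}")
```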

> This is the primary problem with the operator <-> IETF interaction.
> More than a few times the IETF has come out with something saying
> "it will never work if you don't do it this way".  The operators
> then point to it up and running on the live network, shrug, and the
> IETF runs back to figure out why the operators don't believe them.

The major problem is that the current crop of operators generally believes
it has all operational knowledge, and that if 'we only keep doing it the way
we already know' things will work fine. Some of us burned a lot of late hours
migrating a collection of random protocols to IPv4 and learned the hard way
that some approaches are just bad ideas. 

> > thingies into an IPv6 address so one could argue that we were short sighted
> > in giving the bits to the routing function and should really be squeezing
> > them back down to 32 bits. In any case if routing can't do the job with 64
> > bits then it is time to find a new routing system.
> One of the lessons operators learned the hard way was that one size
> does not fit all.  Some customers can have a /29 and be happy until
> the end of time, others need a /6 and still want more.

Yes, but the IETF was looking at the operators that were insisting on /128s
per customer and noting that this would lead to another NAT disaster. A /48
is technically sufficient even if it is more than many will need.
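
For scale, the arithmetic behind the /48 boundary is simple (the function
name here is mine; the prefix lengths are the standard IPv6 conventions):

```python
def sites(provider_prefix: int, site_prefix: int = 48) -> int:
    """Number of /site_prefix customer sites in one /provider_prefix block."""
    return 2 ** (site_prefix - provider_prefix)

print(sites(32))  # a common /32 ISP allocation holds 65,536 /48 sites
print(sites(20))  # a large /20 allocation holds over 268 million /48 sites
```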

> The other lesson the operators learned was that by the time you
> realize the problem it's going to be extremely painful to fix it.

But they refuse to acknowledge that doing something different than past
practice creates a different set of problems that are equally painful to fix.

> Indeed, the recent presentation at ARIN illustrates the problem.
> Potential usage for a /4 in our lifetimes.  Given we had 128 bits to
> work with that outcome is a failure.  There's no other way to describe
> it.

On the contrary it is not a failure; it says we hit the mark. We are working
from a /3 and even if we missed by 2x by 2050 we are still within the design
goal. That said, it is still reasonable to change the H-D ratio for large
providers so we are not pushing that limit.
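
The H-D ratio in question is the RFC 3194 host-density measure,
log(used)/log(total); raising the target ratio for large providers raises
the utilization they must show before qualifying for more space. A quick
sketch (0.80 is RFC 3194's suggested threshold; 0.94 is the stricter value
under discussion in RIR policy):

```python
import math

def hd_ratio(used: int, total: int) -> float:
    """RFC 3194 Host-Density ratio: log(used) / log(total)."""
    return math.log(used) / math.log(total)

def usable_at(total: int, target_hd: float) -> int:
    """Assignments a block supports before reaching target_hd utilization."""
    return int(total ** target_hd)

total_48s = 2 ** (48 - 32)  # /48 sites available in one /32 allocation
print(usable_at(total_48s, 0.80))  # assignments before the 0.80 threshold
print(usable_at(total_48s, 0.94))  # assignments before the 0.94 threshold
```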

> Let me repeat, because this isn't a proposal to change something.
> If the operators do exactly what the IETF told them to do we consume a
> /4 in the next 50 years.
> Now, by your own statement, if the routing system can't do the job
> (consuming a /4 in the next 50 years is a failure to me) then we need
> a new routing system.  But this is the IETF's routing system, not
> something the operators came up with that's already being shown to
> fail.

That is just wrong. The existing system was developed to the requirements of
the providers, and through RIR allocation policies it continues to be driven
by the providers. The IETF said there is no technical justification for
anything longer than a /48, and that the ability to switch providers is
important; the ISPs consider that less desirable because they like provider
lock-in. 

> > Much of the insistence on 'doing it the same as IPv4' is in fact a
> > short-sighted approach that explicitly curtails the ability to do new things
> > in the application space. Yes we are bad predictors of the future, but we
> More addresses do not allow you to "do new things in the application
> space".  This is snake oil being sold by the IETF.  I have yet to
> see even a single conceptual idea for IPv6 that cannot be done over
> IPv4, much less one with working code.  From AppleTalk to IPX to
> DECNet to XNS to IP the applications have stayed the same, and have
> not cared one iota about address size.

All of your examples are limited-address-space protocols where the node is
only given a single address. Look at Windows XP and you will find that the
stack has one address for inbound connections and another, randomized one for
use by the web browser to avoid privacy issues from web-site tracking tools.
This is working code, and the same idea is being discussed for other
applications: VoIP, for example, could do what many PBXs do today by
assigning a central 'from' string so that return calls don't go directly to
the employee's desk. 

> The exact details of how the protocol works may change slightly,
> in some cases, but the majority of applications don't care.  One
> of the smart things smart people did years ago was to layer things.
> The application, up at layer 7, doesn't really care what the layer
> 3 network does.  Indeed, with a well written application I can run
> telnet from my freebsd box over IP, XNS, or AppleTalk, and the
> application doesn't even know the difference.  It's all stuffed in
> a library.

You make light of 'well written'. Too many app developers think they are
smarter than the stack and reach down to grab details rather than use the
available abstraction. Yes people can build apps that don't care, but if
apps were really protocol agnostic it would not be such a big deal to do
something as trivial as switch versions of IP. 

> The only way IPv6 "enables new applications" is via more address
> space.  I can't number all the grains of sand in the world today,
> so an application that depends on talking to all of them doesn't
> work.  More addresses make it work.

This is an example of the closed-mindedness that leads to precluding
innovation. Yes, IPv6 provides more addresses. One trivial use of multiple
addresses per interface would be to allow a multi-function server to have
independent addresses per application, so that the ephemeral port ranges
could be shared and firewall configuration could be simplified. Yes, that can
be and is done in IPv4, but only to a limited degree because of the limited
addresses available. Another use might be application limitations based on
the cryptographic authenticity of an address, which requires a substantial
number of bits. Yet another would be auto-configuring consumer routers that
support a wide range of link technologies. The point is we don't know what
might be possible, because everything to date has had a limited number of
bits for identifying hosts on a segment. Rather than force a continuation of
that by insisting on unnecessary conservation, read your own comments above
and realize that even 44 bits is sufficient for the public routing system as
we know it without any changes. 
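
A small illustration of the address-per-application idea (the loopback
addresses stand in for distinct per-service global addresses, and this
assumes a Linux-style stack where all of 127/8 is bindable):

```python
import socket

# Two services on one host, each bound to its own address. Because the
# addresses differ, both can listen on the same port number, and each
# address keeps its own full ephemeral port range for outbound use.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr1 = s1.getsockname()

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.bind(("127.0.0.2", addr1[1]))   # same port, distinct address: no conflict
addr2 = s2.getsockname()

print(addr1, addr2)
s1.close()
s2.close()
```

With one address per service, a firewall rule can match on the address alone
instead of maintaining per-service port lists.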

> > had the foresight to not allocate the entire space up front. We are working
> > with 1/8 of the space right now, so if the current policies prove to be
> > insufficient over the next 50 years we have the opportunity to start over a
> > few more times which should get us well beyond 100 years. As Geoff & David
> That's short sighted.  IPv4 is going to last at least 40 years in
> the end, probably more like 50.  Given the rate at which we learn
> I think it's not unreasonable to expect an order of magnitude
> improvement.  That means we should be looking at a 400 to 500 year
> timeframe.  The computing industry and our knowledge grow exponentially,
> not linearly.

The last IPv4 address will never be issued, because either nobody will care
or nobody will be able to afford it. Given that we are arguing about whether
the lifetime of 1/8th of IPv6 is 50 or 100 years, the 400 to 800 year numbers
don't seem out of line even if we do nothing. We are likely to change the
H-D ratio for large providers, so this whole discussion goes away. 

> > lifetime issues. The driving issue for a fixed size was to allow
> > organizations the freedom to switch providers without having to rebuild
> > their entire subnet plan. See
> I don't know anyone who redoes thier subnet plans today when they
> renumber.  Perhaps that used to occur, but today if someone comes
> to me as a provider and says they have 12 subnets and want to
> renumber into my space, I give them, imagine this, 12 subnets.

So if someone gets a /12 from an ISP that only has a /12 then moves to your
network you will also give them a /12? This sounds like a business
opportunity to be in the position of transient address space allocator. 

> I don't see why the same thing couldn't happen in IPv6.  If someone
> had 12 IPv6 subnets and came to me I would give them 12 new subnets.
> If any renumbering occurred in the past it was due to gross ineffiency
> in their numbering plan.  That's been worked out and we're going
> into V6 with our eyes wide open.

You are thinking about larger organizations that have technical knowledge
and the ability to describe their network. If a consumer came to you with an
auto-configuring router and no concept of IP subnets what would you say?
Since multi-media bridging is an even more brain-dead idea than NAT, you
would need to know if they had power-line, 1394, or any newer non-Ethernet
based media attached to the device. What if the attachment CPE were a cell
phone and the device behind it was a car? How many subnets would an auto
manufacturer build into the chassis? How would a consumer know? The point is
you don't need to preclude those environments because you have more than
enough space to allow them without pre-biasing the system against their
potential deployment. 

> The idea that an ISP would say "oh, you had 12 subnets at your old
> provider, so we're only going to give you 6" is absurd.  As long
> as they don't get in trouble with an RIR an ISP will do whatever
> it can to make a customer happy, after all they are paying the
> bills.

It is not absurd if you don't happen to agree with the other provider on the
interpretation of the RIR policy. 

> > This thread is about PI space. One of the things that is not discussed much
> > is changing the overall routing model from 'everyone has to know everything'
> > to regional knowledge. The routing community is saying they don't want a
> > swamp, but at the same time they don't want to change anything about how
> > they make routing decisions. The business community is saying that a
> Well, I don't know about others.  I'm willing to change, but let me
> start with something the IETF needs to learn.
> Networks are not regional.
> Networks are not regional.
> Networks are not regional.
> Networks are not regional.
> Networks are not regional.
> Networks are not regional.
> Networks are not regional.

Political edicts mean traditional IP topologies are irrelevant. 
Political edicts mean traditional IP topologies are irrelevant.
Political edicts mean traditional IP topologies are irrelevant.
Political edicts mean traditional IP topologies are irrelevant. 
Political edicts mean traditional IP topologies are irrelevant.
Political edicts mean traditional IP topologies are irrelevant.

We don't have any yet, but they are looming.

This is not an IETF driven thing. The IETF is about defining standard
approaches to the problems of the day. Yes they go off the rails from time
to time and try to tell people how to run their networks, but in general all
they do is tell vendors how to build products that will interoperate and
solve the operator problems. 

> It is possible to have a regional network.  The great firewall of
> China and everything behind it is a good example.  However that is
> very much the exception and not the rule.  Businesses are actually
> worse on this point than ISP's.  The fact of the matter is that
> network traffic crosses boarders for reasons that have nothing to
> do with geopolitical boundaries, but all about economics.

Governments have the ability to change the economics if they choose to. So
far the Internet has been given free rein, but the economic powers in the
traditional telecom world are putting pressure on their governments to fix
that problem. All it would take is for major businesses to insist that
the ISPs are being unreasonable in this strict topology aggregation approach
and there would be a significant shift in Internet economics. 

> What's worse is that these parameters change on a daily basis.
> @home went from the top of it's game to nothing.  Cogent built a
> business out of stitching together a lot of smaller networks.  Asian
> countries sent all their bits to the US and back for a long time,
> only recently building local exchanges.

These are all lightweight from the perspective of the ITU. Exchanges and
political number-plan aggregation are a well understood business practice as
are bypass arrangements. Things can be done differently than the traditional
RIR/ISP address management model and still be operationally viable. They
will not scale in the same ways but they can be made to work. From a
political perspective that is all that matters. 

> The whole reason the Internet works today is that it is adaptable.
> Partial routes, full routes, peering here but not there, transit,
> partial transit, they all serve a place.  While I don't know what
> would be the best way to "upgrade" the routing system, I know some
> things that don't work.  The first of them is any expectation that
> a power heirachy will stay in place for any length of time is dead
> wrong.  We've got 5,000 years of recorded history to prove that.
> So building a heirarchial routing system won't work.  Please stop
> trying.

The existing hierarchy model is by the insistence of the ISPs. Routing
protocol stability and memory management have been paramount in leading us
to where we are. This is not some edict from elsewhere, it is homegrown. The
reason you have the options you do today is that I for one insisted that
there was not a 'single core' network that everyone else defaulted to. We
were doing warped things with EGP and early BGP that led to the array of
routing knobs you take for granted today. 

I actually don't think that BGP itself is all that broken. What is broken is
the perception that we need strict provider-based aggregation to scale. My
draft, which I need to refresh in the IETF directory,
shows an approach (and there are others based on geo-political regions) that
allows distribution of the PI deaggregated noise by using exchange points to
realign topology. Yes current business practice routes around exchanges for
the most part. If that were the only path for enterprises to acquire PI
space though, I suspect the drive for PI would win out and force some
rerouting of topology to fit. Yes this is an economics game, but there are
many more parameters than simply what is the shortest fiber path. Given a
choice between provider-lock/renumbering-pain vs. a little bit more for a
diverted fiber run, I am sure businesses would favor the known cost of fiber
over the unknown and potentially lethal costs of yielding to the provider.  

> > context using strict geographic allocations. ISP's don't like either of
> > these because it changes the relationships and perception of roles, but the
> > overall result fits well with existing practice in non-IP networks.
> I note that all the other existing networks are being phased out
> and moved over IP networks at an ever increasing rate.  Frame, ATM,
> Circuits, Telephony, they are all moving over IP packet based
> networks.  While some of the other schemes worked well for a while,
> in the end they will be replaced by something better.

As will IPv6. I have no illusions that IPv6 will actually exhaust the space,
because there will be some other reason to replace it long before that
happens. It is a good protocol, but it is not the end of protocol evolution.

> > driven decisions. Unfortunately there is no trivial technical metric that
> > draws a clean line in the sand about who gets to have a routing slot and who
> > doesn't. Once you acknowledge there is no technical metric the question
> > becomes who's political approach wins.
> Who gets a routing slot is not a technical question.  It has an
> upper bounds defined by a technical limitation (how many routes can
> the system support), but inside that technical limitation it is a
> political and economic problem.  Given that it is political and
> economic, and that the IETF is full of technical people, I recommend
> they stop trying to "solve" that problem.

The IETF is not trying to solve the problem, the RIR's are. The proposal at
the recent ARIN meeting was to use possession of an AS number as the
technical metric for a routing slot. Unfortunately that metric only requires
that you have connections to more than one provider. There is a vast
difference between number of connections and need for a routing slot. 

> Maybe the IETF has some grand vision of the future they haven't
> been able to articulate.  That said, the operator community is
> smart, and from where I sit is looking at IPv6 and laughing.  The
> emperor has no clothes.  It's IPv4 with bigger addresses, nothing
> more, nothing less.  When you have college educated people who have
> 15 years experience running networks looking at the proposal and
> going "that doesn't make a lot of sense" something is seriously
> wrong.

What you have is a bunch of self-appointed demi-gods claiming they know how
networks work when all they really know is the IPv4 Internet they inherited.
They believe that because things developed a specific way in IPv4, we really
should or even need to continue that way for all possible network deployments
in the future. They specifically refuse to step outside their
closed box and realize that there are alternatives that were not possible in
the past. In particular explicit management of device addresses by providers
is a historical artifact of telephony, X.25, F/R, ATM, and IPv4 where
businesses could buy a large enough block to do their own thing but the
average person could not. In the network connected appliance environment we
are heading into you as a service provider should not want to deal with
every power cycle event on every consumer appliance at the customer prem.
You should not care how many different media technologies they deploy
(implying number of subnets) as long as they pay their bills. It will be
easier and cheaper for everyone if ISPs just stop trying to micro-manage
their customers; that only forces masking technologies like NAT, because the
customer in the end will not put up with it.

The Internet was built by tunneling over the telcos that refused to provide
the applications and services that the innovators around the edge were
creating. Innovators can and will tunnel over brain-dead ISPs that try to
restrict the freedom of the edge networks in the future. There is no
technical reason to be more conservative with IPv6 addresses than we already
are, but there may be business differentiation reasons for longer prefixes,
which leads to a need to standardize some smaller buckets so that we avoid
problems when people switch providers. 

