Basic Technical Background and Discussion

Howard C. Berkowitz hcb at clark.net
Tue Jan 21 09:39:10 EST 1997


I agree that the most meaningful participation in the NAIPR list requires
significant background in addressing and routing practice. I respect people
here who know when they don't know.  I also respect people who are
legitimate experts.  To help the first group, here are some notes on things
that might be less familiar, for example, to someone very experienced at
web services but less so with other aspects of Internet service provision.
I'll add references to the bibliography provided by others.  This is
something of a coffee-free brain dump, so I apologize for any editing errors.

Basic Terminology
-----------------

In current Internet practice, the terms "network" and "subnet" are
obsolete.  Instead, address space is allocated in CIDR blocks, whose prefix
length is written as a value following a slash after the address:
192.168.0.0/24 is a "24-bit" prefix corresponding to a single Class C.  The
original IP RFC, http://ds.internic.net/rfc/rfc760.txt didn't have a
concept of classes; every network number was an 8-bit prefix, allowing
only a couple hundred networks!  Remember, this was 1980.  Barely a year
later, it was realized that this was not enough, and classes were born in
http://ds.internic.net/rfc/rfc791.txt

I always regret it, but North American English is a language in which we
drive on parkways and park in driveways.  So when we speak of a "shorter"
prefix such as 192.168.0.0/23, we speak of a block of addresses that can
hold more, not fewer, hosts.
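The prefix arithmetic can be sketched with Python's standard ipaddress
module (a modern convenience that obviously postdates this post; the
addresses are the same illustrative ones used above):

```python
import ipaddress

# A /24 prefix covers 2^(32-24) = 256 addresses (one old Class C),
# while the "shorter" /23 prefix covers twice as many.
print(ipaddress.ip_network("192.168.0.0/24").num_addresses)  # 256
print(ipaddress.ip_network("192.168.0.0/23").num_addresses)  # 512
```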

In traditional subnetting, introduced a few years later in
http://ds.internic.net/rfc/rfc950.txt we start with a classful prefix
(i.e., a /8, /16, or /24 block corresponding to a Class A/B/C), and extend
the prefix to the right, creating more and more prefixes that can hold
fewer and fewer hosts.  This is useful because individual media, such as
LANs, generally need unique prefixes.
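A sketch of that rightward extension, again using Python's ipaddress
module with an illustrative Class C block:

```python
import ipaddress

# Subnetting a classful /24 by extending the prefix two bits to the
# right yields 2^2 = 4 subnets, each holding fewer hosts (64 addresses).
classful = ipaddress.ip_network("192.168.0.0/24")
for subnet in classful.subnets(prefixlen_diff=2):
    print(subnet)
# 192.168.0.0/26
# 192.168.0.64/26
# 192.168.0.128/26
# 192.168.0.192/26
```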

Traditionally, addresses were assigned to organizations as Class A/B/C
networks, and these networks were advertised on global Internet routers.
Individual subnets were not advertised. People spoke of advertising summary
routes that covered all their subnets.  If you think of subnetting as
extending the prefix to the right, summarization moved it back to the left.
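The leftward move can be sketched the same way: a single shorter prefix
stands in for all the subnets carved from it (the /26 here is one of the
illustrative subnets from the earlier example):

```python
import ipaddress

# Summarization moves the prefix boundary back to the left: the /24
# summary route covers every /26 subnet carved out of it.
subnet = ipaddress.ip_network("192.168.0.64/26")
print(subnet.supernet(prefixlen_diff=2))  # 192.168.0.0/24
```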

Some organizations and network providers had multiple contiguous networks
assigned.  The idea of supernetting was introduced in
http://ds.internic.net/rfc/rfc1338.txt as a means of summarizing multiple
summaries, further reducing the number of routes reported.  This was a 1992
RFC intended as a 3-year fix.  It matured into CIDR.  See
http://www.rain.net/faqs/cidr.faq.html and a series of RFCs beginning with
http://ds.internic.net/rfc/rfc1518.txt
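Supernetting in the RFC 1338 sense can be sketched as collapsing
contiguous classful networks into one advertisement (the 192.24.x.0
block is purely illustrative):

```python
import ipaddress

# Four contiguous Class C networks (/24s) summarize into a single /22,
# cutting four global routes down to one.
class_cs = [ipaddress.ip_network(f"192.24.{i}.0/24") for i in range(4)]
print(list(ipaddress.collapse_addresses(class_cs)))
# [IPv4Network('192.24.0.0/22')]
```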

The best table I know of to see how many addresses you'll get for a given
prefix length is in http://ds.internic.net/rfc/rfc1878.txt.
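A few rows of such a table are easy to generate yourself, since the
address count is just a power of two:

```python
# An RFC 1878-style excerpt: total addresses per prefix length.
for prefix_len in range(20, 27):
    print(f"/{prefix_len}  {2 ** (32 - prefix_len):>6} addresses")
```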

Why was this needed?
--------------------

Several reasons.  One was a shortage of assignable prefixes in the Class B
range.  Class B's were too large for many enterprises, but Class C was too
small.  Assigning Class B's because they were convenient "trapped" many
addresses, much as the area codes for Nevada and Rhode Island "trap"
telephone numbers that would be useful to have in Lower Manhattan.

Without getting into BGP details, and certainly not getting into the
conspiracy theories, commercially supported routers started to have memory
and CPU limitations in handling the global routing table, and in processing
it when certain types of changes, some pathological, occurred.

A general operational requirement emerged: short-term router survival
depended on not having too many routes in the global routing table.
Uncontrolled allocation was causing the number of routes to double every
5-9 months, while router power to handle more routes was doubling only
about every 18 months.
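The mismatch compounds quickly; a bit of illustrative arithmetic (using
an assumed 7-month doubling period, a midpoint of the 5-9 month range
above):

```python
# How many times does each quantity double over a given span?
def growth_multiplier(months, doubling_period):
    return 2 ** (months / doubling_period)

months = 36  # three years
routes = growth_multiplier(months, 7)     # routing table size multiplier
capacity = growth_multiplier(months, 18)  # router capacity multiplier
print(f"Routes grow {routes:.0f}x, router capacity only {capacity:.0f}x")
# Routes grow 35x, router capacity only 4x
```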

Why Aggregate into the Big Guys?
--------------------------------

The only method to reduce route growth about which consensus could be
reached was to aggregate advertisements as much as possible.  Since most
organizations did not have physical connections to one another, just as a
small telephone company in rural Texas does not have a physical connection
to a small telephone company in rural Virginia, most organizations in fact
connected to a "backbone," once a formal structure but now a group of large
service providers.

Since most organizations connected to large providers anyway, either
directly or through local/regional providers, the idea of "provider-based
aggregation" emerged.  If smaller organizations could include their route
advertisements in those of a major carrier, and people needed to go through
that major carrier to reach them in any event, growth of the global routing
tables could slow.
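A sketch of why this works, with an illustrative (hypothetical) provider
block and customer suballocations:

```python
import ipaddress

# Customer blocks suballocated from a provider's /16 need no routes of
# their own in the global table -- the provider's one /16 advertisement
# covers them all.
provider_block = ipaddress.ip_network("172.16.0.0/16")
customer_blocks = [ipaddress.ip_network("172.16.10.0/24"),
                   ipaddress.ip_network("172.16.128.0/23")]
for block in customer_blocks:
    print(block, "covered by", provider_block, "->",
          block.subnet_of(provider_block))  # -> True
```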

Several conspiracy theories are usually brought out here:  one that router
manufacturers forced an aggregation strategy that preserved the life of
their products, and one that major providers wanted to "marry" small
clients to themselves.  I will simply wave to the black helicopters and go
on; whether or not there is a conspiracy is a matter of faith.  My personal
position is there is not, but rather a series of decisions made on the fly
to solve growth problems while maintaining compatibility.

As several people have pointed out, there really has been an economic
disincentive for small organizations to get "provider-independent (PI)"
space, which is what the registries allocate.  The registries encourage
small organizations to "borrow" address space from an "upstream" provider,
and actively discourage them from getting PI space.  I believe the underlying
reason has been to encourage aggregation for the stability of the global
routing system, rather than presenting a bar to entry for small ISPs.
Others will differ.

What's the Problem with Provider-Based Aggregation?
---------------------------------------------------

A couple of things.  One, if a small ISP or enterprise used address space
suballocated from the major service provider block, and wanted to change
providers, they (and their customers) would eventually have to change their
numbers to those in the new provider's block.

Two, this model assumes a strict hierarchy of enterprise, to regional, to
national.  What if the top-level provider breaks?  Again without getting
into BGP and operational practice detail, provider-based allocation becomes
messy when a lower-level ISP or enterprise wants redundant connectivity
through two or more major carriers.  Do they get their own addresses that
will then be advertised into the global routing table by both major
carriers?  If they use only one carrier's address space, how does the rest
of the world know they are reachable through the second carrier?

There are no perfect answers to either problem, especially the second.  The
general strategy to deal with the first is to recognize virtually all
enterprises and ISPs will need periodically to renumber for an assortment
of reasons, and to accept this.  Once this is accepted, effort can be
expended to make old and especially new networks "renumbering friendly."
Techniques for doing this have been the focus of the IETF PIER Working
Group, see http://ds.internic.net/rfc/rfc2071.txt and
http://ds.internic.net/rfc/rfc2072.txt .  The crackers of the world have
helped us in a perverse way; most firewalls can do address translation and
avoid the need to renumber many internal addresses in a provider change.

Multihoming really doesn't yet have a clean solution.  There are
workarounds and proposals, but no consensus.  Dealing with multihoming,
however, has other aspects for the small ISP.

Using a personal example, I was very happy, when I had a quadruple heart
bypass, that my surgeon did several per week.  One's chances of survival
are much better in a place that does such operations frequently.

And so it is with operational BGP.  There are nuances to BGP configuration
and troubleshooting, especially in multihoming, that are hard to master
if you don't work with them frequently, follow the appropriate intercarrier
operational mailing lists, etc.  Small providers usually can't justify
staff that has this specialized expertise, and IMHO are better off
outsourcing this to an experienced carrier until they grow to a size when
they can have in-house experts.  It usually works out that they can afford
such staff at roughly the same time they can justify provider-independent space
from the registries under the current guidelines in
http://ds.internic.net/rfc/rfc2050.txt



Howard Berkowitz
PSC International
