[ppml] HD Ratio and scaling issues (was Re: Proposed Policy...)

Owen DeLong owen at delong.com
Wed Feb 23 06:06:04 EST 2005

--On Wednesday, February 23, 2005 9:49 +0000 Michael.Dillon at radianz.com wrote:

>> That's true, but, I'm not at this point convinced that HD ratio
>> is a meaningful solution to that problem.  I have long advocated
>> that there are cases in which an LIR should be able to treat
>> nodes as if they were individual sub-LIRs and justify space to
>> ARIN on that basis.  When I was at Exodus, this was bad enough
>> that we finally succumbed and paid multiple fees to ARIN to make
>> each of our regions a separate LIR instead of being a single
>> organization with a single allocation criterion.  However, I
>> don't believe HD really addresses this issue.  I do believe
>> that it rewards pathological inefficiencies such as the one you
>> describe above.
> First, my comments were not intended to be an explanation
> of why we need HD ratio for IPv4. I mainly tried to address
> one question from Charles to illustrate the effects of
> hierarchy. You seem to think that this illustrates pathological
> inefficiency and prefer to see large numbers of routes
> instead. But since we are talking about scaling issues here,
> having a large number of internal routes is not necessarily
> prudent or efficient in a large network.
OK... Well, since his question was related to your HD ratio proposal,
I assumed your answer was similarly related.  My mistake.
I proposed large numbers of IGP routes (is 625 really a large number
of routes?) as one easy alternative at the scale described.
Realistically, if you scale this much larger than what you
described, most of the described inefficiencies drop into the
noise, which is why I considered it a pathological example.
Address consumption vs. a <1K route delta in an IGP does not
seem like a wise tradeoff to me.
Below, I propose a way to do this without significant (if any)
routing table growth.

> I would really like to see you describe in more
> detail your definition of pathological inefficiency
> contrasted with ordinary inefficiency.
In my experience running several large networks, allocating /29s
to lots of networks with exactly 5 hosts is pretty unusual (3 hosts
is about as bad a scenario as you can concoct, but 5 is still close
to the worst case you can justify).  In the ordinary world, these
would simply be /29s assigned to each customer, and, as long as each
customer needed at least 3 host addresses, you would receive 100%
utilization credit for each /29.  Sure, if you want to aggregate at
the POP level, you still have (as I pointed out) 3 more-specific
prefixes per POP that you are not using.  But, as I also pointed
out, you can avoid that with a minimal penalty in number of routes
(with 25 POPs as in your example, a maximum of 75 additional
prefixes... hardly a scaling issue for any modern IGP unless you
are really determined to run RIP), so I just don't see it as a
likely normal scenario.  Plus, I'd be surprised to see a real-world
POP running only one product, and that product being this
particularly inefficient one (from an address-consumption
perspective).  What is far less pathological is that you could
probably find some other use within that POP for at least one of
those other prefixes (which is enough to solve the problem if you
can use the /28 or /27).
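To make the /29 arithmetic above concrete, here is a small sketch
(my own illustration, not from the thread) of how much of a /29 the
3-host and 5-host cases actually consume:

```python
# Illustrative arithmetic for /29 customer assignments: a /29 holds
# 8 addresses, of which 6 are usable hosts once the network and
# broadcast addresses are excluded.

def prefix_size(prefix_len: int) -> int:
    """Total IPv4 addresses in a prefix of the given length."""
    return 2 ** (32 - prefix_len)

def usable_hosts(prefix_len: int) -> int:
    """Usable host addresses (excludes network and broadcast)."""
    return prefix_size(prefix_len) - 2

size = prefix_size(29)    # 8 addresses per /29
hosts = usable_hosts(29)  # 6 usable hosts

for n in (3, 5):
    print(f"{n} hosts in a /29: {n / size:.1%} of addresses, "
          f"{n / hosts:.1%} of usable hosts")
```

So even the 3-host case fills half the usable space of its /29,
which is why a /29 assigned to any such customer counts as fully
utilized for justification purposes.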

However, even if your true case is actually as pathological as you
state, I would propose aggregating your POPs into topological groups
of approximately 4 or more POPs each.  That way, you could make much
more efficient use of allocations to those POPs and still come out
with fewer prefixes in your routing tables outside of each POP group.
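A back-of-the-envelope sketch of that tradeoff (the 25-POP count and
the 3 unused more-specifics per POP come from the example in this
thread; the group size of 4 follows the suggestion above):

```python
import math

pops = 25
unused_per_pop = 3

# Aggregating at the POP level: one covering prefix per POP is
# visible outside that POP, at the cost of 3 unused more-specific
# prefixes apiece.
per_pop_external = pops                 # 25 external prefixes
per_pop_unused = pops * unused_per_pop  # 75 unused more-specifics

# Grouping roughly 4 POPs per topological group: one covering
# prefix per group is visible outside that group.
groups = math.ceil(pops / 4)            # 7 groups
per_group_external = groups             # 7 external prefixes

print(per_pop_external, per_pop_unused, per_group_external)
```

Grouping trades a little intra-group specificity for a much smaller
external prefix count, which is the scaling concern at issue.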

Point is, the case you described was a relative worst case, and there
are alternatives even in that case.

> Also, I'd like to understand what were the issues that
> you ran into at Exodus. Were they also scaling issues
> in which the ARIN policies simply don't work for larger
> networks?
Not exactly.  The problems at Exodus were also somewhat pathological,
but, basically, each Exodus IDC had lots of direct external peering.
As such, each IDC was practically its own standalone ISP separate from
the rest of Exodus.  Sure, we had peering between our IDCs as well,
and a backbone, but most traffic was IDC<-->External direct.

For a variety of efficiencies, we liked to allocate at least a
/20 (and usually a /18) to each new IDC as it came online.
Due to a variety of issues, we'd have circumstances where we had
to provide customer assignments for a new IDC 3-6 months before
most of the tenants were able to move in, or even before general
sales of space in the facility began.  As a result, we often found
ourselves with 3-month allocations where we had sub-allocated all
of our available space but only assigned 5-10% of some of those
sub-allocations.  We knew that in a month or two we'd have 80%
utilization in those spaces, but they were enough to drag our
overall numbers down to where ARIN didn't want to give us new
space for other new IDCs coming online.

I believe that the current 6 month rule would compensate for
a majority of these issues.  I'm not convinced that the HD
ratio would help at all.
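For readers following along, the HD (host-density) ratio comes from
RFC 3194: HD = log(utilized addresses) / log(total addresses).  A
quick sketch of what an HD-based threshold looks like; the 0.966
value below is purely illustrative, not necessarily the figure in
the proposal under discussion:

```python
import math

def hd_ratio(utilized: int, total: int) -> float:
    """HD ratio per RFC 3194: log(utilized) / log(total)."""
    return math.log(utilized) / math.log(total)

def utilization_threshold(total: int, hd: float = 0.966) -> float:
    """Addresses that must be utilized to meet an HD target.

    Solving hd_ratio(u, total) >= hd for u gives u >= total ** hd.
    """
    return total ** hd

total = 2 ** 12  # a /20: 4096 addresses
needed = utilization_threshold(total)
print(f"Need {needed:.0f} of {total} ({needed / total:.1%}) "
      f"to meet HD 0.966")
```

Because total ** hd grows more slowly than total, the required
utilization percentage falls as the allocation gets larger, which is
exactly the scaling relief the HD ratio is meant to provide.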


If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.
