[arin-ppml] IPv6 Allocation Planning
owen at delong.com
Mon Aug 9 19:14:03 EDT 2010
On Aug 9, 2010, at 2:38 PM, William Herrin wrote:
> On Mon, Aug 9, 2010 at 4:21 PM, Leo Vegoda <leo.vegoda at icann.org> wrote:
>> On 9 Aug 2010, at 2:34, Owen DeLong wrote:
>>> This is an attempt to head off prefix-growth by allowing ISPs to do planning
>>> if they wish.
>> Why are ISPs not able to plan ahead at the moment?
> You're kidding, right?
> The current v6 dogma is that we're going to provide ISPs with exactly
> one allocation to the maximum extent possible, so we want to get that
> one right and/or include reserve slack surrounding the allocation so
> that the netmask expands. That's why we haven't organized things as a
> slow start.
Correct. I find it very interesting that out of one side of your mouth
you express concern about routing table growth, yet, out of the other
side, you say we should return IPv6 to slow start, maximizing the need
to make additional allocations to ISPs as they grow.
> One problem, of course, is that ISPs are used to planning address
> consumption on 6 and 12 month scales, not decades. They have no
> practical experience to guide them with longer range planning.
While this is true, it's a relatively minor problem of education.
Further, decades isn't a fair characterization of my proposal which
stated a 5 year time horizon as a straw-man and asked for input
on what people felt would be the best value. So far, the only other
suggestion I've received was 3 years.
> Making matters worse, v6 allocation and v4 allocation have a
> fundamentally different basis. V4 allocation is host-centric: you
> assign a /32 to a host. V6 allocation is LAN-centric: you assign /64's
> to a LAN. ISPs have experience counting hosts. Counting lans is a
> little different; it confuses the numbers.
Again, not a particularly hard problem to solve with a small amount of
education. If anything, counting "network segments" (it isn't really
counting LANs; that is another mischaracterization) is a whole
lot easier than counting hosts. You don't need to worry about how
many things are in a given segment, just how many segments you
need. It's not as if you didn't have to count those segments before;
you just had to take the additional step of sizing each one to barely
fit its number of hosts.
> More abstractly speaking, the history of long-range planning in
> general is littered with more failure than success. And the successes
> tend to focus more on positioning the entity to right-size rather than
> pre-determining what the right size is.
Largely because attempts to right-size were pinned against a need to
conserve addresses. As things stand, moving that perception forward
into IPv6 is, in my opinion, the greatest danger to good aggregation.
> And lest we forget: IPv6 is not currently a moneymaker nor anticipated
> to soon be a moneymaker, so the funding to support any sort of long
> range planning simply isn't there.
Again, this is an area where we disagree. I know several organizations
with large networks that are most definitely engaged in long-range
planning for IPv6. What I have attempted to propose makes that
long-range planning simpler:
1. Determine the number of end sites served by your largest POP.
2. Round that number up to a nibble boundary.
3. Determine the number of POPs.
4. Again, round that number up to a nibble boundary.
5. Get a prefix large enough to contain the required number of POPs,
each of which has enough /48s for the number of end sites
determined in step 2.
Perhaps this simplicity got lost in the math detail that I included
in the original proposal. Here's an example exercise I hope will
make it clear:
You have 200 end sites served by your largest POP. Rounding
up to a nibble boundary, we get 8 bits (256 end sites per POP).
You have 100 POPs now and expect that to triple over the next
5 years. 300 rounded up to a nibble boundary is 4,096 (12 bits).
We need a total of 20 bits of prefix to number all of our POPs, giving
256 segments to each POP. Our own infrastructure fits well within
the round-up at both levels. To provide /48s to each end site, we
will need to give a /40 to each POP. To have 12 bits worth of /40s,
we will need to get a /28 from the RIR.
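The two rounding steps and the subtraction from /48 can be sketched in a
few lines of Python. This is a hypothetical illustration, not part of the
proposal; it uses plain power-of-16 rounding with no utilization headroom:

```python
def nibble_bits(count):
    """Smallest multiple of 4 bits whose prefix space holds `count` units."""
    bits = 4
    while 2 ** bits < count:
        bits += 4
    return bits

def rir_prefix(end_sites_per_largest_pop, pops):
    """Prefix length to request so every end site gets a /48:
    48 minus (bits for end sites per POP + bits for number of POPs)."""
    return 48 - (nibble_bits(end_sites_per_largest_pop) + nibble_bits(pops))

# The exercise above: 200 end sites per POP, 100 POPs growing to 300.
print(rir_prefix(200, 300))  # 48 - (8 + 12) = 28, i.e. request a /28
```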
Here's a convenient table of nibble boundaries to make rounding easy:

    Min        Max          Bits   Units represented
    1          12           4      16
    13         240          8      256
    241        3,840        12     4,096
    3,841      61,440       16     65,536
    61,441     983,040      20     1,048,576
    983,041    15,728,640   24     16,777,216
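The table can also be encoded directly. Here is a hypothetical lookup
sketch; note that the Min/Max columns build some headroom in below each
power of 16, so for example 241 segments already rounds up to 12 bits:

```python
# (min, max, bits) rows copied from the table above
NIBBLE_TABLE = [
    (1, 12, 4),
    (13, 240, 8),
    (241, 3_840, 12),
    (3_841, 61_440, 16),
    (61_441, 983_040, 20),
    (983_041, 15_728_640, 24),
]

def table_bits(count):
    """Bits of prefix needed for `count` units, per the table's thresholds."""
    for low, high, bits in NIBBLE_TABLE:
        if low <= count <= high:
            return bits
    raise ValueError("count out of table range")
```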
I doubt that any real deployment is likely to get beyond the 16-bit
row of this table. (Remember, you do two lookups in the table
and then add the two bit counts together; no real
math is required.)
As you can see from the table, this really doesn't have to be hard and
will create a relatively small number of prefix sizes while still allowing
allocations to be liberal without being needlessly excessive.
So, for example, a pretty large provider with, say, 10,000 end sites
served by its largest POP and, say, 1,000 POPs expects to
have 10% growth per year for the next 5 years.
10,000 end sites/POP grows to 16,105.
1,000 POPs grows to 1,611 POPs.
Using the table above, we find that we should plan on requesting
16 bits for each POP and 12 bits to represent the number of POPs.
That's 28 bits needed. 48-28 is 20. We should request a /20.
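The growth projection folds into the same arithmetic. A sketch, where the
10% annual rate and 5-year horizon are just this example's assumptions
(rounding the projections up, which is why the site count lands one above
the 16,105 quoted in the text):

```python
import math

def nibble_bits(count):
    """Plain power-of-16 rounding up to a nibble boundary."""
    bits = 4
    while 2 ** bits < count:
        bits += 4
    return bits

def projected(count, annual_growth=0.10, years=5):
    """Compound the current count forward over the planning horizon."""
    return math.ceil(count * (1 + annual_growth) ** years)

site_bits = nibble_bits(projected(10_000))  # ~16,106 end sites -> 16 bits
pop_bits = nibble_bits(projected(1_000))    # ~1,611 POPs       -> 12 bits
print(48 - (site_bits + pop_bits))          # 48 - 28 = 20: request a /20
```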
Quite simple, really. No actual difficult math once the table is
in hand.
If each RIR gives two /20s to EVERY active autonomous system
currently on the internet, we would still only consume 161 /12s
from 2000::/3. As such, I would argue this is still a relatively
conservative consumption of the IP address space.
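The consumption arithmetic behind that claim can be checked mechanically.
A sketch; the number of /20 allocations here is derived from the 161-/12s
figure above, not independent data:

```python
import math

def slash12s_consumed(num_slash20s):
    """How many /12s a given number of /20 allocations occupy.
    A /12 contains 2 ** (20 - 12) = 256 /20s."""
    return math.ceil(num_slash20s / 2 ** (20 - 12))

# 2000::/3 contains 2 ** (12 - 3) = 512 /12s, so 161 /12s is under a
# third of that one /3 -- with the other seven eighths of IPv6 untouched.
print(slash12s_consumed(41_216))  # 161 /12s for ~41,000 /20 allocations
print(161 / 2 ** (12 - 3))        # fraction of 2000::/3 consumed, ~0.31
```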
To prevent this from impacting the routing system, yes, providers
should be discouraged from disaggregating this space, and I believe
that the community is, generally, quite capable of applying that
discouragement. It hasn't been possible in IPv4 because IPv4 could
not support this kind of allocation policy, one which reduces the
number of opportunities to legitimately advertise disparate prefixes.