[arin-ppml] ULA-C and reverse DNS

michael.dillon at bt.com
Mon Mar 22 15:30:30 EDT 2010


> > There is a natural limit on the number of ULA-C prefixes that an
> > enterprise can get. If they only want to route locally in some lab
> > or local infrastructure, then they can get a ULA-C block. Later, if
> > what they have built becomes valuable to the enterprise, they can
> > route that ULA-C block enterprise-wide with confidence that it
> > won't break anything. But the new block will not function
> > enterprise-wide unless they can convince the IT admins to unblock
> > that network in their firewall ACLs. It is common for there to be
> > multiple layers of firewalls internal to an enterprise, and the
> > policies are roughly to block all traffic that is not known and
> > registered in their IT registry.
> >
> How does that pose a limit on the number of blocks they get?

Because it takes effort to make the ULA-C block usable in 
the enterprise. That effort is the limiting factor.

> The process you have described allows a very large enterprise 
> to get a ULA-C block for a lab, use it, tear it down, forget 
> they ever had it and apply for another one 3 months later.  

Not if the RIR policy has some restrictions around that, such as
renewal payments (or even just a renewal process) and membership in
good standing as requirements for second and subsequent ULA-C allocations.

> Lather, rinse, repeat until you actually do manage to burn 40 
> bits worth of address space.

I think you are missing the magnitude of the space available. If
a new block is only requested every 3 months, it will take a long,
long time to burn through the space. Problems that develop that
slowly are prime targets for new policy development.
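
To put rough numbers on that, here is a back-of-the-envelope sketch
(the organization count and request rate below are invented purely
for illustration, not taken from any proposal):

    # Rough arithmetic on the 40-bit ULA-C Global ID space
    total_prefixes = 2 ** 40             # ~1.1 trillion possible prefixes
    orgs = 100_000                       # assumed number of requesting organizations
    requests_per_org_per_year = 4        # one throwaway request every 3 months
    years_to_exhaust = total_prefixes / (orgs * requests_per_org_per_year)
    print(f"{years_to_exhaust:,.0f} years")  # roughly 2.7 million years

Even at that deliberately generous rate, exhaustion is millions of
years away, which is plenty of time for policy to catch up.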

> There is nothing in your proposal to prevent failure to 
> return unused ULA-C and nothing to prevent merely applying 
> for more instead of reusing what you already have.

That's because I haven't made a formal proposal for the global
RIR policy. You are shooting at ghosts and phantoms. This is
just a discussion of some ideas to scope out the thing before
writing another draft and an RIR policy proposal.

> Given our experiences with the IPv4 swamp, I'm inclined to 
> believe that such a system is not in the best interests of 
> the internet community and does not represent good 
> stewardship of the address space.

Am I confused here? Didn't our experiences with the swamp give
us VLSM and CIDR? These are good things, as is the whole ARIN
policy process, which was also driven by experiences in the
swamp. I remember when I could ask for a /25 allocation and get
two adjacent but non-aggregatable swamp /24 blocks instead. That
was due to lack of process and lack of oversight. The swamp was
a good thing. It also drove vendors to better algorithms in the
routers (Patricia tries?) and better hardware (TCAMs).

--Michael Dillon


