[arin-ppml] Routing Research Group is about to decide its scalable routing recommendation

Robin Whittle rw at firstpr.com.au
Sat Dec 19 02:34:26 EST 2009


Hi Leo,

Thanks for the continuing conversation.  You wrote:

>> OK.  While there may be ways of marginally improving the operation of
>> the DFZ, including by improvements to the BGP protocol, the goal of
>> scalable routing is far beyond the modest improvements these might bring.
> 
> As I outlined in another message, there are really three visions of the
> future.  
> 
> A) PA only, you will be aggregated.
> 
> B) PI for everyone, take your addresses with you for every end user.
> 
> C) Extending the current scheme of figuring out who is "worthy" of PI.


> If we could do A, BGP is fine.  

Yes.  But then there's no portability or multihoming unless everyone
uses modified host stacks and apps according to a core-edge
elimination scheme.

Core-edge elimination schemes let applications see a single IP
address (or whatever they use to identify hosts, uniquely, within the
Internet), no matter what physical connection and physical address
the host currently has.  Then all hosts in end-user networks are on
physical addresses of PA prefixes temporarily given to them by their
current choice of ISP.  But they retain their identity when they use
another physical address.  That is portability.  Because they keep
their identity and retain session continuity when they get a
different physical address, this also provides multihoming, along
with the possibility of doing inbound traffic engineering (TE).
This has some
similarity with current approaches to mobility.  However, in these
core-edge elimination schemes, ideally, the identifiers are drawn
from a different namespace than the locators (physical addresses) and
the applications somehow never need to worry about the locators.
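
To make the split concrete, here is a rough sketch in Python (the
names are invented for illustration, not the API of any particular
proposal) of how a host's stable identifier is decoupled from
whatever locator it currently has:

  # Hypothetical identifier/locator split, as core-edge elimination
  # schemes see it (invented names, purely illustrative).

  class HostIdentity:
      def __init__(self, identifier):
          # The identifier is stable for the life of the host; it is
          # what applications and transport sessions bind to.
          self.identifier = identifier
          self.current_locator = None   # PA address from the current ISP

      def attach(self, locator):
          # Called whenever the host gets a new physical (PA) address,
          # e.g. after switching ISPs or moving to another network.
          self.current_locator = locator

  host = HostIdentity("example-id-0001")
  host.attach("2001:db8:aaaa::42")   # PA space from the first ISP
  host.attach("2001:db8:bbbb::42")   # later, a different ISP's PA space
  # Anything keyed on host.identifier survives both changes.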

With a core-edge elimination scheme, all identifiers are "PI" - you
keep them no matter what ISP you use.  Meanwhile, the DFZ keeps on
running BGP over the locators (conventional IP addresses of today,
for instance), and the only routes advertised in the DFZ are those
for ISPs.  The number of ISP routes is assumed not to create too
much of a scaling problem - which I think is reasonable.



> For B, BGP is a lost cause.

BGP on its own, with no additions, won't work - I agree.

However, core-edge separation techniques keep the DFZ (core) to a
small number of routes, while supporting a much larger number of
end-user prefixes which are PI.
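
Core-edge separation proposals such as Ivip and LISP typically do
this with map-and-encap: a router near the sending host looks up the
destination's end-user (edge) prefix in a mapping system and tunnels
the packet to a locator address inside an ISP (core) prefix, so only
the ISP prefixes need to appear in the DFZ.  A rough sketch, with a
toy mapping table and invented names:

  # Rough map-and-encap sketch (invented names - not Ivip or LISP as
  # such).  Edge (end-user) prefixes map to core (ISP) locators, so
  # only the locator prefixes need to be carried in the DFZ.

  import ipaddress

  mapping = {
      ipaddress.ip_network("203.0.113.0/28"):  "198.51.100.1",  # end-user A -> ISP X
      ipaddress.ip_network("203.0.113.16/28"): "192.0.2.1",     # end-user B -> ISP Y
  }

  def locator_for(dest_addr):
      dest = ipaddress.ip_address(dest_addr)
      for edge_prefix, core_locator in mapping.items():
          if dest in edge_prefix:
              # Encapsulate towards the ISP-side locator; the DFZ only
              # needs a route for the locator's (ISP) prefix.
              return core_locator
      return None   # not a mapped edge address; forward normally

  print(locator_for("203.0.113.5"))    # -> 198.51.100.1
  print(locator_for("203.0.113.20"))   # -> 192.0.2.1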


> For C, it depends entirely on where you want to put the curve.

The purpose of the RRG scalable routing project (as I understand it -
I am a participant, not speaking for the RRG itself in any way) is to
enable pretty much anyone who wants or needs multihoming, portability
and/or inbound TE to have it - in a scalable fashion.

"Portability" was not a popular term in the RRG and it still makes
some people's skin crawl . . .  What is really needed is freedom of
choice between ISPs.  In theory you could do this with PA space and
some automated approach to renumbering.  However, even with IPv6,
which was meant to allow this, there is no way this is going to be
practical.  For one thing, it is complex and probably impossible to
identify and change every place in a network where IP addresses
appear.  For another, it would be close to impossible to reliably test
the switch-over without actually doing it - and then finding out the
hard way it doesn't work.

Even if this was possible, it doesn't cover the use of the physical
addresses in configuration files and application code in other
systems.  VPNs are one example.  DNS zone files are another.  And to
achieve multihoming, you would need all sessions to continue the
moment the addresses changed.  HIP and SCTP can do this, but that
involves stack and host changes, and in the case of HIP, IPv6.
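
As a small illustration of how addresses leak into places that would
all have to change, here is a naive sketch which just greps local
configuration files for IPv4 literals.  It is hypothetical and
deliberately simplistic - it cannot see addresses held in other
organisations' firewalls, VPN peers, DNS zones, databases or
application code:

  # Naive sketch: find literal IPv4 addresses in local config files.
  # Real renumbering is far harder - addresses also live in systems
  # you do not control and cannot scan.

  import os
  import re

  IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

  def find_hardcoded_addresses(root="/etc"):
      hits = []
      for dirpath, _dirs, filenames in os.walk(root):
          for name in filenames:
              path = os.path.join(dirpath, name)
              try:
                  with open(path, errors="ignore") as f:
                      for lineno, line in enumerate(f, 1):
                          for addr in IPV4_RE.findall(line):
                              hits.append((path, lineno, addr))
              except OSError:
                  continue   # unreadable file, device node, etc.
      return hits

  for path, lineno, addr in find_hardcoded_addresses():
      print("%s:%d: %s" % (path, lineno, addr))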

So "Portability" is not officially what the RRG was trying to
achieve.  It's only due to the impossibility of making renumbering
work reliably that "Portability" becomes the only way to ensure
freedom of choice between ISPs.


> I put your work in the "B" bucket, and in that context your statement is
> right.  

Yes, if you mean "BGP on its own can't cope, so we need to add
something to the routing and addressing system to achieve our goals,
while keeping the DFZ routing table within reasonable bounds."


> If we can't do that though, the "C" bucket may be the future, in
> which case I'm not sure your statement is accurate.

I am sure we can achieve scalable routing with a core-edge separation
scheme.  C is the current situation and the idea is that we won't
need to be so fussy once there is a scalable form of PI space.


>> I think it should extend to SOHO "networks" even though they might
>> have most of their hosts behind NAT.  For instance, if I am running a
>> business and am concerned about the reliability of my DSL line, I
>> should be able to get a 3G or WiMAX service as a backup, and use my
>> address space on either.  That is a cheap backup arrangement - since
>> there's no need for new fibre or wires.  My address space may only be
>> a single IPv4 address, but if I need it for mail, HTTPS e-commerce
>> transactions, VoIP etc. I would want it to keep working without a
>> hitch if the DSL line or its ISP was not working.
> 
> I really like this concept, but somewhere my statistics professor is
> whispering in my ear.
> 
> It's not entirely clear to me that from a measured uptime perspective
> that a future with a SOHO network with the ability to do the type
> of "backup" you describe is more reliable than a  SOHO network with a
> single upstream.  That is, the ability to have the backup just work
> introduces complexity with its own failure modes, and that may offset
> the redundancy.

I agree - extra complexity could result in overall less reliability
than reliance on a single system which is in fact highly reliable.
I don't clearly remember a time in the last 3.5 years when my DSL
service (Internode, via a Telstra DSLAM and 4km line) has been down.

I was trying to estimate an upper bound for how many end-user
networks the scalable routing solution would need to work with if
there was no mobility.

I think the actual number will be a lot lower than that upper
bound - perhaps a million or so in the long term.

Nor am I arguing that mobility will cover 10 billion devices.  More
likely a few billion.  I was just trying to find an upper bound,
because these various core-edge separation schemes are being
evaluated, in part, on how well they scale to some large number of
end-user network prefixes.

> The average end user does not understand this concept though, so even if
> I am right they may demand redundancy, even if it lowers their uptime
> overall.

Indeed - it would be possible to market a scare campaign of an entire
business physically hanging off a single fibre, or a single twisted
pair of 0.7mm copper wires, and spook millions of people into getting
a backup link and multihomable address space, even if the overall
outcome was more downtime.


>> Can you point to any proposal to replace BGP which could be
>> introduced in a way which provided significant immediate benefits to
>> the early adopters (not just dependent on how many others adopt it,
>> which initially is very low) while also working with the existing system?
> 
> Can I point to a proposal, no.  Can I imagine a new routing protocol,
> which could be run on a single AS, and redistributed in/out of BGP
> at the borders during transition?  Sure.

If you can think of a way of solving the problem by progressively
replacing BGP on a voluntary basis, then 22 December is the deadline
for registering RRG proposals.


>> In the future, if mobility is developed as it should be, there will
>> be billions of devices, typically connected by a flaky long-latency
>> 3G link.  So to say the whole Internet must follow your principle:
> 
> I'm cherry picking a statement here, but I want to pick on the
> mobility technical qualifications.  A "flaky long-latency 3G link"
> seems like a poor point to start.  Given where wireless is and the
> future this seems to me a bit like designing BGP in the days of
> 9600 Baud Modems and assuming 10G wavelengths would have the same
> reliability factors.
> 
> I feel like before any of the things discussed are out in production 5,
> 6, or 7G links will be seen as reliable and "low latency", but perhaps
> I'm optimistic.
> 
> Again, thanks for providing the information in this forum.

Thanks.  Current wireless communication technologies are pretty close
to the physical limits imposed by limited radio spectrum.  Unless
we put a base-station every 20 metres, each device only has a
limited amount of spectrum to use.  The current 3G techniques and the
upcoming 4G, which use OFDM to squeeze every last drop of bandwidth
from the spectrum, are about as good as it gets.
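
The limit I have in mind is the Shannon capacity of the channel -
roughly bandwidth times log2(1 + signal-to-noise ratio) - shared
among all the devices a base-station serves.  A back-of-the-envelope
illustration (the bandwidth, SNR and device-count figures are just
assumptions for the example):

  # Back-of-the-envelope Shannon capacity estimate (illustrative
  # numbers only - not a claim about any particular deployment).
  import math

  bandwidth_hz = 20e6    # assume a 20 MHz carrier
  snr_db       = 15.0    # assume 15 dB average SNR at the handset
  devices      = 50      # assume 50 active devices sharing the cell

  snr_linear = 10 ** (snr_db / 10)
  cell_capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)

  print("Cell capacity: %.1f Mbps" % (cell_capacity_bps / 1e6))            # ~100 Mbps
  print("Per device:    %.2f Mbps" % (cell_capacity_bps / devices / 1e6))  # ~2 Mbps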

It's not like semiconductors or fibre, which can be pushed to
extraordinary lengths, since they are dedicated to a single user.
Wireless depends on limited spectrum shared over some number of
devices, in a noisy environment, in a given area, with crosstalk from
similar systems in other areas.  The devices are moving, so there is
Doppler shift.  They are also moving into areas best served by
other base-stations, so the system needs to keep track of changing
conditions and hand over quickly.

OFDM achieves high throughput by being coupled with forward error
correction, and then scattering the resulting bits over time and
frequency, so a single frequency interferer or single time glitch at
all frequencies doesn't clobber enough bits to stop the FEC from
fully reconstructing the original data.  So these high performance
systems involve unavoidable latency, in order to operate reliably in
the noisy environment.
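
To put a rough number on that built-in latency: the receiver cannot
finish de-interleaving and FEC decoding until the whole interleaved
span has arrived.  A tiny illustration with assumed figures (the
symbol duration and interleaving span are not taken from any
particular standard):

  # Rough lower bound on one-way air-interface latency from
  # interleaving + FEC (all figures are assumptions for illustration).

  symbol_duration_ms       = 0.0714   # assume ~71.4 us per OFDM symbol
  interleaver_span_symbols = 140      # assume interleaving over ~10 ms of symbols
  propagation_and_processing_ms = 0.5 # assume path delay plus decode time

  # Nothing can be delivered to the upper layers until the whole
  # interleaved block has been received and decoded:
  min_latency_ms = (symbol_duration_ms * interleaver_span_symbols
                    + propagation_and_processing_ms)
  print("Minimum one-way latency: %.1f ms" % min_latency_ms)   # ~10.5 ms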

I am keen to get some responses to my critique of putting more
routing and addressing responsibilities into all hosts:

  http://www.firstpr.com.au/ip/ivip/RRG-2009/host-responsibilities/

  - Robin



