[arin-ppml] CGN multiplier was: RE: Input on an article by Geoff Huston (potentially/myopically off-topic addendum)

Owen DeLong owen at delong.com
Fri Sep 16 00:47:26 EDT 2011

Sent from my iPad

On Sep 15, 2011, at 16:36, Chris Engel <cengel at conxeo.com> wrote:

>>> Sure, I'll give you a number of examples that are important to me (i.e. We
>>> actually use NAT to perform these functions today in our Enterprise)....
>>> 1) I want to be able to easily and quickly switch around the hardware that
>>> provides a particular service without changing anything about the external
>>> advertisement of that service on the network level and I don't want to have
>>> to worry about doing any renumbering or re-architecting on my internal
>>> infrastructure when I do so. I even want to be able to take 1 device that
>>> provided 2 services (say SMTP & HTTP) and split them into 2 separate devices
>>> without changing the advertisement of those services.

>> Assign each service a static host address. Move the static host address
>> around with the service. This is separate from the machine address which
>> would stay with the machine.
> Although that method CAN work, it has some downsides that I'd rather avoid...
>         - More IPs to manage (one for each individual service) and more places to manage them (the individual devices rather than the NAT-enabled FW) creates greater management overhead and introduces greater possibility for error.

My experience contradicts your statement. IP addresses are just as easy to manage as port numbers and have the advantage that you don't need to inflict weird port maps on your external clients. If you're not doing weird port tricks, then you're managing just as many IP addresses on the NAT box.

Moving service-based addresses around on servers is pretty trivial and doesn't require coordination between the server team and the networking team that manages the firewall to update the NAT mappings.
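The pattern described above can be sketched in a few lines. This is a minimal, runnable illustration (not anything from the thread itself): the loopback address and port 0 stand in for a real routable service address and service port, and the point is simply that the listener binds to the service's own address rather than the wildcard, so the service address can move between machines independently of each machine's own address.

```python
import socket

# "::1" stands in for a dedicated service address (in practice, something
# routable, e.g. out of 2001:db8::/64); port 0 lets the OS pick a free
# port so this sketch runs anywhere. Both are illustrative values.
SERVICE_ADDR = "::1"
SERVICE_PORT = 0

# Bind the listener to the service address only, NOT the wildcard.
# Moving the service to new hardware then means moving this one address;
# the machine address on either box never changes, and neither does
# anything external clients see.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((SERVICE_ADDR, SERVICE_PORT))
sock.listen(1)
bound_addr = sock.getsockname()[0]  # the service address, not the host's
print(bound_addr)
sock.close()
```

On the server side, splitting two services previously hosted on one box is then just a matter of configuring each service address on a different machine; no client-visible advertisement changes.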

>          - When I move the service IP from one part of my network to another then I need to make sure the routing to the chosen boundary device moves with it. It adds complexity to moving services from one device to another and makes for a messier internal structure to manage. With keeping a static internal IP per device....all I need to do on the network level is make sure I can get connectivity between it and the boundary(s) when I first configure the device....not each time I put/change a service on it....I don't need to worry about doing anything with routing internally...just on the boundary device.

As I said, if you're scattering services around your internal network, that's true. In my experience, most administrators place such services and the hosts that provide them into a common DMZ-like area for a variety of reasons outside of the simple convenience of address portability. Obviously, if you don't do this, you've chosen to complicate your life in other ways and you are correct that it would add complexity to my proposed solution as well.

However, this is an unnecessary self-inflicted injury as far as I am concerned.

>      - It makes it a little more complex for external entities to deal with. Rather than needing to know "I go to X for all services," they now need to remember "I go to X for this service, Y for that service, Z for another," etc.

How is "I go to X, Y, and Z" any more complex than "I go to X:a, X:b, or X:c"?

Besides, wouldn't you use a service host name in DNS anyway and send them to www.foo.com, smtp.foo.com, etc.?
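The DNS point is worth making concrete. A toy sketch of per-service records (the names and documentation-prefix addresses below are illustrative, not from the thread) — a dictionary standing in for the zone, showing that external clients only ever remember names:

```python
# Hypothetical zone data: one AAAA record per service. Clients remember
# names, never addresses or port maps. (foo.com and the 2001:db8::/32
# documentation addresses are placeholders.)
SERVICE_RECORDS = {
    "www.foo.com":  "2001:db8::80",
    "smtp.foo.com": "2001:db8::25",
    "imap.foo.com": "2001:db8::143",
}

def lookup(name: str) -> str:
    """Stand-in for an AAAA query against the zone."""
    return SERVICE_RECORDS[name]

# Splitting SMTP off onto separate hardware is a zero-record change from
# the client's point of view: smtp.foo.com still resolves to the same
# service address, which simply now lives on a different machine.
addr = lookup("smtp.foo.com")
```

Whether services share one box or twenty, the external contract is just the name.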

> Although your proposed solution IS workable....it adds complexity and workload for me....and doesn't really buy me anything of value.

I would argue that properly implemented it reduces complexity and workload vs. your chosen set of self-inflicted injuries, but, I suppose that could amount to a matter of personal preference.

>> Actually you only think it fails closed and it actually does if you get lucky.
>> Often it fails open in a number of different and interesting ways that go
>> undetected. With a straight stateful inspection firewall, you at least know
>> that you need to validate your rules. There are a number of tools for doing
>> so.
>> No matter how much you would like to think that NAT compensates for
>> incompetent administration, it really doesn't.
> I'll just disagree with you here. In my experience, in the vast majority of cases when MANY to ONE NAT fails...it fails closed. If the NAT device doesn't have an entry in its state table to tell it where to route a packet...how does it route the packet? I'm not saying that NAT is a substitute for proper administration, stateful firewalls or auditing your packet rules....but it adds one more layer that has to fail/be bypassed in order for the bad guys to get in. That's exactly why just about every security auditor I have encountered considers it a COMPENSATING control (exact terminology used for security audits). It's like a deadbolt on the door, it's not going to protect against someone coming in the window or through the wall...nor someone tricking you to open the door for them....but it DOES help make the door a bit more secure.

In the vast majority of cases when NAT fails, it fails apparently closed. It is not necessarily actually closed. The fact that it blocks the connections you expected it to does not necessarily mean that it blocks all of the connections you intend to prevent.
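To see both sides of this exchange, here is a deliberately minimal model of a many-to-one NAT state table (an illustration I'm adding, not anything from the thread). It shows the "apparently closed" behavior: an inbound packet is only translated if an earlier outbound flow created a state entry. What it deliberately does not model — ALGs, endpoint-independent mappings, timeout behavior — is exactly where real devices fail open in "interesting ways."

```python
# external port -> (internal address, internal port)
state: dict[int, tuple[str, int]] = {}

def outbound(src_addr: str, src_port: int, nat_port: int) -> None:
    """An outbound flow creates the translation entry."""
    state[nat_port] = (src_addr, src_port)

def inbound(nat_port: int):
    """Unsolicited inbound traffic matches no entry and is dropped.
    This is the whole 'fail closed' argument in one line -- and also
    its limit, since this happy path is all it guarantees."""
    return state.get(nat_port, "DROP")

outbound("10.0.0.5", 51000, 4242)
solicited = inbound(4242)    # matches state, translated to internal host
unsolicited = inbound(9999)  # no state entry, dropped
```

The model blocks what you checked; it says nothing about the paths you didn't.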

The vast majority of security auditors that I have encountered spew this same rhetoric, but, when pressed on the actual realities end up admitting that it's really equivalent to adding a screen door to a bank vault. Anyone who brought the tools to break into the vault will have no problem cutting down the screen door to get to the vault.

One could argue that a locked screen door is somehow a compensating control if someone forgot to close the bank vault. Most physical security experts would laugh at this.

Most of the security professionals I know who do this at a serious level refer to NAT as antithetical to, not additive to, security.

>> Use privacy addresses. These are on by default in Windows 7+ and MacOS X
>> Lion. They can easily be enabled on Linux. In IPv6, this is essentially
>> automatic. If you have the host do a dynDNS update after it generates its
>> privacy address (also straightforward), you get the same log functionality
>> with the added advantage that it does not depend on tight clock
>> synchronization to come up with the correct answer.
> Privacy addresses are a lousy answer as far as I'm concerned. I don't want my internal addresses obfuscated from ME (i.e. changing all the time). I want to be able to look at an internal address and know what it is with a fair degree of confidence..... today, tomorrow, next week, next year. I can do this with RFC 1918 space and NAT very easily today. I can either use static IP address assignments directly, DHCP reservations or simply just long term leases. In order for privacy addresses to have real value in confounding external entities....they'd have to change rapidly enough to make them much more difficult to track internally. For the most part I don't really want to have to worry about dynDNS logs in order to track something at the IP level. With my current setup I really don't have to worry too much about correlating different logs (i.e. NAT/FW, DHCP, dynDNS, etc). I can generally just look in the NAT/FW log and know what device I'm dealing with because the internal IP pretty much doesn't change. If there is any doubt, I can confirm with a quick check of the DHCP log to see who was holding that address at the time in question. It strikes me that with privacy addresses, I've either got a lot more leg work to do in order to track something....or it's not doing a very good job of providing privacy.

I'm not a fan of privacy addressing, either. However, I'm fine with a static address or DHCP-assigned address which does not contain the MAC address but is directly identifiable to the outside world, though not useful for tracking those same hosts when they go to other networks.

Either is a viable option in IPv6.
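The kind of address described here — stable on your own network, no embedded MAC, yet useless for cross-network tracking — is essentially what RFC 7217 later standardized. A rough sketch of the idea (the secret and prefixes below are illustrative values): derive the interface identifier from a hash of the network prefix plus a per-host secret.

```python
import hashlib

# Per-host secret; in a real implementation this is generated once and
# kept stable across reboots. This value is purely illustrative.
SECRET = b"per-host secret key"

def stable_iid(prefix: str) -> str:
    """RFC 7217-style sketch: the interface ID is a hash of the network
    prefix and a host secret -- stable per network, reveals no MAC,
    and differs on every other network the host visits."""
    digest = hashlib.sha256(prefix.encode() + SECRET).digest()
    # Format the first 64 bits as four 16-bit groups.
    return ":".join(digest[i:i + 2].hex() for i in range(0, 8, 2))

home = stable_iid("2001:db8:a::/64")   # documentation prefixes
same = stable_iid("2001:db8:a::/64")   # identical: stable, loggable
away = stable_iid("2001:db8:b::/64")   # different: no cross-network tracking
```

So the administrator keeps a predictable address in the logs, while outsiders learn nothing about the hardware and cannot correlate the host across networks.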

>> Same is true if you use PI space in IPv6.
> I'll grant this one. Although I haven't had any practical experience in obtaining/using PI space yet. It would seem to offer the advantage of not running into numbering conflicts that can happen with RFC 1918 space.

> Again your solutions are not unworkable....and I appreciate them....but NAT doesn't come with the same downsides for me....and the downsides it does come with aren't really downsides for me, because I don't want the functionality it prevents in the first place.

No, NAT comes with entirely different downsides, which, IMHO, far outweigh the minor tradeoffs in the solutions I have proposed. I have significant experience with PI space. My environment has been running on the same PI space for quite some time (look up the original issue date if you like). In addition, I have implemented solutions like this for a number of my SMB consulting clients. 90+% of them have never called me for a problem with their network connectivity or any need to modify the configuration unless they were changing ISPs.

Frankly, it has been successful enough that I have considered making a business model of providing tunnel transit from a couple of data centers to such clients by maintaining a couple of capable routers in colos, but, it's just not really how I want to spend my time.

> If I HAD to go to IPv6 with these solutions I could...but I feel I'd be giving up things I don't want to have to give up. The only thing IPv6 buys me is more space....which I really don't need at this point. More to the point, implementing the sort of NAT in IPv6 that I currently have in IPv4 doesn't break anything for me that isn't currently broken under my IPv4 implementation..... so why is it a problem me wanting it? My first rule with complex systems is that when you have a system that's working pretty much the way you want it to....but you need 1 piece of functionality....change as little as you can about the system to get that 1 piece of functionality working. So if all I need is more address space.....why should I have to change how all these other functions work....that are entirely unrelated to the size of the address pool?

IPv6 buys you the ability to talk to a continually increasing fraction of the internet. Yes, it is currently a small fraction and not growing very rapidly. However, that is changing and it will start changing rapidly in the very near future.

> Why would I want to change the method I use for tracking, the method I use for compensating security controls, the method I use for Privacy, the method I use for advertising services....if all that is working just fine for me right now...and the only thing I want is another 20 IP addresses?

Because at a certain point, it's time to clean up the toxic waste dump in the back yard and sooner is generally better than later?

> That's my core problem with IPv6 right now. Rather than solving the one thing that pretty much everyone agrees is a problem....limited address space....it does that and forces you to do 20 other things differently as well. If they had just doubled the number of octets in IPv4 and called it a day...this debate would have been over 5 years ago....and we'd all be sitting on that internet right now.

Actually, most of the things you are complaining about them changing are things that were inflicted on IPv4 after IPv6 development was well underway, in order to stretch IPv4's life span and allow the internet to survive in a degraded form until IPv6 could be deployed. Unfortunately, we piled hack upon hack until we achieved a combination of limited functionality and lowered expectations that was acceptable to a large enough portion of the user community that we declared victory. Add to that 20 years of network engineers learning everything they know in such an environment, never encountering real internet access, and instead believing that this was as good as it gets (in some cases, even being duped into believing that this is somehow better than actual internet access).

So, in fact, they quadrupled the size of the address, added some autoconfiguration capabilities that didn't exist in IPv4, added IPSEC in a clean and consistent manner, applied some OSPF lessons learned, and cleaned up Multicast using some IPv4 lessons learned.

Otherwise, they (we, to some extent, actually) really don't force you to do anything differently from the way IPv4 worked when IPv6 development began in earnest; they simply chose not to implement what was, even at the time it was done to IPv4, considered an ugly hack and a necessary evil to allow the protocol to survive long enough to be replaced with something clean.
