From chair at ietf.org  Fri Sep 1 15:40:48 2000
From: chair at ietf.org (Fred Baker)
Date: Fri, 01 Sep 2000 12:40:48 -0700
Subject: IAB/IESG Recommendations on IPv6 Address Delegation
Message-ID: <4.3.2.7.2.20000901123010.0243af00@flipper.cisco.com>

Folks:

The RIR community asked the IETF community for advice regarding the assignment of IPv6 prefixes to service providers and edge networks, both with a view to topological address assignment and to multihomed networks. The IPv6 Directorate prepared a statement, which the IESG and IAB have reviewed and approved. This is attached.

I trust that this answers the questions you asked, and serves as a basis for IPv6 deployment in the near term. If you have questions or issues concerning it, I would suggest that you address them to the IPv6 Directorate, copying the IESG and IAB. We intend to publish an Informational RFC in the near future documenting the contents of this note.

Fred Baker

-----------------------------------------------------------------------

IAB/IESG Recommendations on IPv6 Address Allocations
September 1, 2000

Introduction

During a discussion between IETF and RIR experts at the Adelaide IETF, a suggestion was made that it might be appropriate to allocate /56 prefixes instead of /48 for homes and small businesses. However, subsequent analysis has revealed significant advantages in using /48 uniformly. This note is an update following further discussions at the Pittsburgh IETF. This document was developed by the IPv6 Directorate, IAB and IESG, and is a recommendation from the IAB and IESG to the RIRs.

Background

The technical principles that apply to address allocation seek to balance healthy conservation practices and wisdom with a certain ease of access. On the one hand, when managing the use of a potentially limited resource, one must conserve wisely to prevent exhaustion within an expected lifetime. On the other hand, the IPv6 address space is in no sense as precious a resource as the IPv4 address space, and unwarranted conservatism acts as a disincentive in a marketplace already dampened by other factors. So from a market development perspective, we would like to see it be very easy for a user or an ISP to obtain as many IPv6 addresses as they really need without a prospect of immediate renumbering or of scaling inefficiencies.

The IETF makes no comment on business issues or relationships. However, in general, we observe that technical delegation policy can have strong business impacts. A strong requirement of the address delegation plan is that it not be predicated on, or unduly bias, business relationships or models.

The IPv6 address, as currently defined, consists of 64 bits of "network number" and 64 bits of "host number". The technical reasons for this are several. The requirements for IPv6 agreed to in 1993 included a plan to be able to address approximately 2^40 networks and 2^50 hosts; the 64/64 split effectively accomplishes this. Procedures used in host address assignment, such as the router advertisement of a network's prefix to hosts [RFC 2462], which in turn place a locally unique number in the host portion, depend on this split. Examples of obvious choices of host number (IEEE MAC address, E.164 number, E.214 IMSI, etc.) suggest that no assumption should be made that bits may be stolen from that range for subnet numbering; current generation MAC layers and E.164 numbers specify up to 64 bit objects. Therefore, subnet numbers must be assumed to come from the network part.
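[Editor's note: a quick way to see what the 64/64 split leaves a site to work with is to expand a delegated /48 into its /64 subnets. The sketch below uses present-day Python's standard ipaddress module, with the documentation prefix 2001:db8::/48 standing in for a hypothetical delegation; neither appears in the original note.]

    import ipaddress

    # A /48 site prefix under the 64/64 split: bits 48..63 are subnet
    # space, and the low 64 bits are the host ("interface") part.
    site = ipaddress.IPv6Network("2001:db8::/48")    # hypothetical delegation
    print(site.num_addresses)                 # 2^80 addresses in the site
    print(2 ** (64 - site.prefixlen))         # 65536 (2^16) possible /64 subnets
    print(next(site.subnets(new_prefix=64)))  # 2001:db8::/64, the first subnet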
This does not preclude routing protocols such as IS-IS level 1 (intra-area) routing, which routes individual host addresses; it does mean that such routing may not be depended upon in the world outside that zone.

The IETF has also gone to a great deal of effort to minimize the impacts of network renumbering. Nonetheless, renumbering of IPv6 networks is neither invisible nor completely painless. Therefore, renumbering should be considered an acceptable event, but one to be avoided where reasonably possible.

The IETF's IPNG working group has recommended that the address block given to a single edge network which may be recursively subnetted be a 48 bit prefix. This gives each such network 2^16 subnet numbers to use in routing, and a very large number of unique host numbers within each network. This is deemed to be large enough for most enterprises, and to leave plenty of room for delegation of address blocks to aggregating entities.

It is not obvious, however, that all edge networks are likely to be recursively subnetted; an individual PC in a home, or a single cell in a mobile telephone network, for example, may or may not be further subnetted (depending on whether they are acting as, e.g., gateways to personal, home, or vehicular networks). When a network number is delegated to a place that will not require subnetting, therefore, it might be acceptable for an ISP to give a single 64 bit prefix - perhaps shared among the dial-in connections to the same ISP router. However, this decision should be taken in the knowledge that there is objectively no shortage of /48s, and in the expectation that personal, home and vehicle networks will become the norm. Indeed, it is widely expected that all IPv6 subscribers, whether domestic (homes), mobile (vehicles or individuals), or enterprises of any size, will eventually possess multiple always-on hosts, at least one subnet with the potential for additional subnetting, and therefore some internal routing capability. Note that in the mobile environment, the device connecting a mobile site to the network may in fact be a third generation cellular telephone. In other words, the subscriber allocation unit is not always a host; it is always potentially a site.

Address Delegation Recommendations

The RIR communities, with the IAB, have determined that reasonable address prefixes delegated to service providers for initial allocations should be on the order of 29 to 35 bits in length, giving individual delegations support for 2^13 (8K) to 2^19 (512K) subscriber networks. Allocations are to be given in a manner such that an initial prefix may be subsumed by subsequent larger allocations without forcing existing subscriber networks to renumber. We concur that this meets the technical requirement for manageable and scalable backbone routing while simultaneously allowing for managed growth of individual delegations.

The same type of rule could be used in the allocation of addresses in edge networks; if there is doubt whether an edge network will in turn be subnetted, the edge network might be encouraged to allocate the first 64 bit prefix out of a block of 8..256, preserving room for growth of that allocation without renumbering up to a point. However, for the reasons described below, we recommend use of a fixed boundary at /48 for all subscribers except the very largest (who could receive multiple /48s), and those that are clearly transient or otherwise have no interest in subnetting (who could receive a /64).
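[Editor's note: the delegation arithmetic in the recommendation above is easy to check; a rough sketch in Python, illustrative only:]

    # /48 subscriber networks available inside an initial
    # service-provider allocation of /29 .. /35.
    for plen in (29, 32, 35):
        print("/%d holds 2^%d = %d subscriber /48s"
              % (plen, 48 - plen, 2 ** (48 - plen)))
    # /29 holds 2^19 = 524288 subscriber /48s   (512K)
    # /32 holds 2^16 = 65536 subscriber /48s
    # /35 holds 2^13 = 8192 subscriber /48s     (8K)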
Note that there seems to be little benefit in not giving a /48 if future growth is anticipated. In the following, we give the arguments for a uniform use of /48 and then demonstrate that it is entirely compatible with responsible stewardship of the total IPv6 address space.

The arguments for the fixed boundary are:

- Only by having an ISP-independent boundary can we guarantee that a change of ISP will not require a costly internal restructuring or consolidation of subnets.

- To enable straightforward site renumbering. When a site renumbers from one prefix to another, the whole process, including parallel running of the two prefixes, would be greatly complicated if the prefixes had different lengths (depending of course on the size and complexity of the site).

- There are various possible approaches to multihoming for IPv6 sites, including the techniques already used for IPv4 multihoming. The main open issue is finding solutions that scale massively without unduly damaging route aggregation and/or optimal route selection. Much more work remains to be done in this area, but it seems likely that several approaches will be deployed in practice, each with their own advantages and disadvantages. Some (but not all) will work better with a fixed prefix boundary. (Multihoming is discussed in more detail below.)

- To allow easy growth of the subscribers' networks -- no need to keep going back to ISPs for more space (except for that relatively small number of subscribers for which a /48 is not enough).

- To remove the burden from the ISPs and registries of judging sites' needs for address space, unless the site requests more space than a /48. This has several advantages:

  - ISPs no longer need to ask for details of their customers' network architecture and growth plans.
  - ISPs and registries no longer have to judge rates of address consumption by customer type.
  - Registry operations will be made more efficient by reducing the need for evaluations and judgements.
  - Address space will no longer be a precious resource for customers, removing the major incentive for subscribers to install v6/v6 NATs, which would defeat the ability of IPv6 to restore address transparency.

- To allow the site to maintain a single reverse-DNS zone covering all prefixes. If (and only if) a site can use the same subnetting structure under each of its prefixes, it can use the same zone file for the address-to-name mapping of all of them. And, using the conventions of RFC 2874, it can roll the reverse mapping data into the "forward" (name-keyed) zone.

Specific advantages of the fixed boundary being at /48 include:

- To leave open the technical option of retro-fitting the GSE (Global, Site and End-System Designator, a.k.a. "8+8") proposal for separating locators and identifiers, which assumes a fixed boundary between global and site addressing at /48. Although the GSE technique was deferred a couple of years ago, it still has strong proponents. Also, the IRTF Namespace Research Group is actively looking into topics closely related to GSE. It is still possible that GSE or a derivative of GSE will be used with IPv6 in the future.

- Since the site local prefix is fec0::/48, global site prefixes of /48 will allow sites to easily maintain a simple 1 to 1 mapping between the global topology and the site local topology in the SLA field.

- Similarly, if the 6to4 proposal is standardized, migration from a 6to4 prefix, which is /48 by construction (sketched just after this list), to a native IPv6 prefix will be simplified if the native prefix is /48.
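[Editor's note: a sketch of the 6to4 construction mentioned in the last bullet, in Python; the IPv4 address 192.0.2.1 is a documentation address chosen purely for illustration.]

    import ipaddress

    # 6to4: the 16-bit prefix 2002::/16 followed by the site's 32-bit
    # IPv4 address yields a /48 by construction.
    v4 = ipaddress.IPv4Address("192.0.2.1")            # illustrative address
    site = ipaddress.IPv6Network(((0x2002 << 112) | (int(v4) << 80), 48))
    print(site)                                        # 2002:c000:201::/48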
Note that none of these reasons imply an expectation that homes, vehicles, etc. will intrinsically require 16 bits of subnet space.

Conservation of Address Space

The question naturally arises whether giving a /48 to every subscriber represents a profligate waste of address space. Objective analysis shows that this is not the case. A /48 prefix under the Aggregatable Global Unicast Address (TLA) format prefix actually contains 45 variable bits, i.e., the number of available prefixes is 2^45 or about 35 trillion (35,184,372,088,832). If we take the limiting case of assigning one prefix per human, then the utilization of the TLA space appears to be limited to approximately 0.03% on reasonable assumptions. More precisely:

- RFC 1715 defines an "H ratio" based on experience in address space assignment in various networks. Applied to a 45 bit address space, and projecting a world population of 10.7 billion by 2050 (see http://www.popin.org/pop1998/), the required assignment efficiency is log_10(1.07*10^10) / 45 = 0.22. This is less than the efficiencies of telephone numbers and DECnet IV or IPv4 addresses shown in RFC 1715, i.e., it gives no grounds for concern.

- We are highly confident in the validity of this analysis, based on experience with IPv4 and several other address spaces, and on extremely ambitious scaling goals for the Internet amounting to an 80 bit address space *per person*. Even so, being acutely aware of the history of under-estimating demand, we have reserved more than 85% of the address space (i.e., the bulk of the space not under the Aggregatable Global Unicast Address (TLA) format prefix). Therefore, if the analysis does one day turn out to be wrong, our successors will still have the option of imposing much more restrictive allocation policies on the remaining 85%.

- For transient use by non-routing hosts (e.g., for stand-alone dial-up users who prefer transient addresses for privacy reasons), a prefix of /64 might be OK. But a subscriber who wants "static" IPv6 address space, or who has or plans to have multiple subnets, ought to be provided with a /48, for the reasons given above, even if it is a transiently provided /48.

To summarize, we argue that although careful stewardship of IPv6 address space is essential, this is completely compatible with the convenience and simplicity of a uniform prefix size for IPv6 sites of any size. The numbers are such that there seems to be no objective risk of running out of space, of giving an unfair amount of space to early customers, or of getting back into the over-constrained IPv4 situation where address conservation and route aggregation damage each other.

Multihoming Issues

In the realm of multi-homed networks, the techniques used in IPv4 can all be applied, but they have known scaling problems. Specifically, if the same prefix is advertised by multiple ISPs, the routing tables will grow as a function of the number of multihomed sites. To go beyond this for IPv6, we only have initial proposals on the table at this time, and active work is under way in the IETF IPNG working group. Until existing or new proposals become more fully developed, existing techniques known to work in IPv4 will continue to be used in IPv6.
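[Editor's note: for readers who want to check the Conservation of Address Space figures above, the arithmetic fits in a few lines of Python -- a sketch only, using the 2050 population projection cited there.]

    import math

    # H ratio (RFC 1715): log10(objects addressed) / bits available.
    population_2050 = 1.07e10   # ~10.7 billion people
    bits = 45                   # variable bits in a /48 under the TLA prefix
    print(2 ** bits)                            # 35184372088832 /48s (~35 trillion)
    print(math.log10(population_2050) / bits)   # ~0.22 required efficiency
    print(100 * population_2050 / 2 ** bits)    # ~0.03 (percent utilization)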
Key characteristics of an ideal multi-homing proposal include (at minimum) that it provides routing connectivity to any multi-homed network globally, conserves address space, produces routes at least as good as in the previously discussed single-homed case via any of the network's providers, enables a multi-homed network to connect to multiple ISPs, does not inherently bias routing to use any proper subset of those networks, does not unduly damage route aggregation, and scales to very large numbers of multi-homed networks.

One class of solution being considered amounts to permanent parallel running of two (or more) prefixes per site. In the absence of a fixed prefix boundary, such a site might be required to have multiple different internal subnet numbering strategies (one for each prefix length) or, if it only wanted one, be forced to use the most restrictive one as defined by the longest prefix it received from any of its ISPs. In this approach, a multi-homed network would have an address block from each of its upstream providers. Each host would either have exactly one address picked from the set of upstream providers, or one address from each of the upstream providers. The first case is essentially a variant on RFC 2260, with known scaling limits. In the second case (multiple addresses per host), if two multi-homed networks communicate, having respectively m and n upstream providers, then the one initiating the connection will select one address pair from the n*m potential address pairs to connect from and to, and in so doing will select the providers, and therefore the applicable route, for the life of the connection. Given that each path will have a different ambient bit rate, loss rate, and delay, if neither host is in possession of any routing or metric information, the initiating host has only a 1/(m*n) probability of selecting the optimal address pair. Work on better-than-random address selection is in progress in the IETF, but is incomplete.

The existing IPv4 Internet provides an existence proof that a network whose prefix is distinct from those of its upstream providers, and is globally advertised via all of them, permits the routing network to select a reasonably good path within the applicable policy. Present-day routing policies are not QoS policies but reachability policies, which means that they will not necessarily select the optimal delay, bit rate, or loss rate, but the route will be the best within the metrics that are indeed in use. One may therefore conclude that this would work correctly for IPv6 networks as well, apart from scaling issues.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Fred Baker   | 519 Lado Drive
IETF Chair   | Santa Barbara California 93111
www.ietf.org | Desk:   +1-408-526-4257
             | Mobile: +1-805-886-3873
             | FAX:    +1-413-473-2403

From ebert311 at yahoo.com  Tue Sep 5 17:54:46 2000
From: ebert311 at yahoo.com (Brian Lee)
Date: Tue, 5 Sep 2000 14:54:46 -0700 (PDT)
Subject: Search Engines
Message-ID: <20000905215446.8689.qmail@web3002.mail.yahoo.com>

Ok, I've heard about 18 different stories as to why we should or should not adopt HTTP 1.1. The one thing that I haven't heard anything about is search engines. Will search engines also change? If they will, we are in business!!!

Thanks,
Brian Lee

__________________________________________________
Do You Yahoo!?
Yahoo! Mail - Free email you can access from anywhere!
http://mail.yahoo.com/

From tpavlic at netwalk.com  Tue Sep 5 21:56:01 2000
From: tpavlic at netwalk.com (Ted Pavlic)
Date: Tue, 5 Sep 2000 21:56:01 -0400
Subject: poorly thought out HTTP/1.1 mandate
Message-ID: <022901c017a5$9c930f60$0301830a@tednet>

This is my first post on this list; I have only very recently subscribed. Because of this, I must apologize in advance if any of this has already been brought up.

Personally, I disagree with the recent policy changes...

http://www.arin.net/announcements/policy_changes.html

...made by ARIN. I feel that there has not been enough thought given to changes of this magnitude, and I think that the amount of argument in response to these changes (at least on the other groups to which I subscribe) backs me up on that.

These things are causing me the most grief now that I have been forced to use name-based virtual hosts:

* SSL
* TLS (the server- and client-side support of it, or lack thereof)
* FTP virtual hosts
* Microsoft FrontPage Server Extensions
* Old browsers which do not support HTTP/1.1

When I read the policy_changes.html, I get the odd feeling that large broadband ISPs are allocating more and more IPs for residential use and causing web hosting providers to give up many of their IPs. Why are web hosting providers being asked to give up their IPs when they are the ones who make up the Internet to which those residential users connect? By increasing the number of real IPs given out to people who USE the Internet, ARIN is making it more difficult for those who make up the Internet to function!

Rather than regulating us, the web providers, why can't ARIN regulate those ISPs who are allocating huge amounts of IPs? What's wrong with forcing large cable and DSL providers to use the 10./8 class-A and use NAT? While this regulation seems radical, I would argue that it is MUCH less radical than the new regulations being made by ARIN.

Personally I do not feel that large web hosting providers like the company which I represent are being well represented in ARIN. I worry that ARIN is being influenced too much by those who waste IPs rather than organizations who actually need them.

I apologize if all of these points have already been brought up and answered, but I just think that ARIN's recent choices have been ridiculous, and since I've been reading in other groups that many other people agree with me, I really felt that I needed to voice this.

All the best --
Ted Pavlic
NetWalk Communications
tpavlic at netwalk.com

From Brian_Lee at interliant.com  Wed Sep 6 11:07:19 2000
From: Brian_Lee at interliant.com (Brian Lee)
Date: Wed, 6 Sep 2000 11:07:19 -0400
Subject: Search Engines
Message-ID:

A number of people have told us that name based hosting affects placement on search engines. They basically say that search engines do not recognize host headers. Have you heard anything on this, or do you know of any plans for search engines to comply with the new name based hosting policy?

Brian Lee
Interliant Sales
Cobalt Subject Matter Expert
800 266-4000 x5135 (voice)
770 673-2200 (intl)
770 673-2298 (fax)
Visit our Site at http://www.interliant.com
blee at interliant.com

From btorsey at HarvardNet.com  Wed Sep 6 11:37:17 2000
From: btorsey at HarvardNet.com (Torsey, Brian)
Date: Wed, 6 Sep 2000 11:37:17 -0400
Subject: Search Engines
Message-ID: <864FA164044FD4118974009027C236D25F26B7@postal.harvardnet.com>

If this is true, then the search engine folks need to fix the problem that they are creating. (Yo, ARIN ... time to start talking with the search engine folks!!!!)
The policy is a good one. It's not perfect, and it will mean some issues will have to be dealt with. If you wanted everything to be perfect and work as-is out of the box ... why would you want to be an Engineer? :)

Adapt/improve/enhance

Progress does not come without conflict.

Brian Torsey

From maxiter at inetu.net  Wed Sep 6 12:04:27 2000
From: maxiter at inetu.net (Mark)
Date: Wed, 6 Sep 2000 12:04:27 -0400 (EDT)
Subject: Search Engines
In-Reply-To: <864FA164044FD4118974009027C236D25F26B7@postal.harvardnet.com>
Message-ID:

Adaptation? Has anybody considered having ISPs proxy all their dialup customers? That represents significant IP usage. Web hosts are not the only type of organization which requires large numbers of IPs.

FWIW, overall, I do support ARIN's IP restrictions, but I do not support their instantaneous implementation of such policies. Such changes take time, and it seems that time is not something which was taken into account with this policy.

---------------------------------------------------
Mark Rekai - INetU, Inc.(tm) - http://www.INetU.net
Electronic commerce - Web development - Web hosting
Mark at INetU.net - Phone: (610) 266-7441

From btorsey at HarvardNet.com  Wed Sep 6 12:27:42 2000
From: btorsey at HarvardNet.com (Torsey, Brian)
Date: Wed, 6 Sep 2000 12:27:42 -0400
Subject: Search Engines/IP restrictions/policy changes
Message-ID: <864FA164044FD4118974009027C236D25F26B8@postal.harvardnet.com>

Most ISP's at this point use dynamically assigned IP addressing...
each IP address (just like a modem port) being used by the maximum the ISP can get away with (10 customers per modem port is a workable standard). That was one of the first "changes" ARIN made to how other people do business. It created issues for some ISP's, but they either adapted ... or they were history.

As for the policy on IP addresses and Virtual Web hosting ... it did not come out of the blue. Most people I know saw it coming for about a year. It just went from "strongly advised against" to "against policy".

I do agree that it would have been nice if ARIN had worked with the Search Engine folks (Yahoo/Google/Lycos/Hotbot/Etc) to be ready for this change in policy.

Any ARIN folks want to comment?

Brian Torsey

From tpavlic at netwalk.com  Wed Sep 6 13:47:31 2000
From: tpavlic at netwalk.com (Ted Pavlic)
Date: Wed, 6 Sep 2000 13:47:31 -0400
Subject: Search Engines/IP restrictions/policy changes
References: <864FA164044FD4118974009027C236D25F26B8@postal.harvardnet.com>
Message-ID: <04ce01c0182a$88bc2aa0$0301830a@tednet>

> Most ISP's at this point use dynamically assigned IP addressing... each IP
> address (just like a modem port) being used by the maximum the ISP can get
> away with (10 customers per modem port is a workable standard)

There's no reason why ISPs have to use real IP addresses to allocate to their users. A large cable provider could be providing addresses in the 10./8 network to their customers and do NAT.
Changes like that are **MUCH** less radical than the changes recently made by ARIN and would have far fewer consequences.

ARIN is taking away IP addresses from those who need them most -- THE INTERNET. They are taking them away from the Internet and giving them to the people who are using the Internet. As a consequence, it's becoming much more difficult for the Internet to provide services for those using it.

> As for the policy on IP addresses and Virtual Web hosting ... it did not
> come out of the blue.
> Most people I know saw it coming for about a year. It just went from
> "strongly advised against" to "against policy".

The Internet is NOT YET READY for that change, however. ARIN itself, on:

http://www.arin.net/announcements/name_based_hosting.html

cited Internet **DRAFTS** as reference material. Right on those drafts it says in plain view:

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as ``work in progress.''

It is inappropriate to do such a thing.

Secure communications is very big right now. SSL is an important thing and is widely used by many hosts. The nature of SSL does not allow for name-based SSL web hosting. TLS is hardly a solution to all of this. Using HTTP/1.1 over TLS is hardly supported by any webservers and hardly supported by any clients.

FrontPage Server Extensions are big right now. Some web providers depend on them in order to give their clients access to their websites. Without some major modifications to how FrontPage works, FPSE are COMPLETELY INCOMPATIBLE with name-based HTTP/1.1 hosting.

FTP virtual hosting cannot be done name-based.

Other web hosting providers I have spoken to and I all feel that ARIN just did not consider the needs of the providers which host most of the sites on the Internet. One-website companies are fine, and small (<256 host) webhosting providers are fine, but anyone who breaks that 256 mark has a LOT of work to support name-based webhosting.

If this policy change causes a loss of business to ISPs who host thousands of websites, those thousands of websites will be redistributed across the Internet to smaller web hosting providers who still use IP-based webhosting. Instead of one thousand sites going through one IP at a larger webhosting provider, there will be one thousand sites going through one thousand IPs all over the Internet.

That's what makes no sense about this policy change -- it just causes problems and does not effectively SOLVE many (if ANY) problems. It just allows big ISPs to give more IPs to their residential customers WHO DO NOT NEED THEM.

> I do agree that it would have been nice if ARIN had worked with the Search
> Engine folks (Yahoo/Google/Lycos/Hotbot/Etc) to be ready for this change in
> policy.

Personally, I don't see what the big deal about the search engines is. The search engines have an easy change to make -- they need to upgrade their web spiders to use HTTP/1.1 instead of HTTP/1.0. That's easy -- maybe another line of code to spit out a "Host:" line. I hardly think the search engine issue is any big deal.

I don't think that ARIN should be held accountable for the search engines; HOWEVER, I do think that ARIN should be held accountable for the tremendous amount of trouble that name-based webhosting does to the rest of the world.
The Internet still **NEEDS** IP-based webhosting for all of the reasons I mentioned above and I'm sure many more.

It really makes me wonder who's running the show...

All the best --
Ted

From cscott at gaslightmedia.com  Wed Sep 6 14:17:05 2000
From: cscott at gaslightmedia.com (Charles Scott)
Date: Wed, 6 Sep 2000 14:17:05 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To: <864FA164044FD4118974009027C236D25F26B8@postal.harvardnet.com>
Message-ID:

On Wed, 6 Sep 2000, Torsey, Brian wrote:

> As for the policy on IP addresses and Virtual Web hosting ... it did not
> come out of the blue.
>
> Most people I know saw it coming for about a year. It just went from
> "strongly advised against" to "against policy".

Brian:
  Agreed that this policy didn't come "out of the blue". I think what caught most everyone off guard was that it was such a strong policy with so little mention of exceptions, despite the fact that there had been some discussions that exceptions would be necessary. I also think that a strict implementation of the policy, with minimal room for exceptions, would leave Web hosting operations feeling singled out, since there are other significant areas where efficiency can be improved. Perhaps what this comes down to is how it's implemented.
  I wonder if anyone's had any experience yet dealing with this policy in a real allocation request? It would also be interesting to hear if anyone has had experience yet with any providers invoking this policy for downstream Web providers.

Chuck Scott

From maxiter at inetu.net  Wed Sep 6 14:47:56 2000
From: maxiter at inetu.net (Mark)
Date: Wed, 6 Sep 2000 14:47:56 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To: <04ce01c0182a$88bc2aa0$0301830a@tednet>
Message-ID:

Ted hit the issue right on the head, although, as he pointed out himself, there are issues beyond those he mentioned. SSL, FrontPage, FTP, and search engines will all have problems with name-based hosts. These, however, are probably the easier of the problems to address. What resolution is there for hosting providers that use other network devices for proxying, load balancing, rate-shaping, and other QoS functions? These are large-scale and extremely significant issues that may significantly undermine a web host's ability to effectively conduct business. Not to mention, these devices become much harder, more time consuming, and quite possibly more costly to update than other services such as FrontPage and FTP.

Closer to the heart of the problem may be web hosts' limited representation at ARIN policy meetings. This policy, no matter how good or bad it may be, did not come from ARIN the organization; it came from the voting members of ARIN. If web hosts want to see a reversal or revision of this policy, then they must represent themselves at ARIN's coming policy meeting this October.

---------------------------------------------------
Mark Rekai - INetU, Inc.(tm) - http://www.INetU.net
Electronic commerce - Web development - Web hosting
Mark at INetU.net - Phone: (610) 266-7441
From btorsey at HarvardNet.com  Wed Sep 6 14:50:19 2000
From: btorsey at HarvardNet.com (Torsey, Brian)
Date: Wed, 6 Sep 2000 14:50:19 -0400
Subject: Search Engines/IP restrictions/policy changes
Message-ID: <864FA164044FD4118974009027C236D25F26BD@postal.harvardnet.com>

The problem with telling people what would qualify as an exception ... suddenly every request for IP addresses seems to need routable IP addresses for that reason. It's human nature ... people want what they want ... and if you give them a way to get it, all they have to do is lie. I know many of you would be shocked to see that customers/salespeople/etc lie about what they need. (<----sarcasm for those just joining us).

I am looking forward to the Q&A at the ARIN meeting next month. I know I have a lot of "What the #@$#@ were you thinking?" type questions as well.

Brian Torsey

From Clay at exodus.net  Wed Sep 6 15:32:23 2000
From: Clay at exodus.net (Clayton Lambert)
Date: Wed, 6 Sep 2000 12:32:23 -0700
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To:
Message-ID: <200009061932.MAA31259@exoserv.exodus.net>

I think it is important that webhosters feel singled out in this respect. Webhosters that burn thru huge portions of IP address space on relatively few physical servers are not being fair to those webhosters that are attempting to conserve IP address space. It is important to point out, however, that all 'service providers' should be held to the same standard. AppService providers and managed service (including security services) providers should be required to comply with this policy as well. Maybe an accepted standard ratio of physical devices per IP address should be established for each of the service provider scenarios. Such as: for ISPs, a ratio of between 4-to-1 and 25-to-1 users per IP address? And something similar for ASPs and such.
It is very important that search engines do not dictate the standard on this. Search engines could be configured to be just as effective with host headers as with IP addresses.

One other thing: the policy CLEARLY indicates that exceptions are allowed, but that (rightfully so) they are reviewed on a per-case basis. There are known protocols that are not compatible with HTTP/1.1 host headers. These exceptions must be documented.

I do not see a problem with requiring the documentation of these exceptions. It is a small price to pay for the VERIFICATION of efficient IP address utilization. It is as if the webhosting companies want everything handed to them. They do not appear to take the need for efficient utilization seriously. This attitude needs to change. The finite amount of addressable space can be consumed in a short period of time if we allow hundreds of IPs to be consumed for each webhosting server. We cannot allow this to occur.

Clayton Lambert
Exodus Communications

From tpavlic at netwalk.com  Wed Sep 6 16:57:47 2000
From: tpavlic at netwalk.com (Ted Pavlic)
Date: Wed, 6 Sep 2000 16:57:47 -0400
Subject: Search Engines/IP restrictions/policy changes
References: <200009061932.MAA31259@exoserv.exodus.net>
Message-ID: <055601c01845$1ce5b4c0$0301830a@tednet>

> I do not see a problem with requiring the documentation of these
> exceptions. It is a small price to pay for the VERIFICATION of efficient
> IP address utilization. It is as if the webhosting companies want
> everything handed to them. They do not appear to take the need for
> efficient utilization seriously. This attitude needs to change. The finite
> amount of addressable space can be consumed in a short period of time if
> we allow hundreds of IPs to be consumed for each webhosting server. We
> cannot allow this to occur.

I don't think the webhosting companies are as evil as you make them seem. Webhosting companies are faced with various difficulties every day resulting from users "needing" technologies (like FrontPage, NT web servers, ASPs, etc.) that do not necessarily conform to any known standard. It makes it hard enough to support these services... and then ARIN has the nerve to do something which breaks 50 to 100% of them?
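[Editor's note: for readers following the thread, here is roughly what is at stake technically. Under name-based hosting, many sites share one IP address, and the server (or a search engine's spider) learns which site is meant only from the HTTP/1.1 Host header, as in this sketch; the hostname and address are placeholders, not real servers. SSL is the sticking point because the server must choose a certificate before any HTTP headers, including Host, have been sent.]

    import socket

    # An HTTP/1.1 request to a name-based virtual host.  An HTTP/1.0
    # client that omits the Host line gives the server no way to tell
    # which of the co-hosted sites is wanted.
    request = (b"GET / HTTP/1.1\r\n"
               b"Host: www.example.com\r\n"    # the line HTTP/1.0 spiders lack
               b"Connection: close\r\n"
               b"\r\n")
    sock = socket.create_connection(("192.0.2.10", 80))   # placeholder IP
    sock.sendall(request)
    print(sock.recv(4096).decode("latin-1", "replace"))
    sock.close()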
In order to implement such a policy at a serious web hosting provider, a great deal of work needs to be done... and still IP-based web hosting is NEEDED for some MAJOR transactions. The technology just does not CURRENTLY exist to support modern-day web transactions using a name-based paradigm.

Webhosting companies are so upset about this because it makes no sense to hit us first -- and it just adds insult to injury to single us out. In the ARIN policy changes, notice that not only were webhosting providers singled out and required to give away some very valuable IP addresses, but the largest available block of IP addresses that one provider can be allocated has been INCREASED from a /14 block to a /13 block! The justification for this is: "in order to provide the space needed by large ISPs that historically utilize /14s in less than the 3 months' projection period that is described in ARIN's guidelines." The only providers that I can think of who would have such demands would be the larger cable and DSL providers, which are growing faster and faster as the need for broadband residential Internet increases.

True, webhosting providers may be being a **LITTLE** unreasonable in their complaints, but it is NOT reasonable to be willing to just hand out IP addresses to broadband providers like it's trick-or-treat. There are so many other schemes that these providers can use to meet their IP address needs that are much MUCH easier to implement than name-based webhosting.

I think when most modern users of the Internet hear the word "Internet" the first thing that comes to their minds is "the web." Why rip valuable IP addresses away so harshly from the webhosting providers who provide these people with their Internet, in order to give each of them a real IP address that they will never need?

IMHO, give small allocations of IP addresses to ISPs for the case of one-to-one NAT and make them do one-to-many NAT for all of the rest of their users. That's easy to do and most users won't notice a change. However -- it will free up a great deal of addresses for not only webhosters but whoever else would need them.

All the best --
Ted Pavlic
NetWalk Communications
CPT Communications, Inc.
CallTech Communications, LLC

From bross at netrail.net  Wed Sep 6 18:43:18 2000
From: bross at netrail.net (Brandon Ross)
Date: Wed, 6 Sep 2000 18:43:18 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To: <055601c01845$1ce5b4c0$0301830a@tednet>
Message-ID:

On Wed, 6 Sep 2000, Ted Pavlic wrote:

> Webhosting companies are so upset about this because it makes no sense to
> hit us first -- and it just adds insult to injury to single us out.

Webhosting companies were NOT hit first. Several years ago a policy requiring dialup connections to use dynamic IP addresses was implemented. I was working for a fairly large dialup company at the time and we had been using static addresses for customers. Yes, it was a painful conversion, not even so much because of the technology, but in educating users how to configure or re-configure their software. We took a lot of tech support calls when we made the conversion. We had to do a network-wide software upgrade of our dial platforms to support it, but it got done, and I'm happy it was.

Frankly, this thread just sounds like a bunch of excuses for why you don't feel like doing the work to convert over. The policy clearly states that exceptions are available. The policy says _nothing_ about ftp, so requesting addresses for ftp will still be allowed.
ARIN has not stated what the criteria for the exceptions are, but if you look at their track record, you will find that they have been reasonably fair when it comes to implementing new policy; I see no reason they should deviate from that behavior now.

The only area I see that is consuming addresses at an alarming rate without a good reason, and that should get attention first, is cable modems. There still seems to be a perception that cable modem users need static addressing for some reason that escapes me. I have to say that I would much rather see ARIN require dynamic addressing (whether that's dynamically assigned through a PPP or DHCP-like mechanism, or a NAT solution, doesn't matter to me) than pursue the web hosting consumption.

Brandon Ross                                 404-522-5400
EVP Engineering, NetRail                     http://www.netrail.net
AIM: BrandonNR                               ICQ: 2269442
Read RFC 2644!

From tpavlic at netwalk.com  Wed Sep 6 20:11:16 2000
From: tpavlic at netwalk.com (Ted Pavlic)
Date: Wed, 6 Sep 2000 20:11:16 -0400
Subject: Search Engines/IP restrictions/policy changes
References:
Message-ID: <06e901c01860$24fd9720$0301830a@tednet>

> Webhosting companies were NOT hit first. Several years ago a policy
> requiring dialup connections to use dynamic IP addresses was implemented.
> I was working for a fairly large dialup company at the time and we had
> been using static addresses for customers. Yes, it was a painful
> conversion, not even so much because of the technology, but in educating
> users how to configure or re-configure their software. We took a lot of
> tech support calls when we made the conversion. We had to do a network-
> wide software upgrade of our dial platforms to support it, but it got
> done, and I'm happy it was.

The reason why I said webhosting companies were hit first is because the conversion for webhosting companies is a LOT more complex than the conversion for ISPs in the situation you state.

I've only worked for ISPs for five years, but we've ALWAYS distributed dynamic IP addresses to our customers unless they specifically wanted static IP addresses. Using dynamic IP configurations allows for a great deal of flexibility on the ISP end, and I'm sure even before ARIN made that mandate plenty of ISPs were using dynamic IP addresses. Perhaps it was more than five years ago when that policy was made, but I don't ever remember a painful conversion to dynamic IP addresses. About the only thing painful about dynamic IP addresses that I've ever worked with involved using RIP to advertise which new terminal server an IP address popped up on, and that wasn't a very big issue.

Sure -- in the conversion from a static scheme to a dynamic scheme it becomes complicated at the support level getting your customers to change... But converting to name-based webhosting is much more complicated in that certain technologies do not currently exist in web clients as well as servers to support the type of name-based hosting ARIN suggests.

> Frankly, this thread just sounds like a bunch of excuses for why you don't
> feel like doing the work to convert over. The policy clearly states that
> exceptions are available. The policy says _nothing_ about ftp, so
> requesting addresses for ftp will still be allowed. ARIN has not stated
> what the criteria for the exceptions are, but if you look at their track
> record, you will find that they have been reasonably fair when it comes to
> implementing new policy; I see no reason they should deviate from that
> behavior now.

Alright -- so FTP isn't a good complaint...
* How about FrontPage Server Extensions?
* How about SSL?
* What about the damage this does to load balance infrastructures already in place?
* What about non-HTTP/1.1 compliant browsers?

And, as you said, ARIN has been pretty vague about what "exceptions" are.

It's the principle of the thing -- ARIN has bit off more than they should be allowed to chew. They're being influenced by the cable companies (which you yourself speak of later on in your message) and other IP hogs that do not deserve so much credit. Granted, webhosting providers need to have a bigger voice in ARIN and it's their own fault for not having enough of a voice already, but ARIN should not become an organization which greatly favors one organization or another. ARIN should be an organization which supports the better organization of the Internet. The policy change that ARIN has made hardly makes the Internet better for anyone except for those cable companies.

> The only area I see that is consuming addresses at an alarming rate
> without a good reason, and that should get attention first, is cable
> modems. There still seems to be a perception that cable modem users need
> static addressing for some reason that escapes me. I have to say that I
> would much rather see ARIN require dynamic addressing (whether that's
> dynamically assigned through a PPP or DHCP-like mechanism, or a NAT
> solution, doesn't matter to me) than pursue the web hosting consumption.

Every cable provider I have used and evaluated has used DHCP, but they are giving out real Internet addresses, which makes no sense to me. In my opinion, they should be using NAT near their NAP and handing out 10./8 (or even 172.16./12! That'd be plenty!) addresses with DHCP to their customers. It would be EASY to convert to this sort of configuration AND would prevent customers from abusing their access by setting up servers. People who want static IPs could request them, and one-to-one NATs could be set up.

Now the argument against that would be that NAT adds far too much latency... But every cable provider I use already does such a poor job of managing the HUGE bottleneck at their NAP that I think *I* wouldn't notice a difference. And if that was a concern, set up multiple NAT devices and maybe even use transparent load balancing and transparent proxying to make things faster.

The point is -- cable companies already **SHOULD** be doing at least half of the things mentioned above in the last couple of paragraphs. If they did that in the first place there would be FAR less of an IP problem on the Internet today. EVERYTHING in the above-mentioned paragraphs is already being done in other organizations where efficiency and speed are important. None of the above changes would require ANY *NEW* technology to be developed. It would be easy to implement ANY and ALL of the above-mentioned changes. It would probably INCREASE the performance of cable Internet providers to try some of those changes.

I just don't understand the trouble with pursuing the regulation of the gratuitous and gluttonous allocation of IPs by cable companies rather than the NECESSARY allocation of IPs by webhosting providers. HTTP/1.1 was developed to make certain transactions easier and to help lessen the IP load on the Internet... but HTTP/1.1 is still very new. We're just not ready to use it yet.
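[Editor's note: to put rough numbers on "that'd be plenty" -- the RFC 1918 private blocks are 10/8, 172.16/12, and 192.168/16, and their capacities are easy to tabulate. An illustrative sketch only:]

    import ipaddress

    # Capacity of each RFC 1918 private block, as a NAT'd cable plant
    # might use internally.
    for block in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
        print(block, "->", ipaddress.IPv4Network(block).num_addresses, "addresses")
    # 10.0.0.0/8 -> 16777216 addresses
    # 172.16.0.0/12 -> 1048576 addresses
    # 192.168.0.0/16 -> 65536 addresses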
All the best --
Ted

From bross at netrail.net  Wed Sep 6 23:59:44 2000
From: bross at netrail.net (Brandon Ross)
Date: Wed, 6 Sep 2000 23:59:44 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To: <06e901c01860$24fd9720$0301830a@tednet>
Message-ID:

On Wed, 6 Sep 2000, Ted Pavlic wrote:

> The reason why I said webhosting companies were hit first is because the
> conversion for webhosting companies is a LOT more complex than the
> conversion for ISPs in the situation you state.

I don't agree...

> I've only worked for ISPs for five years, but we've ALWAYS distributed
> dynamic IP addresses to our customers unless they specifically wanted
> static IP addresses.
> ...
> Sure -- in the conversion from a static scheme to a dynamic scheme it
> becomes complicated at the support level getting your customers to
> change...

Like I said. In that case it wasn't so much the technology that was the problem, it was the support.

> But converting to name-based webhosting is much more complicated in that
> certain technologies do not currently exist in web clients as well as
> servers to support the type of name-based hosting ARIN suggests.

This is no different than the conversion to dynamic dialup IPs. There were several older clients in the market that didn't support dynamic IPs; they were instantly obsoleted by the change, and technical support had to handle the load. The server support exists for plain old HTTP; the set of server software that doesn't support it will be obsoleted by the change. That's a normal effect of working in technology: if you don't continue to improve and adapt, you won't be around long.

> Alright -- so FTP isn't a good complaint...
>
> * How about FrontPage Server Extensions?
> * How about SSL?

Those sound like perfectly reasonable and acceptable exceptions to the policy. (Actually, if I had my way, and if what I understand about FrontPage is accurate, I'd have little sympathy: when a protocol is designed outside of the standards process, it deserves to be broken. Of course, I fully understand the business constraints behind it and know that that's not a reasonable course of action.)

> * What about the damage this does to load balance infrastructures already
> in place?

They can adapt. There are many load balancing devices on the market that can look deeper into a TCP session and load balance, traffic shape, or traffic redirect based on this sort of information. Again, progress can't be made without breaking a few things at some point.

> * What about non-HTTP/1.1 compliant browsers?

They just need an upgrade. It should be easy enough to identify a non-compliant browser and send an informational page directing the user to an upgrade site.

> And, as you said, ARIN has been pretty vague about what "exceptions" are.

Yes, but I think they have to be. As soon as exceptions are documented, all of a sudden everyone needs the exception.

> It's the principle of the thing -- ARIN has bit off more than they should
> be allowed to chew.
> They're being influenced by the cable companies (which you
> yourself speak of later on in your message) and other IP hogs that do not
> deserve so much credit. Granted, webhosting providers need to have a bigger
> voice in ARIN and it's their own fault for not having enough of a voice
> already, but ARIN should not become an organization which greatly favors one
> organization over another. ARIN should be an organization which supports the
> better organization of the Internet. The policy change that ARIN has made
> hardly makes the Internet better for anyone except for those cable
> companies.

You've already mentioned the participation issue so I won't bring that up again. I do want to make it clear that I don't believe it's the cable companies' fault particularly, just that I agree efforts to conserve address space should be focused on other areas first.

> > The only area that I see that is consuming addresses at an alarming rate
> > without a good reason that should get attention first is cable modems.
> > There still seems to be a perception that cable modem users need static
> > addressing for some reason that escapes me. I have to say that I would
> > much rather see ARIN require dynamic addressing (whether that's
> > dynamically assigned through a PPP or DHCP like mechanism, or a NAT
> > solution doesn't matter to me) than pursue the web hosting consumption.
>
> Every cable provider I have used and evaluated has used DHCP, but they are
> giving out real Internet addresses, which makes no sense to me.

I should be a bit more clear. Yes, they use DHCP, but as of the last time I checked (and it was admittedly a while ago) they assigned the same address to a customer all the time.

To pick on the largest: @Home, according to whois, is allocated 35 /16's. According to their web page they just passed 2 million subscribers in August. If my math is correct, that's almost 2.3 million addresses in all. To your point, the largest of the web hosting companies, Verio, has about half a million web sites. Assuming they are all hosted on individual IPs, if some sort of dynamic addressing at @Home saved only half of their allocation, that would be twice as good as requiring Verio to put ALL of their web sites behind a single IP, which seems quite unlikely anyway because of the exceptions you've mentioned.

Brandon Ross 404-522-5400
EVP Engineering, NetRail http://www.netrail.net
AIM: BrandonNR ICQ: 2269442
Read RFC 2644!

From tpavlic at netwalk.com Thu Sep 7 01:27:56 2000
From: tpavlic at netwalk.com (Ted Pavlic)
Date: Thu, 7 Sep 2000 01:27:56 -0400
Subject: Search Engines/IP restrictions/policy changes
References:
Message-ID: <007801c0188c$613f4b80$0301830a@tednet>

> Like I said: in that case it wasn't so much the technology that was the
> problem, it was the support.

My argument now is that changing to name-based hosting and suggesting webhosting providers use technology which has expired in IETF DRAFT form is not appropriate, because the technology just isn't ready. When the conversion from static to dynamic IPs needed to be made, the technology DID exist and it was just a matter of upgrading to it.

> This is no different than the conversion to dynamic dialup IPs. There
> were several older clients on the market that didn't support dynamic IPs;
> they were instantly obsoleted by the change and technical support had to
> handle the load. The server support exists for plain old HTTP; the set of
> server software that doesn't support it will be obsoleted by the change.
> That's a normal effect of working in technology: if you don't continue to
> improve and adapt, you won't be around long.

I just have to argue that the "web" is made up of much more than just HTTP. My "web" consists of HTTP, HTTPS, FTP, and all of the enhancements to each one of those (like FrontPage and the various schemes which help make web servers run more smoothly). If HTTP servers don't support HTTP/1.1, I definitely think that those servers should be upgraded. However, even the most upgraded HTTP server is not going to support SSL -- which seems to be a very important thing in modern-day web commerce -- with name-based hosting. That same server is going to require a great deal of extra work to support FTP and FrontPage and who knows what else. A webhosting provider with thousands of clients, all of whom have varying needs, is crippled by having to switch to name-based hosting.

Necessity is the mother of invention... and I do agree that there needs to be more invention on the Internet to support the great IP needs... but I do not think it is right for ARIN to create sudden necessity. I feel it would be better for ARIN to work with the IETF to develop new technologies which make this conversion possible and *THEN* force the changeover. Right now it feels like ARIN is pushing webhosting providers over the edge of a cliff and only giving a parachute to those who specifically ask for one and have good enough reasons as to why they need help to keep from perishing after the fall.

> > * How about FrontPage Server Extensions?
> > * How about SSL?
> Those sound like perfectly reasonable and acceptable exceptions to the
> policy. (Actually, if I had my way, and if what I understand about
> FrontPage is accurate, I have little sympathy; when a protocol is designed
> outside of the standards process it deserves to be broken. Of course, I
> fully understand the business constraints behind it and know that that's
> not a reasonable course of action.)

I agree that FrontPage doesn't (in theory) deserve much influence in this matter. Microsoft and Ready-To-Run Software have done a horrible job with that software. Even without name-based webhosting, FrontPage causes a plethora of problems. I've been on MS and RTR's tails for years about this software, but I'm often only placated by a VP or ignored completely. However -- as you say -- you understand how impossible it would be for me to tell my users that I just can't support FrontPage anymore because it doesn't meet ARIN standards. Rather than MS getting a great deal of complaints, I'm sure I would. I'd upgrade completely to WebDAV, but that would force my users to make their websites independent of the FrontPage extensions, which would still be a bad idea.

> > * What about the damage this does to load balance infrastructures already in
> > place?
> They can adapt. There are many load balancing devices on the market that
> can look deeper into a TCP session and load balance, traffic shape, or
> redirect traffic based on this sort of information. Again, progress can't
> be made without breaking a few things at some point.

That's true. Really, I think IP (even v6) needs to be modified to make these sorts of things a lot more doable. Another layer of abstraction which would carry host information could be added, which would make all TCP and UDP services available on a name-based level. Already policy routing exists, but I think that even policy routing needs to be able to look closer at a packet to decide exactly what kind of packet it is.
I don't think that processing the current information being passed on the Internet could be done fast enough to provide efficient name-based load balancing/etc. A standard needs to be made for name-based transactions. Rather than making HTTP/1.1 smarter and dragging the rest of the services along with it, it would be better to reform the whole host-to-host transport layer to allow all application protocols to work on a name-based paradigm.

> > * What about non-HTTP/1.1 compliant browsers?
> They just need an upgrade. It should be easy enough to identify a
> non-compliant browser and send an informational page directing the user to
> an upgrade site.

It is easy enough -- and that's being done. Apache's "ServerPath" virtual host matching provides a temporary solution that allows those browsers to browse those websites without sending "Host:" headers, as long as the website doesn't link to any absolute or "/...." pages. Every link must be VERY relative.

> > And, as you said, ARIN has been pretty vague about what "exceptions" are.
> Yes, but I think they have to be. As soon as exceptions are documented,
> all of a sudden everyone needs the exception.

I just don't feel much research was done. ARIN should have been able to provide more information. The thing is -- medium-to-large ISPs will survive the ARIN changes. Those medium-to-large ISPs can afford the research and development necessary to provide grade-A web hosting. Smaller and less experienced ISPs will suffer. Like an Internet tax, I feel that the ARIN policy changes may have been too much influenced by those interested in the proliferation of big business and the death of small business.

> > Every cable provider I have used and evaluated has used DHCP, but they are
> > giving out real Internet addresses, which makes no sense to me.
> I should be a bit more clear. Yes, they use DHCP, but as of the last time
> I checked (and it was admittedly a while ago) they assigned the same
> address to a customer all the time.

My current cable provider often provides the same IP to each user, but that's just the nature of DHCP. Once a user's DHCP lease is up, they release their IP and ask for a new one. Usually at that time they get the same IP back. That's just one of the fun things about DHCP. I know of many people who use dynamic DNS updates to advertise their new IPs whenever they get one. That allows them to have named websites on the Internet without paying for a static IP (even though this is against cable policy).

> To pick on the largest: @Home, according to whois, is allocated
> 35 /16's. According to their web page they just passed 2 million
> subscribers in August. If my math is correct, that's almost 2.3 million
> addresses in all. To your point, the largest of the web hosting
> companies, Verio, has about half a million web sites. Assuming they are
> all hosted on individual IPs, if some sort of dynamic addressing at @Home
> saved only half of their allocation, that would be twice as good as requiring
> Verio to put ALL of their web sites behind a single IP, which seems quite
> unlikely anyway because of the exceptions you've mentioned.

You're right -- 35 /16's is just under 2.3 million addresses. And expanding on your point, a GREAT deal of the webhosting providers who are affected by the ARIN changes host *MUCH* fewer than the 500k websites you mentioned Verio provides. Will it really provide many more addresses to the world to pick on the webhosting providers?

Further expanding... @Home is a large provider, but is one of MANY.
There is a greater threat of IPv4 addresses being wasted by providers like @Home than by webhosting providers. I should hope issues like these are brought up in October.

All the best --
Ted

From cscott at gaslightmedia.com Thu Sep 7 09:02:17 2000
From: cscott at gaslightmedia.com (Charles Scott)
Date: Thu, 7 Sep 2000 09:02:17 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To:
Message-ID:

On Wed, 6 Sep 2000, Brandon Ross wrote:

> > * What about non-HTTP/1.1 compliant browsers?
>
> They just need an upgrade. It should be easy enough to identify a
> non-compliant browser and send an informational page directing the user to
> an upgrade site.

Brandon:

One thing to keep in mind here is that the relationship between an ISP and a dial-in customer is very different from the one between a Web provider and those browsing its sites. In the case of the dial-in user, there is only one person to fix. The Web provider needs to deal with everyone who can't properly access or use the Web site. It's also a necessity that an ISP provide technical support for dial-in customers to ensure they can connect to and use the Internet. The Web provider is not in a position to provide the level of technical support to Web site users that would be required for browser updates. In addition, while the Web site can suggest that a user update their browser, there are a large number of very non-technical users who either won't or simply can't deal with updating their browsers, and because they are so non-technical they would tend to blame whomever suggested they make a change if something goes wrong with the update.

The most important thing is the perspective of the Web site owner. In many cases site owners don't particularly care if 5% or 10% of users can't use their site, and in some cases where the site uses special plug-ins or advanced browser features they may be happy with only 50% of users being able to appreciate their site. However -- and believe me, I know from experience -- the owners of commercial and E-Commerce sites can absolutely panic when they hear of a single user who can't access and use their site. I don't know what the percentages of old browsers in use across the network are (perhaps someone can point us to some estimate of 1.0 browsers), but I do know that we see some pretty old ones come into our sites. For these reasons, some Web site owners are anal about compatibility. Obviously at some point everyone will have to accept some problems with clueless users who will never update their browsers, and I'd think the percentages of those users may depend somewhat on the type of site. It would seem that the Web provider and site owner are in a better position to make this decision based on their needs and those of their users.

(Taking another step up the soapbox.) So, it seems that every aspect of this whole debate is more complex than it may seem on the surface, because everyone has their own limitations, needs and expectations. It brings me to the thought, as things get ever more complex, that IP conservation needs to be handled more and more on an individual basis in cooperation with those who are providing the allocations. I guess this is contrary to my earlier comments about ARIN providing more detail on Web hosting exceptions and is contrary to the direction of making more specific policy regarding address utilization.
I wonder, if the overall policy were redirected toward a flexible and cooperative relationship between providers and consumers of IP address space, whether it would be possible to receive more cooperation in conserving addresses. Knowing how much work and detail can be required to verify compliance with specific utilization policy, it would seem that a similar amount of time spent interactively working the situation -- and what can and can't be done in a particular case -- could be more productive.

Chuck Scott

From bross at netrail.net Thu Sep 7 14:59:42 2000
From: bross at netrail.net (Brandon Ross)
Date: Thu, 7 Sep 2000 14:59:42 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To: <007801c0188c$613f4b80$0301830a@tednet>
Message-ID:

On Thu, 7 Sep 2000, Ted Pavlic wrote:

> > Like I said: in that case it wasn't so much the technology that was the
> > problem, it was the support.
>
> My argument now is that changing to name-based hosting and suggesting
> webhosting providers use technology which has expired in IETF DRAFT form
> is not appropriate, because the technology just isn't ready.

I'm not sure exactly what you are referring to, but RFC 2068 in section 5.1.2 certainly seems to describe the method of sending the hostname with the GET request. RFC 2068 is on a standards track. Am I missing something?

> When the conversion from static to dynamic IPs needed to be made, the
> technology DID exist and it was just a matter of upgrading to it.

Agreed.

> > This is no different than the conversion to dynamic dialup IPs. There
> > were several older clients on the market that didn't support dynamic IPs;
> > they were instantly obsoleted by the change and technical support had to
> > handle the load. The server support exists for plain old HTTP; the set of
> > server software that doesn't support it will be obsoleted by the change.
> > That's a normal effect of working in technology: if you don't continue to
> > improve and adapt, you won't be around long.
>
> I just have to argue that the "web" is made up of much more than just HTTP.
> My "web" consists of HTTP, HTTPS, FTP, and all of the enhancements to each
> one of those (like FrontPage and the various schemes which help make web
> servers run more smoothly).

So perhaps we have a semantic problem here. To me, the term "webhosting" in ARIN's policy means plain HTTP -- not HTTPS, FTP, or anything else -- but I do see your point. I would suggest re-wording the policy to say HTTP hosting.

> Already policy routing exists, but I think that even policy routing needs to
> be able to look closer at a packet to decide exactly what kind of packet it
> is. I don't think that processing the current information being passed on
> the Internet could be done fast enough to provide efficient name-based load
> balancing/etc.

Well, part of the problem is that the information needed isn't, and really can't be, neatly contained in a single packet; you have to capture the first x packets in a session to capture things like the URL and whatnot, which means doing some TCP spoofing. However, the technology is there and is getting better by the minute.

> > > Every cable provider I have used and evaluated has used DHCP, but they are
> > > giving out real Internet addresses, which makes no sense to me.
> > I should be a bit more clear. Yes, they use DHCP, but as of the last time
> > I checked (and it was admittedly a while ago) they assigned the same
> > address to a customer all the time.
>
> My current cable provider often provides the same IP to each user, but that's
> just the nature of DHCP.
>
> Once a user's DHCP lease is up, they release their IP and ask for a new one.
> Usually at that time they get the same IP back. That's just one of the fun
> things about DHCP.

I'm well aware of how DHCP works. The point that I was trying to make is that, at least the last time I looked, there was a 1-to-1 ratio of IPs to cable customers, thereby guaranteeing that they would always get the same address.

> You're right -- 35 /16's is just under 2.3 million addresses.
>
> And expanding on your point, a GREAT deal of the webhosting providers who
> are affected by the ARIN changes host *MUCH* fewer than the 500k websites
> you mentioned Verio provides. Will it really provide many more addresses to
> the world to pick on the webhosting providers?

I do think the savings available by using host-based webhosting are significant, but I'm pointing out that some flavor of dynamically addressing cable modems would provide even more savings.

> Further expanding... @Home is a large provider, but is one of MANY. There is
> a greater threat of IPv4 addresses being wasted by providers like @Home than
> by webhosting providers.

I agree, but in all fairness, there are many web hosting companies as well. I point out the largest and assume that they come close to representing the rest of the industry; I could very well be incorrect.

> I should hope issues like these are brought up in October.

I'm sure they will be. Better yet, go to the meeting and make sure they are.

Brandon Ross 404-522-5400
EVP Engineering, NetRail http://www.netrail.net
AIM: BrandonNR ICQ: 2269442
Read RFC 2644!

From bross at netrail.net Thu Sep 7 15:04:52 2000
From: bross at netrail.net (Brandon Ross)
Date: Thu, 7 Sep 2000 15:04:52 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To:
Message-ID:

On Thu, 7 Sep 2000, Charles Scott wrote:

> I don't know what the percentages of old browsers in use across the
> network are (perhaps someone can point us to some estimate of 1.0
> browsers), but I do know that we see some pretty old ones come into our
> sites.

That is really the core of the discussion. It would be quite helpful if some large webhoster out there could do a study to determine the number of non-1.1 browsers still in use. I don't work for a large webhoster anymore or I'd find out myself. All I know is that it's been a long time since I've seen anyone using an older browser, and that's amongst my non-technical friends and family members, not necessarily geeks that are always running the latest and greatest.

Brandon Ross 404-522-5400
EVP Engineering, NetRail http://www.netrail.net
AIM: BrandonNR ICQ: 2269442
Read RFC 2644!

From tpavlic at netwalk.com Thu Sep 7 16:22:19 2000
From: tpavlic at netwalk.com (Ted Pavlic)
Date: Thu, 7 Sep 2000 16:22:19 -0400
Subject: Search Engines/IP restrictions/policy changes
References:
Message-ID: <013d01c01909$530795e0$8900810a@TEDDY>

> > My argument now is that changing to name-based hosting and suggesting
> > webhosting providers use technology which has expired in IETF DRAFT
> > form is not appropriate, because the technology just isn't ready.
> I'm not sure exactly what you are referring to, but RFC 2068 in section
> 5.1.2 certainly seems to describe the method of sending the hostname with
> the GET request. RFC 2068 is on a standards track. Am I missing
> something?

I'm speaking specifically about SSL and TLS.
HTTP/1.1 supports name-based hosting, of course. However, it is not currently very possible to support name-based secure hosting.

ARIN gave these references with respect to secure webhosting:

http://www.ics.uci.edu/pub/ietf/http/draft-ietf-tls-https-03.txt
http://www.ics.uci.edu/pub/ietf/http/draft-ietf-tls-http-upgrade-05.txt
http://info.internet.isi.edu/in-notes/rfc/files/rfc2246.txt

Two of those are EXPIRED IETF **DRAFTS** which should never be used for reference. Those particular drafts suggest a way for name-based hosting to exchange the name-based information BEFORE the TLS handshake and then switch to TLS once the host has been established. The nature of SSL causes it to exchange certificates BEFORE any host information is sent. Because of this, in order to provide SSL webhosts, a hosting provider has to use IP-based web hosting.

From what I know, there is currently no standard for exchanging host information before a TLS or SSL handshake. These standards need to be in place and then implemented in web browsers as well as servers in order for name-based hosting to expand onto secure sites.

> > I just have to argue that the "web" is made up of much more than just
> > HTTP. My "web" consists of HTTP, HTTPS, FTP, and all of the
> > enhancements to each one of those (like FrontPage and the various
> > schemes which help make web servers run more smoothly).
> So perhaps we have a semantic problem here. To me, the term "webhosting"
> in ARIN's policy means plain HTTP -- not HTTPS, FTP, or anything else --
> but I do see your point. I would suggest re-wording the policy to say HTTP
> hosting.

I agree -- because that does seem to be what ARIN means.

> > Already policy routing exists, but I think that even policy routing
> > needs to be able to look closer at a packet to decide exactly what kind
> > of packet it is. I don't think that processing the current information
> > being passed on the Internet could be done fast enough to provide
> > efficient name-based load balancing/etc.
> Well, part of the problem is that the information needed isn't, and really
> can't be, neatly contained in a single packet; you have to capture the
> first x packets in a session to capture things like the URL and whatnot,
> which means doing some TCP spoofing. However, the technology is there and
> is getting better by the minute.

That's why I think that there needs to be some abstraction layer between TCP and the actual services that name-based hosting affects -- an easy way for web servers, FTP servers, load balancers, etc. to see exactly where the traffic is going. Unfortunately, without sending bytes and bytes and bytes of variable-length information, it's almost as if we need another set of IPs on top of IP. Send one IP with a four-byte OIP (Other-IP) which looks up on some server somewhere and ends up resolving the 20-byte name. :)

Recently someone suggested to me that perhaps the SYN which starts a connection, and which usually carries no data, could carry data: the name of the host to which the end-user is connecting. Web browsers would somehow have to be able to tell their TCP stack to send this information with the SYN... The receiving server could then parse this information. The technology is there -- it just needs to be put together.

> > Once a user's DHCP lease is up, they release their IP and ask for a new
> > one. Usually at that time they get the same IP back. That's just one of
> > the fun things about DHCP.
> I'm well aware of how DHCP works.
> The point that I was trying to make is
> that, at least the last time I looked, there was a 1-to-1 ratio of IPs to
> cable customers, thereby guaranteeing that they would always get the same
> address.

I didn't mean to sound like I was talking down to you; I apologize if it seemed like I was.

Considering that a great deal of cable subscribers stay on-line 24/7 or close to it, I think it would be difficult for providers to keep a less than 1-to-1 ratio of IPs to customers. That's why I thought that handing out 10./8 or 172.16./16 (or both!) using the existing DHCP would be a decent idea. The only thing the cable providers would have to worry about is the latency of doing the NAT, and that could be taken care of in a number of ways. People who wanted static IPs could easily be set up with one-to-one NAT. Just like when dial-up ISPs were told to use dynamic addresses, this sort of conversion involves EXISTING technology... and I don't really think it would require that much effort to convert.

Note that the local DSL providers in my area actually provide STATIC IPs to their customers, from what I remember. I have to wonder if that's within the current regulations.

> I do think the savings available by using host-based webhosting are
> significant, but I'm pointing out that some flavor of dynamically
> addressing cable modems would provide even more savings.

Still -- remember that in ARIN's policy changes not only are they forcing webhosting providers to move to name-based hosting, but they are also expanding the largest block an ISP (like @Home) can allocate at one time from /14 to /13. To me it looks like they're taking the IPs (even though they probably can't free up THAT many from webhosting, IMO) from webhosting providers and giving them to cable ISPs.

> > I should hope issues like these are brought up in October.
> I'm sure they will be. Better yet, go to the meeting and make sure they
> are.

I'm just a lowly young college student -- not sure my schedule will allow it. I just hope that someone out there who feels similarly will be going and will bring these issues up.

All the best --
Ted

From bross at netrail.net Thu Sep 7 17:34:57 2000
From: bross at netrail.net (Brandon Ross)
Date: Thu, 7 Sep 2000 17:34:57 -0400 (EDT)
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To: <013d01c01909$530795e0$8900810a@TEDDY>
Message-ID:

On Thu, 7 Sep 2000, Ted Pavlic wrote:

> > > My argument now is that changing to name-based hosting and suggesting
> > > webhosting providers use technology which has expired in IETF DRAFT
> > > form is not appropriate, because the technology just isn't ready.
> > I'm not sure exactly what you are referring to, but RFC 2068 in section
> > 5.1.2 certainly seems to describe the method of sending the hostname with
> > the GET request. RFC 2068 is on a standards track. Am I missing
> > something?
>
> I'm speaking specifically about SSL and TLS. HTTP/1.1 supports name-based
> hosting, of course. However, it is not currently very possible to support
> name-based secure hosting.

Ahh, gotcha. Like I said before, when I read webhosting, I read HTTP.

> ARIN gave these references with respect to secure webhosting:
>
> http://www.ics.uci.edu/pub/ietf/http/draft-ietf-tls-https-03.txt
> http://www.ics.uci.edu/pub/ietf/http/draft-ietf-tls-http-upgrade-05.txt
> http://info.internet.isi.edu/in-notes/rfc/files/rfc2246.txt
>
> Two of those are EXPIRED IETF **DRAFTS** which should never be used for
> reference.
> Those particular drafts suggest a way for name-based hosting to
> exchange the name-based information BEFORE the TLS handshake and then switch
> to TLS once the host has been established.

I completely agree that it is quite inappropriate to reference ID's at all, regardless of whether or not they are current. ARIN folks, you should seriously consider removing those links!

> That's why I think that there needs to be some abstraction layer between TCP
> and the actual services that name-based hosting affects -- an easy way for web
> servers, FTP servers, load balancers, etc. to see exactly where the traffic
> is going. Unfortunately, without sending bytes and bytes and bytes of
> variable-length information, it's almost as if we need another set of IPs on
> top of IP. Send one IP with a four-byte OIP (Other-IP) which looks up
> on some server somewhere and ends up resolving the 20-byte name. :)

is right.

> > > Once a user's DHCP lease is up, they release their IP and ask for a new
> > > one. Usually at that time they get the same IP back. That's just one of
> > > the fun things about DHCP.
> > I'm well aware of how DHCP works. The point that I was trying to make is
> > that, at least the last time I looked, there was a 1-to-1 ratio of IPs to
> > cable customers, thereby guaranteeing that they would always get the same
> > address.
>
> I didn't mean to sound like I was talking down to you; I apologize if it
> seemed like I was.

Didn't take it that way; sorry if my reply seemed like I thought you were talking down to me. ;-)

> Considering that a great deal of cable subscribers stay on-line 24/7 or
> close to it, I think it would be difficult for providers to keep a less
> than 1-to-1 ratio of IPs to customers.

I think that's a misconception. Whenever I've visited non-technical friends at home, I've always found that if they aren't using their PC, they turn it off. I would be willing to bet that at least half of them are turned off at any particular time.

> That's why I thought that handing out 10./8 or 172.16./16 (or both!)
> using the existing DHCP would be a decent idea. The only thing the
> cable providers would have to worry about is the latency of doing the NAT,
> and that could be taken care of in a number of ways. People who wanted
> static IPs could easily be set up with one-to-one NAT. Just like when
> dial-up ISPs were told to use dynamic addresses, this sort of conversion
> involves EXISTING technology... and I don't really think it would require
> that much effort to convert.

I completely agree, but I'm agnostic about the method they use to go dynamic.

> Note that the local DSL providers in my area actually provide STATIC IPs to
> their customers, from what I remember. I have to wonder if that's within the
> current regulations.

I would hope not, but I suspect there isn't a policy on it; there certainly isn't a documented one I can find.

> > I do think the savings available by using host-based webhosting are
> > significant, but I'm pointing out that some flavor of dynamically
> > addressing cable modems would provide even more savings.
>
> Still -- remember that in ARIN's policy changes not only are they forcing
> webhosting providers to move to name-based hosting, but they are also expanding
> the largest block an ISP (like @Home) can allocate at one time from /14 to
> /13. To me it looks like they're taking the IPs (even though they probably
> can't free up THAT many from webhosting, IMO) from webhosting providers and
> giving them to cable ISPs.

I really don't see it that way.
It's not like we're out of addresses yet. I do see it as focusing efforts in the wrong priority order, however.

Brandon Ross 404-522-5400
EVP Engineering, NetRail http://www.netrail.net
AIM: BrandonNR ICQ: 2269442
Read RFC 2644!

From Clay at exodus.net Thu Sep 7 18:40:23 2000
From: Clay at exodus.net (Clayton Lambert)
Date: Thu, 7 Sep 2000 15:40:23 -0700
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To:
Message-ID: <200009072240.PAA25502@exoserv.exodus.net>

There are several web-systems vendors that are working on methods to handle SSL and other traditionally non-HTTP/1.1-compatible protocols via load balancing devices. I am not implying that we should force webhosting providers (or other services providers) into vendor-specific solutions, but I think it is important to support the idea of host-header compatibility. Customers of webhosting providers typically "own" the URL, not the IP address. The URL is something that can be utilized in embedded programming, and it allows for scalability and modification of infrastructure that is not otherwise available if the IP address is embedded into software (as an example).

And for the webhosting provider that indicated (in a previous email) that I was biased against the webhosting community, please realize that I am very neutral with respect to these policy discussions. In fact, the concept of URL-based services hosting is somewhat detrimental to my company's core business: colocation. Think about it: if you base everything on names, it is no problem (effectively) for you to leave my company (as a colocation provider) and go to one of our competitors. If you are embedding IP addresses in your configurations, infrastructure, or even software (or just implementing large, complex hosting solutions around them), you are going to be MUCH more inclined to remain at my facility. So, be assured that my perspective on this topic is very much focused on the idea of conservation of usage.

Clayton Lambert
Exodus Communications
From Clay at exodus.net Thu Sep 7 19:15:55 2000
From: Clay at exodus.net (Clayton Lambert)
Date: Thu, 7 Sep 2000 16:15:55 -0700
Subject: poorly thought out HTTP/1.1 mandate
In-Reply-To: <022901c017a5$9c930f60$0301830a@tednet>
Message-ID: <200009072315.QAA30965@exoserv.exodus.net>

Large ISPs are as much a part of the Internet as webhosting entities; your argument doesn't make sense. The policy clearly indicates that HTTP/1.1 host headers be utilized WHERE THEY CAN BE. This mandate is critical, and its necessity can be seen in the example of webhosting: one webhosting device (one physical box connected to the 'Net) may inefficiently burn hundreds of IP addresses. What for? Most of the time, these addresses are 'given' to the webhosting customer to establish some solidity and a barrier (configuration complexity) to exit to a competitor. The policy also allows for cases where non-HTTP/1.1-compatible protocol utilization is required. I think this policy change is a huge step in the right direction. I also agree with your argument that large ISPs should utilize some form of efficient configuration in order to conserve dwindling IP numbers.

Clayton Lambert
Exodus Communications

-----Original Message-----
From: policy-request at arin.net [mailto:policy-request at arin.net] On Behalf Of Ted Pavlic
Sent: Tuesday, September 05, 2000 6:56 PM
To: policy at arin.net
Subject: poorly thought out HTTP/1.1 mandate

This is my first post on this list; I have only very recently subscribed. Because of this, I must apologize in advance if any of this has already been brought up.

Personally, I disagree with the recent policy changes...

http://www.arin.net/announcements/policy_changes.html

...made by ARIN. I feel that there has not been enough thought given to changes of this magnitude, and I think that the amount of argument in response to these changes (at least on the other groups to which I subscribe) backs me up on that.

These things are causing me the most grief now that I have been forced to use name-based virtual hosts:

* SSL
* TLS (the server- and client-side support, or lack thereof)
* FTP virtual hosts
* Microsoft FrontPage Server Extensions
* Old browsers which do not support HTTP/1.1

When I read policy_changes.html, I get the odd feeling that large broadband ISPs are allocating more and more IPs for residential use and causing web hosting providers to give up many of their IPs. Why are web hosting providers being asked to give up their IPs when they are the ones who make up the Internet to which those residential users connect? By increasing the amount of real IPs given out to people who USE the Internet, ARIN is making it more difficult for those who make up the Internet to function!

Rather than regulating us web providers, why can't ARIN regulate those ISPs who are allocating huge amounts of IPs? What's wrong with forcing large cable and DSL providers to use the 10./8 class A and use NAT? While this regulation seems radical, I would argue that it is MUCH less radical than the new regulations being made by ARIN.

Personally I do not feel that large web hosting providers like the company which I represent are being well represented in ARIN. I worry that ARIN is being influenced too much by those who waste IPs rather than by organizations who actually need them.
I apologize if all of these points have already been brought up and answered, but I just think that ARIN's recent choices have been ridiculous, and since I've been reading in other groups that many other people agree with me, I really felt that I needed to voice this.

All the best --
Ted Pavlic
NetWalk Communications
tpavlic at netwalk.com

From justin at gid.net Fri Sep 8 13:29:37 2000
From: justin at gid.net (Justin W. Newton)
Date: Fri, 8 Sep 2000 10:29:37 -0700
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To:
References:
Message-ID:

At 3:04 PM -0400 9/7/00, Brandon Ross wrote:
>On Thu, 7 Sep 2000, Charles Scott wrote:
>
>> I don't know what the percentages of old browsers in use across the
>> network are (perhaps someone can point us to some estimate of 1.0
>> browsers), but I do know that we see some pretty old ones come into our
>> sites.
>
>That is really the core of the discussion. It would be quite helpful if
>some large webhoster out there could do a study to determine the number of
>non-1.1 browsers still in use. I don't work for a large webhoster
>anymore or I'd find out myself. All I know is that it's been a long time
>since I've seen anyone using an older browser, and that's amongst my
>non-technical friends and family members, not necessarily geeks that are
>always running the latest and greatest.

On our regular web site we are seeing that MUCH less than 1% of all browsers hitting the site are not HTTP/1.1 compliant. This was true as of about 3 months ago, when we last looked at the data. We're not a large web hoster, but we do have a lot of traffic come through our web site.

--
Justin W. Newton
Senior Director, Networking and Telecommunications
NetZero, Inc.

From Clay at exodus.net Fri Sep 8 17:15:05 2000
From: Clay at exodus.net (Clayton Lambert)
Date: Fri, 8 Sep 2000 14:15:05 -0700
Subject: Search Engines/IP restrictions/policy changes
In-Reply-To:
Message-ID: <200009082115.OAA00953@exoserv.exodus.net>

As the oversight group for my company, I have had the opportunity to review thousands of log files for browser hits. As such, I am comfortable saying that an extremely small percentage of legitimate non-HTTP/1.1 browser hits occur... less than 2% is a pretty reasonable number. This rare occurrence of older browser hits is one of the reasons that I think non-HTTP/1.1 browser clients are not an acceptable reason for burning large portions of IP address space.

Clayton Lambert
Exodus Communications
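For anyone wanting to reproduce the kind of measurement Justin and Clayton describe, here is a rough sketch in Python that tallies request protocol versions from a Common Log Format access log (the file name is hypothetical). Note that it counts request versions rather than Host-header support; many HTTP/1.0 clients did send Host headers, so a version tally, if anything, overstates the problem population:

    import re
    from collections import Counter

    # Common Log Format embeds the request line, e.g. "GET /index.html HTTP/1.0"
    REQUEST_RE = re.compile(r'"[A-Z]+ \S+ HTTP/(\d\.\d)"')

    counts = Counter()
    with open("access_log") as log:            # hypothetical log file path
        for line in log:
            match = REQUEST_RE.search(line)
            if match:
                counts[match.group(1)] += 1    # tally hits by protocol version

    total = sum(counts.values())
    for version, hits in sorted(counts.items()):
        print("HTTP/%s: %d hits (%.2f%%)" % (version, hits, 100.0 * hits / total))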
From president at waasi.com Sun Sep 10 13:53:22 2000
From: president at waasi.com (Larry Johnson)
Date: Sun, 10 Sep 2000 10:53:22 -0700
Subject: We disagree with recent restrictions on IP allocation aimed at attacking the "little hosts"
Message-ID: <200009101056.SM00326@proxy>

I feel that several things have been overlooked in this host header policy. I would like to make several points about this.

It is no problem at all to set up web sites via host headers to allow many sites to use one IP address. In fact, we use shared IP hosting whenever possible. However, this also creates other issues. All domains that we host also have mail services. All mail systems that we have looked at handle shared IP mail delivery in a very sloppy manner. This also makes the mail system and other features much harder for the end user (customer) to use. For example, most mail servers will require the user to log in as user at domain rather than just 'user'. Many mail clients don't like this email address as the user ID and will return an error. Other features such as mailing lists, auto-responders, etc. don't work nearly as well on shared IP. This results in a frustrated end user. This greatly increases the technical support needed to service the same number of customers as compared with fixed IP. In fact, shared IP hosting increases our costs. Even so, we do use it, as we feel we must do our part to conserve address space.

The fact is that shared IP "host header" hosting works well only for HTTP traffic. FTP, Telnet, and mail services are another story. Also, when Apache and Microsoft make statements about doing this, they have no knowledge of the mail, telnet, or FTP services used. They are speaking only of the webserver, which is only a portion of the services we have to provide.

We had a situation where there were about 40 sites on a single IP address. One of the sites violated the listing rules at one of the major search engines. The search engine blocked by IP address. All of the sites were kicked out of the search engine and could not list again because of the IP address. It is very hard to explain to a customer how and why this happened. The point is that just because shared IP hosting can be done doesn't mean that it is a great idea.

A large portion of customers seeking web hosting are transferring their site from another host. The very first thing they need is an IP address to publish their site to. This requires a fixed IP address.

Other areas where shared IP addressing won't work include high-traffic sites. Too much traffic on a single IP address will cause a severe bottleneck at the router; the high traffic of a single site or too many sites on the same IP can create the same condition. This can be a technical nightmare that slowly builds over time. Sites that use SSL certificates also require a fixed IP address to maintain the integrity of the key exchange.

I would like to share an experience that I had which demonstrates gross waste of IP address space. Recently I had cable modem service installed at my home. The service was provided by AT&T cable services. These cable services use a series of nodes around a given city.
Each node serves 50 to 150 customers on average. Each computer connected to each node is set up with its own IP address. These companies should be introduced to modern technology. Only the live node would require a fixed IP. The computers behind it would not, as their IP addresses would never be broadcast past the proxy. They could use non-routable IP addresses for every computer behind the firewall, as those are never seen by the outside world.

I have four computers in my home networked together. I told the installer that I had plans to install proxy software on the main system to allow Internet access by the other systems. I was told, "You can't do that! You will have to pay for additional IP addresses." There's no practical reason for this. They would be able to easily identify me and my systems, as any traffic would appear to come from my main system. I have to say that this left a bad taste for me. As we sit here and squeeze the blood out of every IP we have hosting sites, it looks like these cable operators have IP space to burn. They certainly make no attempt to conserve.

In closing I would like to say that I was very disappointed in the attitude of these cable operators. Even when an offer is made and technology exists that would conserve IP address space, they insist on needlessly wasting it. This is a huge drain on address space. Unlike dial-up connections, the IP addresses used by cable operators are never freed so that another user can connect to the Internet using that same IP address.

Just a couple of days ago I was looking at a website that we host. There were 128 unique IP visitors on the site. The thought that went through my mind was that we were servicing 128 people with one IP address, and we are the ones being asked to conserve. On their end they each required an IP address just to connect. It doesn't have to be that way. When it comes to sacrifice, all users must share the burden.

Again, we use shared IP hosting where it is practical to do so. However, there are some Internet industries that could conserve a huge number of IP addresses without impacting the quality of their service. These industries are making no attempt to conserve. As the saying goes, "Making sacrifices is fine as long as somebody else is doing it." It appears that these cable operators and the "always-on DSL" folks live by this saying.

I would hope that ARIN will give this whole matter further consideration.

Cordially,
Larry Johnson
President
The Western Association of Advanced Systems Integrators, Inc.

From dm at exanium.com Sun Sep 10 16:17:00 2000
From: dm at exanium.com (D.MAVROPOULOS)
Date: Sun, 10 Sep 2000 15:17:00 -0500
Subject: What will be the result!
Message-ID: <000001c01b64$637167f0$a486b218@homeserver>

Eventually everyone that needs an IP allocation will get it, with better documentation and stronger proof of need (I assume it will be that way).

1) ARIN would never deny an IP allocation if you have a plan that shows HTTP hosting accounts in which:
a) 30% of your hosting accounts use SSL
b) 3%-5% require IP restrictions to communicate with their server or website
c) 10% of existing accounts have the need for SSL.

2) SMTP hosting accounts require a unique IP. You can't use the same IP address for different companies.

So why is everybody worried? IS ANYBODY OUT THERE LOSING BUSINESS, OR GOING TO LOSE BUSINESS, BECAUSE ARIN DOESN'T WANT TO GIVE OUT IPs? Let's face it.
We have to live with the new policy, and ARIN will have to live with the overhead of every individual hoster who needs more IP addresses and will not hesitate to create any possible scenario in order to fulfill his or her needs. In the end there is always a solution.

Dimitrios Mavropoulos
VP of IT Department
Exanium Co
http://exanium.com

From dlott at msncomm.com Tue Sep 12 13:33:11 2000
From: dlott at msncomm.com (David Lott)
Date: Tue, 12 Sep 2000 11:33:11 -0600
Subject: Confusion over multi-homing
Message-ID: <39BE68D7.482EB893@msncomm.com>

I've read the current policy on ARIN's allocation of space and I must admit that I'm still confused. First, allow me to state the assumptions that I'm under. I understand the policy to state that if a business needs to multi-home and requires less space than a /20, then they should request this space from their ISP. I also understand that there are filters at the /20 boundaries in order to minimize the size of the routing table.

Question: Doesn't this break multi-homing for end users that need less than a /20?

For example, assume that the end user is connected to two regional ISPs (ISP-A and ISP-B), neither of which has an agreement with the other. However, they do share a common backbone with a national provider we will call ISP-Z. If ISP-Z has filters at /20 for both of the ISPs that it is connected to, then ISP-A address space will be the only space listened to on the ISP-A to ISP-Z link. The same would be true for the ISP-B address space only being listened to on the ISP-B to ISP-Z link. This creates a situation where address space from ISP-B would not be advertised through ISP-A, and that, in effect, breaks multi-homing.

Consider a remote site attempting to reach the web server at the end user. DNS resolves the address to ISP-B address space. Also assume that the link between the end user and ISP-B is down. As the packet enters the national carrier ISP-Z's network, at some point a router will have to decide where to send the packet on. If ISP-B is still advertising the remaining portion of their network (say at the /20 boundary), then ISP-Z will forward the packet to ISP-B. This is normal and proper for single-homed address space. However, if the end user had their own micro-allocation, their address space would be advertised to both ISP-A and ISP-B and in turn to the national carrier. As such, when the ISP-B link went down, the destination network route would be dropped from the advertisement coming out of ISP-B, the only remaining route would be via ISP-A, and the packet would still get there -- if the end user had a micro-allocation as per previous policy.

Also, let us further look at a situation where ISP-B is down entirely. When the national carrier detects that ISP-B is down, it will remove that particular route from its table. In the old way of doing things, with micro-allocations to multi-homed end users, ISP-A would advertise the address space from the end user. It is my understanding that under the current policy, ISP-A would have to advertise the address space allocated to the end user from ISP-B. If that block is smaller than a /20 and the national carrier is filtering on /20, wouldn't that cause the update to be dropped and thus not added to the routing table of the national carrier?
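To put the filtering arithmetic in concrete terms, here is a small sketch using Python's ipaddress module and made-up prefixes. With a filter that accepts nothing longer than a /20, the end user's more-specific announcement never enters ISP-Z's table, and reachability rides entirely on the covering aggregate:

    import ipaddress

    # Hypothetical announcements arriving at ISP-Z, which drops anything
    # more specific than a /20.
    MAX_PREFIXLEN = 20

    announcements = [
        ("end user, heard via ISP-A", ipaddress.ip_network("198.51.100.0/22")),
        ("ISP-B covering aggregate",  ipaddress.ip_network("198.51.96.0/20")),
    ]

    for name, net in announcements:
        accepted = net.prefixlen <= MAX_PREFIXLEN
        print("%-26s %-18s %s" % (name, net,
              "accepted" if accepted else "dropped by the /20 filter"))

    # The /22 sits inside the /20, so once the /22 is filtered, packets
    # for the end user follow the /20 toward ISP-B even when the user's
    # ISP-B link is down -- the failure mode described above.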
I guess my confusion could be cleared up if someone could describe how, under the /20 policy, an end user requiring multi-homing and less than a /20 allocation would be able to survive one of their two ISPs going down (remember the AT&T and MCI outages?).

Thanks,
--
David Lott
VP of Operations
MSN Communications
(303) 347-8303

From linda at sat-tel.com Thu Sep 21 16:31:54 2000
From: linda at sat-tel.com (Linda Werner)
Date: Thu, 21 Sep 2000 16:31:54 -0400
Subject: Is there a policy
Message-ID: <39CA7039.E717681@sat-tel.com>

1- Is there currently a policy that outlines a process to follow for denial of service attacks coming from address space designated by:
a) ARIN
b) another RIR
c) pre-RIR allocations
d) an ISP / Tier 1 service provider

2- Is there currently a policy that outlines what to do with address space that has pre-existing high traffic volume (UDP packets sent 1-5 times per second)? For example:
a) DNS IPs where the customer has relocated their DNS
b) IPs belonging to an ISP that has gone out of business

Linda