ARIN Justified...

Clayton Lambert Clay at exodus.net
Wed Jan 10 20:38:11 EST 2001


You just acknowledged my point.  I am aware of what happens when you add RAM
and CPU.  But that doesn't affect the "all things being equal" scenario.

Once again, this is slightly off topic.  The policy requirements are not the
same as the particular methodologies of server configuration... If the policy
gets updated, then the technical items you are discussing become applicable.

-Clay

-----Original Message-----
From: owner-vwp at arin.net [mailto:owner-vwp at arin.net]On Behalf Of Simon
Sent: Wednesday, January 10, 2001 5:17 PM
To: Virtual IP List
Subject: RE: ARIN Justified...


Are you a system admin? How well do you understand how things work on a Unix
system? Anyway, the more RAM you have, the bigger your process table can be.
The more CPUs you have, the more processes you can run at the same time. The
advantages of running separate daemons per customer are 1) security and
2) freedom. I don't know where you got the extra overhead from. Current
Apache is not threaded and uses preforking. Either way, you will have
overhead unless you have enough preforked processes.
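As a rough illustration of why RAM and CPU count matter here, a back-of-envelope
sketch in Python (the customer count, children-per-daemon, and per-child memory
figures are made-up assumptions, not measurements from any real server):

# Hypothetical footprint when every shared-hosting customer gets their own
# preforked Apache instance. All numbers below are illustrative assumptions.
CUSTOMERS = 200             # assumed number of customers on one box
CHILDREN_PER_DAEMON = 5     # assumed preforked children per customer daemon
RSS_PER_CHILD_MB = 4        # assumed resident memory of one httpd child, in MB

processes = CUSTOMERS * CHILDREN_PER_DAEMON
memory_mb = processes * RSS_PER_CHILD_MB
print(f"{processes} processes in the process table, roughly {memory_mb} MB RAM")
# With these numbers: 1000 processes and ~4000 MB, which is why more RAM
# (a bigger process table) and more CPUs make per-customer daemons practical.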

-Simon

On Wed, 10 Jan 2001 17:06:30 -0800, Clayton Lambert wrote:

>There isn't a huge advantage to running multiple daemons on the same
>box...there is only X amount of proc available regardless of the number of
>daemons you run...Additionally, there is a per-daemon overhead hit (in proc)
>that you don't have to deal with when you run a single daemon per server.
>
>-Clay
>
>-----Original Message-----
>From: owner-vwp at arin.net [mailto:owner-vwp at arin.net]On Behalf Of Simon
>Sent: Wednesday, January 10, 2001 4:59 PM
>To: Virtual IP List
>Subject: Re: ARIN Justified...
>
>
>FYI, you can't run two separate apache daemons on the same port without two
>unique IPs.
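A minimal sketch of that constraint at the socket level (Python; the loopback
addresses stand in for two real IPs, and 127.0.0.2 is bindable out of the box
on Linux):

import socket

# First "daemon" takes IP #1 on port 8080.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 8080))
a.listen(5)

# A second daemon on the same IP and port is refused ("Address already in use").
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", 8080))
except OSError as err:
    print("same IP, same port:", err)

# Give the second daemon its own IP and the same port works fine.
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.bind(("127.0.0.2", 8080))
c.listen(5)
print("second IP, same port: OK")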
>
>-Simon
>
>On Wed, 10 Jan 2001 18:02:05 -0500, Bill Van Emburg wrote:
>
>>Simon wrote:
>>>
>>> We have servers with 5-10 million or more hits, and we parse the logs
>>> nightly. It takes about 2 hours to parse the logs per machine, mostly due
>>> to resolving IPs. To get just the bandwidth, a 10-million-hit log file can
>>> be parsed in a matter of minutes. So, you just need better tools ;-) As
>>> for other traffic such as FTP, there is a log file which can be parsed,
>>> too. We actually do this for anonymous FTP. I don't know who charges for
>>> POP/SMTP traffic, but the same method can be applied there to calculate
>>> the bandwidth, too. It's a matter of having the right tools for the job.
>>> They are out there, or you can have a programmer write a custom set for
>>> your needs. Keep in mind, I'm referring to virtual web hosting, not
>>> dedicated.
>>>
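For the web-log part, a minimal sketch of the "just the bandwidth" pass
(Python; it assumes Apache common/combined log format with the usual
three-token request line, so the response size is the 10th field, and the
log file name is hypothetical):

# Sum bytes served from an access log in common/combined format, without
# resolving any IPs -- which is what makes this pass fast.
total_bytes = 0
with open("access_log") as log:       # hypothetical file name
    for line in log:
        fields = line.split()
        if len(fields) < 10:
            continue                  # skip malformed lines
        size = fields[9]              # response size; "-" means no body sent
        if size.isdigit():
            total_bytes += int(size)

print(f"{total_bytes} bytes ({total_bytes / 2**20:.1f} MB) served")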
>>
>>Attempting to parse all those different log files and consolidate the
>>info is certainly not elegant, nor a particularly great use of CPU, and
>>again, it does not tell you the actual bandwidth usage, merely the
>>application-level data.  It gets worse when you consider that each of
>>our shared hosting customers has their own, separate web server, ftp
>>server, etc. running.  Even in shared hosting, each of our customers has
>>their own distinct server processes.  This very quickly becomes a
>>logistical nightmare, as well as a larger problem to parse.  Finally,
>>we're talking about more than double the hits you are describing.  It is
>>distinctly possible that the tool problems we're having are still
>>related to sheer volume.
>>
>>Something I didn't mention before: we also have to measure streaming
>>media bandwidth consumption.  Correct me if I'm wrong, but I'm not aware
>>of a way to do that from log files, for any existing streaming server.
>>--
>>
>>				     -- Bill Van Emburg
>>				     	Quadrix Solutions, Inc.
>>Phone: 732-235-2335, x206		(mailto:bve at quadrix.com)
>>Fax:   732-235-2336			(http://quadrix.com)
>>		The eBusiness Solutions Company
>>