How are web servers best loaded over HTTP?

If a web server is going to serve out, say, 100 GB per day, would it be better for it to do so in 10,000 10 MB sessions or 200,000 500 kB sessions?
The reason for this question is that I'm wondering whether there would be any advantage, disadvantage, or neither for sites that mirror content to allow clients to exploit HTTP's start-in-the-middle feature (range requests) to download files in segments from many servers. (IIRC this is a little like how BitTorrent works.)
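For what it's worth, the "start-in-the-middle" feature is the HTTP Range header. Here is a minimal sketch in Python of fetching one segment, assuming the server supports range requests; the URL and byte offsets are placeholders:

```python
# Minimal sketch: fetch one segment of a file with an HTTP Range request.
# The URL and byte offsets are placeholders; a real mirror scheme would
# assign a different range (and possibly a different host) to each segment.
import requests

url = "http://mirror.example.com/big-file.iso"  # hypothetical mirror
headers = {"Range": "bytes=0-524287"}           # first 512 kB segment

resp = requests.get(url, headers=headers, timeout=30)
if resp.status_code == 206:  # 206 Partial Content: server honored the range
    segment = resp.content
    print(f"got {len(segment)} bytes")
else:
    # A 200 means the server ignored the Range header and sent the whole file.
    print(f"server did not honor range request (status {resp.status_code})")
```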

I suppose that not only the size of a session matters, but its duration is also very important. I would guess that 200,000 small sessions will be shorter-lived than 10,000 big ones. As soon as a session completes, its resources are freed to be used again, so I would optimize so that each session is as short as possible.
Bear in mind also that a single server can't handle more than a couple of hundred sessions at the same time (100-200 is a safe number).

There's usually some overhead associated with each session - the cost of creating a TCP connection for example, or HTTP headers (if you haven't already included them in the 100GB) - so ostensibly it'd be better to use larger sessions. But also think about it from the clients' point of view: by using multiple smaller sessions they can run downloads in parallel and possibly get their content faster. So the actual "best" setup will probably be some intermediate session size, which of course depends on what kind of content you're serving (large files or small ones, streaming or static) and how important you think speed is. There's no one-size-fits-all answer.
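As a side note on the overhead point: HTTP keep-alive lets one TCP connection carry many requests, so the per-session cost isn't fixed per transfer. A sketch with Python's requests library; the URLs are hypothetical:

```python
# Sketch: amortizing TCP setup cost with HTTP keep-alive.
# requests.Session keeps the underlying TCP connection open between
# requests to the same host, so only the first request pays the
# handshake (and TLS, if any) cost. URLs are placeholders.
import requests

with requests.Session() as s:
    for i in range(20):
        # 20 x 500 kB over one connection: one handshake instead of twenty.
        r = s.get(f"http://files.example.com/chunk-{i}.bin", timeout=30)
        r.raise_for_status()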

100 GB/day is not a number you need to worry about. It averages out to slightly over 1 MByte/s; you could get that over old Ethernet, to put it in perspective. Your actual peak throughput will be a lot higher, perhaps 10 MByte/s, but even that is no problem at all for a modern server. So I don't think you need multiple servers, which removes that reason for splitting the 10 MB downloads. And if you're downloading from a single server, why would you insert 19 disconnect-and-reconnect sequences in the middle of a 10 MB download?
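A quick back-of-envelope check of those numbers (the 10 MByte/s peak is the answer's own estimate):

```python
# Back-of-envelope check of the figures above.
total_bytes = 100 * 10**9        # 100 GB per day
seconds_per_day = 24 * 60 * 60   # 86,400 s

avg_rate = total_bytes / seconds_per_day
print(f"average: {avg_rate / 10**6:.2f} MB/s")     # ~1.16 MB/s
print(f"as bits: {avg_rate * 8 / 10**6:.1f} Mbit/s")  # ~9.3 Mbit/s

# The 10 MB/s peak figure corresponds to assuming peak traffic is
# roughly 8-9x the daily average.
```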

Related

How many parallel connections can safely be made to a server

I am trying to make a simple general-purpose multi-threaded async downloader in Python. How many parallel connections can generally be made to a server with minimum risk of being banned or rate-limited?
I am aware that the network will be a limiting factor in some cases, but let's assume for the sake of discussion that it isn't. I/O is also done asynchronously.
According to Browserscope, browsers make a maximum of 17 connections at a time.
However, according to my research, most download managers download files in multiple parts, making 8+ connections per file.
1. How many files can be downloaded at a time?
2. How many chunks of a single file can be downloaded at one time?
3. What should be the minimum size of those chunks to make the overhead of creating parallel connections worthwhile?
It depends.
While some servers tolerate a high number of connections, others don't. General web servers might be more on the high side (low two digits); file hosters might be more sensitive.
There's little more to say unless you can inspect the server's configuration; otherwise just experiment, and remember the result for next time once your ban has timed out.
You should, however, watch your bandwidth. Once you max out your access line, there's no gain in increasing the number of connections further.
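To make that concrete, here is one hedged sketch of a multi-part downloader that caps its simultaneous connections. The URL, chunk size, and the cap of 8 are illustrative assumptions, not values any particular server is known to tolerate; it also assumes the server reports Content-Length and honors Range:

```python
# Sketch of a polite multi-part downloader: split the file into ranges,
# fetch them in parallel, but cap the number of simultaneous connections.
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://files.example.com/large-file.bin"  # hypothetical
MAX_CONNECTIONS = 8      # per-file cap, as many download managers use
CHUNK_SIZE = 1 << 20     # 1 MiB; much below this, setup overhead dominates

def fetch_range(start, end):
    # Each worker asks for one byte range; 206 means the server complied.
    r = requests.get(URL, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
    r.raise_for_status()
    return start, r.content

def download():
    # Assumes the server sends Content-Length on a HEAD request.
    size = int(requests.head(URL, timeout=30).headers["Content-Length"])
    ranges = [(s, min(s + CHUNK_SIZE, size) - 1)
              for s in range(0, size, CHUNK_SIZE)]
    parts = {}
    with ThreadPoolExecutor(max_workers=MAX_CONNECTIONS) as pool:
        for start, data in pool.map(lambda r: fetch_range(*r), ranges):
            parts[start] = data
    return b"".join(parts[s] for s in sorted(parts))

if __name__ == "__main__":
    blob = download()
    print(f"downloaded {len(blob)} bytes")
```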

Is it realistic for a dedicated server to send out many requests per second?

TL;DR
Is it appropriate for a (dedicated) web server to be sending many requests out to other servers every second (naturally with permission from said server)?
I'm asking this purely to save myself spending a long time implementing an idea that won't work, as I hope that people will have some more insight into this than me.
I'm developing a solution which will allow clients to monitor the status of their servers. I need to constantly (24/7) obtain the most recent logs from these servers. Unfortunately, I am limited to fetching the last 150 entries of a server's log per request. This means that for busy clients I will need to poll their servers more often.
I'm trying to make my solution scalable so that if it attracts a number of customers I won't need to rewrite it, so my benchmark is 1,000 clients, which I think is a realistic upper limit.
If I have 1,000 clients and I need to poll their servers every, let's say, two minutes, I'm going to be sending requests to more than 8 servers every second. The returned result will be about 15,000 characters on average, though it could be more or less.
Bear in mind that this server will also need to cope with clients visiting it to view their server information, and thus will need to be lag-free.
Some optimisations I've been considering, which I would probably need to implement relatively early on:
Only asking for 50 log items at first. If we find one already stored (they are returned in chronological order), we can stop; if not, we send another request for the remaining 100. This should cut traffic by around three fifths.
Detecting which servers get less traffic and requesting their logs less often (i.e. if a server only gets 10 logged events every hour, we don't want to keep asking for 150 every few minutes).
I'm basically asking if sending out this many requests per second is considered a bad thing and whether my future host might start asking questions or trying to throttle my server. I'm aiming to go shared for the first few customers, then if it gets popular enough, move to a dedicated server.
I know this has a slight degree of opinion involved, so I fear it might be a candidate for closure, but I do feel there is a definite degree of factuality required in the answer that should make it an okay question.
I'm not sure if there's a networking SE, or if this might be more appropriate on Super User, but it feels right on SO. Drop me a comment if it's not appropriate here and I'll delete it and post it to a suggested location instead.
You might want to read about the C10K Problem. The article compares several I/O strategies. A certain number of threads that each handle several connections using nonblocking I/O is the best approach imho.
Regarding your specific project I think it is a bad idea to poll for a limited number of log items. When there is a peak in log activity you will miss potentially critical data, especially when you apply your optimizations.
It would be far better if the clients you are monitoring pushed their new log items to your server. That way you won't miss anything important.
I am not familiar with the performance of ASP.NET, so I can't say whether a single dedicated server is enough, especially since I don't know your server's specs. With a reasonably strong server it should be possible. If it turns out not to be enough, you should distribute your project across multiple servers.
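For illustration, a rough sketch of what the polling loop could look like with non-blocking I/O, including the "ask for 50 first" optimization from the question. The /logs endpoint, its limit/offset parameters, and the storage helpers are all hypothetical:

```python
# Sketch: 1,000 servers on a 120 s cycle is ~8.3 requests/s, which a
# single process handles comfortably with non-blocking I/O.
import asyncio
import time
import aiohttp

POLL_INTERVAL = 120   # seconds per full polling cycle
MAX_IN_FLIGHT = 50    # cap on concurrent requests

def already_stored(server, entry):
    return False      # stub: check your datastore for this log entry

def store(server, entries):
    pass              # stub: persist any new entries

async def poll_one(session, sem, server):
    async with sem:
        # Ask for the newest 50 entries first (returned oldest-first).
        async with session.get(f"http://{server}/logs",
                               params={"limit": 50}) as r:
            entries = await r.json()
        # If even the oldest of those 50 is unseen, there may be a gap:
        # fall back to requesting the 100 older entries as well.
        if entries and not already_stored(server, entries[0]):
            async with session.get(f"http://{server}/logs",
                                   params={"limit": 100, "offset": 50}) as r:
                entries = (await r.json()) + entries
        store(server, entries)

async def poll_forever(servers):
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    async with aiohttp.ClientSession() as session:
        while True:
            started = time.monotonic()
            await asyncio.gather(*(poll_one(session, sem, s) for s in servers))
            elapsed = time.monotonic() - started
            await asyncio.sleep(max(0, POLL_INTERVAL - elapsed))
```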

How to maximize downloading throughput by multi-threading

I am using multi-threading to speed up the process of downloading a bunch of files from the Web.
How might I determine how many threads I should use to maximize or nearly maximize the total download throughput?
PS:
I am using my own laptop and the bandwidth is 1 Mb.
The data I want is the web page source code from coursera.com.
There are many more factors than just the number of threads if you want to speed up downloading files from the network. Actually, I don't believe you will achieve a speedup unless there are some limitations you haven't described (such as a max bandwidth per connection on the server side, a multilink client where different links can download different data, downloading different parts from different servers, or similar).
In usual conditions, having multiple threads to download something will slow the process: you will need to maintain a couple of connections and somehow synchronize the data (except if you are downloading, e.g., different files at the same time).
I would say that in "ordinary" conditions the much bigger limitation is your bandwidth, so using more threads will not make downloading faster; you will just share your whole bandwidth across many connections.
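Rather than guessing a thread count, it's cheap to measure: time the same batch of downloads at several pool sizes and keep the fastest. A sketch; the URLs are placeholders, and requests/ThreadPoolExecutor are one possible toolset:

```python
# Sketch: measure rather than guess. Past the point where the access
# line is saturated, adding threads won't help.
from concurrent.futures import ThreadPoolExecutor
import time
import requests

# Hypothetical page URLs standing in for the pages to be fetched.
urls = [f"https://www.example.com/page{i}" for i in range(20)]

def fetch(url):
    return len(requests.get(url, timeout=30).content)

for workers in (1, 2, 4, 8, 16):
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(fetch, urls))
    elapsed = time.monotonic() - start
    print(f"{workers:2d} threads: {total / elapsed / 1e3:.0f} kB/s")
```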

Client and cache configuration for Oracle Coherence

I have a specific scenario in which we want to use Coherence as a distributed cache, which I am going to describe here.
I have 20+ standalone processes which are going to put data in the cache continuously. Their frequencies differ, though that's not a concern.
And 2 processes which will be reading data from those caches.
I don't need any underlying DB except what Coherence provides. Data will be written to the cache and read from the cache.
I have a 4-node cluster at my disposal (cost constraint) and the Coherence cluster will be on different boxes (infra constraint); both the cache-populating processes and the reading processes will be on different machines.
The peak memory size of the cache will hover around 6 GB max daily, the minimum being 2 GB.
The cache will hold daily data only, and I will have separate archiving processes simultaneously archiving it; the point is that the cache will stay at this size for now. Let's say I am going to keep the date out of the key equation.
I would, though, like to explore whether I can store more in those 4 nodes. Right now it's simple serialization; I could explore other binary formats. Or should I definitely do so at this cache size?
My read and write operations are fairly spread out over the day, meaning reads and writes keep happening from those 2 reading clients and 20+ writing clients; it's not as if one of them dominates. There is, though, a startup batch step in each background process which pushes more to the cache than the continuous pushing afterwards, but the continuous pushing moves a fair amount of data too.
Now my questions regarding the points above (and some confusion as well):
The biggest one: somebody told me that I can have only a limited number of connections depending on the number of nodes we have bought, so if it's 4 nodes, you should ideally have at most 4 connections, and should therefore develop a gatekeeper kind of application, even if we use TCP Extend. From my reading so far I don't think that's true. Is it? I don't want to go that way if it really isn't a constraint.
In other words, is there a limit on connections through the proxy service depending on the number of nodes in the cluster?
Somewhat related to the above: at the very most, I am going to incur some performance penalty while pushing to the cache only if I go the Extend way, right?
Partitioned cache vs. near cache: both read latency and having the most up-to-date cache are extremely critical (the most important question I have).
I'd really like to see the benefit of going to POF instead of, let's say, serialization/Externalizable/protobuf. Can Coherence support protobuf out of the box? (Maybe for later on.)
There's no technical limitation to the number of connections a Coherence Extend proxy can support except normal network and hardware resource constraints. You will have to ask an Oracle sales person if there are licensing limitations.
There is some performance impact from using a proxy because you are adding an additional network hop (client to proxy to cluster). If you use POF serialization then the proxy does not have to serialize/deserialize values. It can just pass the object through in its serialized form. In most applications the performance impact of using a proxy is tiny because Coherence is highly optimized for network speed. You are not required to use a proxy unless your clients are .NET or C++, but there are advantages of isolating client performance from impacting the cache.
Near cache will improve retrieval performance dramatically if there are a number of frequently retrieved items for a client, since they will be found in-process.
POF offers performance improvements based on faster serialization/deserialization and more compact storage. It is always best to try with test data based on your real production data and measure the difference yourself. Coherence does not support protobuf out of the box.

Best way to determine the number of servers needed

How much traffic can one web server handle? What's the best way to see if we're beyond that?
I have an ASP.NET application that has a couple hundred users. Aspects of it are fairly processor-intensive, but thus far we have done fine with only one server running both SQL Server and the site. It's running Windows Server 2003 on a 3.4 GHz CPU with 3.5 GB of RAM.
But lately I've started to notice slowdowns at various times, and I was wondering what's the best way to determine whether the server is overloaded by the application's usage or whether I need to fix the application (I don't really want to spend a lot of time hunting down little optimizations if I'm just expecting too much from the box).
What you need is some info on capacity planning.
Capacity planning is the process of planning for growth and forecasting peak usage periods in order to meet system and application capacity requirements. It involves extensive performance testing to establish the application's resource utilization and transaction throughput under load. First, you measure the number of visitors the site currently receives and how much demand each user places on the server, and then you calculate the computing resources (CPU, RAM, disk space, and network bandwidth) that are necessary to support current and future usage levels.
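As a toy version of that calculation (all the input numbers below are illustrative assumptions, not measurements from the asker's application):

```python
# Back-of-envelope capacity calculation in the spirit of the paragraph
# above. Every input here is an illustrative assumption.
visitors_per_day = 500      # assumed traffic
requests_per_visit = 50     # assumed pages/assets per visit
cpu_ms_per_request = 40     # ideally measured under load
peak_factor = 5             # peak traffic vs. daily average

avg_rps = visitors_per_day * requests_per_visit / 86_400
peak_rps = avg_rps * peak_factor
cpu_cores_needed = peak_rps * cpu_ms_per_request / 1000

print(f"average: {avg_rps:.2f} req/s, peak: {peak_rps:.1f} req/s")
print(f"CPU needed at peak: {cpu_cores_needed:.2f} cores")
```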
If you have access to some profiling tools (such as those in the Team Suite edition of Visual Studio) you can set up a test server, run some synthetic requests against it, and see if there's any specific part of the code taking unreasonably long to run.
You should probably check some graphs of CPU and memory usage over time before doing this, to see if the hardware can even be the problem. (A number akin to the UNIX "load average" could be a useful metric; I don't know if Windows has anything like it. It's basically the average number of threads that want CPU time in each time slice.)
Also check the obvious: that you aren't running out of bandwidth.
Measure, measure, measure. Rico Mariani always says this, and he's right.
Measure req/sec, RAM, CPU, Sessions, etc.
You may come up with a caching strategy (output caching, data caching, cache dependencies, and so on).
See also how your SQL Server is doing; indexes are a good place to start, but not the only thing to look at.
On that hardware, a .NET application should be able to serve about 200-400 requests per second. If you have only a few hundred users, I doubt you are seeing even 2 requests per second, so I think you have a lot of capacity on that box, even with SQL Server running.
Without knowing all of the details, I would say no, you will not see any performance improvement by adding servers.
By the way, if you're not using the Output Cache, I would start there.
