Does an increase in the number of requests to the server make the website slow? - asp.net

On my office website, each webpage uses 3 CSS files, 2 JavaScript files, 11 images and 1 page request, for a total of 17 requests to the server. What happens if 10,000 people visit the site?
Could the extra requests slow the website down?
And would the server have any issues under that much traffic?
My small office server has:
Intel i3 processor
Nvidia 2 GB graphics card
Windows Server 2008
8 GB DDR3 RAM and
500 GB hard disk.
The website is developed in ASP.NET.
The connection is 10 Mbps download and 2 Mbps upload, using a static IP address.

There are many reasons a website may be slow.
A sudden spike in traffic.
Extremely large or unoptimized graphics.
A large number of external calls.
Server issues.
All websites should have optimized images, Flash files, and videos. Large media files slow down the overall loading of each page. Optimize each image; PNG compression can often produce better-looking images at a smaller file size. You could also run a traceroute to your site to check for network problems.
Hope this helps.

This question is impossible to answer because there are so many variables. It sounds like you're hypothesising that you will have 10,000 simultaneous users; do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache; you can use it to simulate 10,000 requests to your server and see how the response time holds up. At the same time you can run Performance Monitor in Windows to diagnose whether there are any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests
concurrently, to the specified URL, in this example,
"http://www.yahoo.com".
I don't think that downloads your page dependencies (js, css, images), but there probably are other tools you can use to simulate that.
I'd recommend that you ensure you enable compression on your site and set up caching, as this will significantly reduce the load and the number of requests for very little effort.
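For example, here is a minimal web.config sketch, assuming IIS 7+ with the static and dynamic compression modules installed; the 7-day client cache duration is only an illustration:
<!-- minimal sketch: enable compression and client-side caching of static files -->
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>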

Rather than the hardware, you should think about your server's upload capacity. With only 2 Mbps of upload bandwidth, that is likely to be your bottleneck.

The most likely reason is that one request holding the session lock blocks all the other requests from the same session.
If you don't use session state, turn it off and check again.
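A minimal sketch of turning it off in web.config, or of avoiding the exclusive lock on individual pages with a read-only session:
<!-- minimal sketch: disable session state site-wide -->
<system.web>
  <sessionState mode="Off" />
</system.web>
<!-- or, per page, take only a read lock instead of the exclusive one -->
<%@ Page Language="C#" EnableSessionState="ReadOnly" %>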
Related:
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous

ASP.NET Web Services + IIS performance diagnostics [closed]

There is a need to find a performance bottleneck in a server application under big load. The application consists of a single web services instance (.asmx) and some files that are requested over HTTP from time to time. My plan to solve this is to 1) push the server into an exceptional situation where it starts failing somehow, and 2) analyze performance counters and logs at that moment to deduce what kind of calls caused it.
To start achieving this, I've implemented a special client that issues both types of requests and repeats the respective cycles indefinitely, hoping that at some point I'll get errors during the WebMethod/GET URL requests (NB: standard existing solutions like JMeter and WAPT can't be used due to the complexity of the services usage scenario). So far what I am observing is increased response times on the service calls and some network timeout exceptions while loading files (using HttpClient, which throws OperationCanceledException; that counts as a timeout according to this thread). That is strange, by the way, because the files are a few KB in size while the service methods return 5-10 MB of data per request, so I thought the "larger" requests would be more likely to fail first.
Perfmon shows increased CPU load and absolutely no memory spikes/leaks. The Request Execution Time counters are pretty random and look irrelevant, and the Queue Lengths are always 0.
That said, it looks like IIS handles my improvised DDoS well, which at the same time makes the testing approach ineffective (increased response times mean more active requests held in memory on the test client, which eventually runs out of memory, even though I'm already discarding the data right after I receive it without doing anything with it).
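For what it's worth, here is a minimal sketch of how the test client could cap the number of in-flight requests so that slow responses don't pile up in memory; the URL, counts and timeout are placeholders, not the poster's actual values:
// minimal sketch: bound concurrency with SemaphoreSlim and discard bodies as they stream in
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class LoadClient
{
    static async Task Main()
    {
        var http = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
        var gate = new SemaphoreSlim(100);                // at most 100 requests in flight
        var tasks = new List<Task>();

        for (int i = 0; i < 10000; i++)
        {
            await gate.WaitAsync();
            tasks.Add(Task.Run(async () =>
            {
                try
                {
                    using (var resp = await http.GetAsync(
                        "http://server/Service.asmx/SomeMethod",      // placeholder URL
                        HttpCompletionOption.ResponseHeadersRead))
                    {
                        await resp.Content.CopyToAsync(Stream.Null);  // stream and discard the body
                    }
                }
                catch (OperationCanceledException) { /* counts as a timeout */ }
                finally { gate.Release(); }
            }));
        }
        await Task.WhenAll(tasks);
    }
}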
More details: the server machine has 4 x 3 GHz cores and 4 GB of RAM. I generate a load of 50-100 requests per second, which results in 10-20 MB/sec of bandwidth (the test clients sit on a VM inside the server's datacenter, with a 4 Gbps NIC). A 30-minute testing session is ~10-30 GB of pure data transfer between server and client.
How can I actually make Web Service/IIS go down?
Firstly, I wouldn't write my own load testing tool; there are plenty available. I've used JMeter (open source). You can use JMeter (and other similar tools) to send both POST and GET parameters, cookies and other HTTP headers - though admittedly, this does become challenging for complex cases.
Next, make sure your problem really is the server, and not the other infrastructure - network, routers, firewalls etc. all have maximum capabilities, and may be the root cause of the problem. Most of them have logging and reporting tools. For instance, I've seen tests report a throughput issue when they reached the maximum capacity of the firewall; the servers were not even close to breaking point. This happened because we had included a rather large binary file in the test cases, which normally would be served from a CDN.
Next, on the whole it's unlikely that serving static HTTP requests is the problem; IIS is really, really good at that. On the kind of hardware you mention, I'd expect it to handle many thousands of requests per second for static files.
In most situations, it's the dynamic pages that cause the problem: your .asmx. So I'd ignore all the static files in the load testing and focus on the .asmx. On the kind of hardware you mention, you'll probably need to generate many hundreds of requests per second to stress it, assuming the .asmx endpoints are working properly.
Working on the assumption that your web server is tuned correctly and the .asmx scripts are reasonably performant, I'd expect to need at least twice the CPU and memory capacity on the test side as the server has in order to bring it to breaking point (this is based on my experience with JMeter, which is not as efficient as my web applications, but does make it easy to deploy multiple test clients). So in your case, I'd look for 2 machines matching your server's specification.
With JMeter (and pretty much all the other load testing tools I've worked with), you can fairly easily use multiple machines as load test clients; I've also used Cloud-based load generation using JMeter.
I'm not totally sure why this rule of thumb is true - but I've observed it over multiple projects.

asp.net high number of Request Queued and Context switching

We have a fairly popular site that gets around 4 million users a month. It is hosted on a dedicated box with 16 GB of RAM and 2 processors with 24 cores in total.
At any given time the CPU is under 40% and memory usage is under 12 GB, but at peak traffic we see very poor performance; the site becomes very, very slow. We have 2 app pools, one for our main site and one for our forum, and only the main site is slow. We don't have any CPU or memory restrictions per app pool.
I have looked at the performance counters and saw something very interesting: at our peak time, for some reason, requests are being queued. The overall context-switching numbers are very high, around 30,000-110,000.
As I understand it, high context switching is caused by locks. Can anyone give me example code that would cause a high number of context switches?
I am not too concerned with the context switching, and I don't think the numbers are huge. You have a lot of threads running in IIS (since it's a 24-core machine), and higher context-switching numbers are expected. However, I am definitely concerned with the request queuing.
I would do several things and see how it affects your performance counters:
Your server CPU is evidently under-utilized, since you run below 40% all the time. You can try to set a higher value for "Threads per processor limit" in IIS until you get to 50-60% utilization. An optimal value of threads per core is, by the book, 20, but it depends on the scenario and you can experiment with higher or lower values. I would recommend trying a value >= 30. Low CPU utilization can also be a sign of blocking IO operations.
Adjust the "Queue Length" setting in IIS. If you have configured the "Threads per processor limit" to be 20, then you should configure the Queue Length to be 20 x 24 cores = 480. Again, if requests are getting queued, that can be a sign that all your threads are blocked serving other requests or waiting for an IO response.
Don't serve your static files from IIS. Move them to a CDN, Amazon S3 or whatever else. This will significantly improve your server performance, because thousands of requests will go somewhere else! If you MUST serve the files from IIS, then configure IIS file compression. In addition, use expires headers for your static content so it gets cached on the client, which will save a lot of bandwidth.
Use async IO wherever possible (reading/writing from disk, DB, network, etc.) in your ASP.NET controllers, handlers, etc. to make sure you are using your threads optimally (see the sketch after this list). Blocking the available threads with blocking IO (which is done in 95% of the ASP.NET apps I have seen in my life) can easily cause the thread pool to be fully utilized under heavy load, and queuing occurs.
Do a general optimization to reduce the number of requests that hit your server and the processing time of individual requests. This can include minification and bundling of your CSS/JS files, refactoring your JavaScript to make fewer round trips to the server, refactoring your controller/handler methods to be faster, etc. I have added links below to Google's and Yahoo's recommendations.
Disable ASP.NET debugging in IIS.
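Here is a minimal sketch of the async-IO point above (the connection string name and query are placeholders, not your code): an async MVC action that awaits its database call instead of blocking a thread-pool thread while the query runs.
// minimal sketch: the request thread returns to the pool while the DB work is pending
using System.Configuration;
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportsController : Controller
{
    public async Task<ActionResult> Index()
    {
        var cs = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;  // placeholder name
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Posts", conn))       // placeholder query
        {
            await conn.OpenAsync();                      // no thread is blocked while connecting
            var count = await cmd.ExecuteScalarAsync();  // ...or while the query executes
            return View(model: count);
        }
    }
}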
Google and Yahoo recommendations:
https://developers.google.com/speed/docs/insights/rules
https://developer.yahoo.com/performance/rules.html
If you follow all this advice, I am sure you will see some improvement!

Enable dynamic compression in an app on a Gbps LAN?

I have a LAN with 1,000 clients and 1 Gbps speeds.
One application is hosted in IIS 7.5.
Fact: a one-megabyte response is transferred between the server and the client in no more than 30 milliseconds. The connection is very fast.
Fact: some clients have older PCs (Windows XP, IE7, Pentium 4).
I think that dynamic compression is not needed in this case, because the problem is not bandwidth but the performance of the client computers.
Do you recommend disabling compression?
My pages have a lot of JavaScript. On every postback I refresh the page with JavaScript, Ajax and JSON. In some cases, when the HTML is too big, the browser gets a little unresponsive. I think that compression is causing this problem.
Any comments?
A useful scenario for compression is when you have to pay for bandwidth and would like to speed up the download of large pages, but it does create a little work for the client, which has to decompress the data before rendering it.
Turn it off.
You don't need it for serving pages over a high-speed LAN.
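If you do turn it off, a minimal web.config sketch for IIS 7.5 would be something like the following (static compression is left on, since compressed static files are cached by IIS and cost little):
<!-- minimal sketch: keep static compression, disable dynamic compression -->
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="false" />
</system.webServer>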
I definitely don't think you need the compression. But you are shooting in the dark here: get yourself an HTTP debugger, such as the developer tools included in Google Chrome, and see which parts of the pages are slow.

Harvesting dynamic HTTP content to produce replicated static HTTP content

I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution that is just configuration of open-source technologies.
Thanks
I've used Squid before to reduce load on dynamically-created RSS feeds, and it worked quite well. It just takes some careful configuration and tuning to get it working the way you want.
Using a primed cache server is an excellent idea (I've done the same thing using wget and Squid). However, it is probably unnecessary in this scenario.
It sounds like your data is fairly static and the problem is server load, not network bandwidth. Generally, the problem exists in one of two areas:
Database query load on your DB server.
Business logic load on your web/application server.
Here is a JSP-specific overview of caching options.
I have seen huge performance increases by simply caching query results. Even adding a cache with a duration of 60 seconds can dramatically reduce load on a database server. JSP has several options for in-memory cache.
Another area available to you is output caching. This means that the content of a page is created once, but the output is used multiple times. This reduces the CPU load of a web server dramatically.
My experience is with ASP, but the exact same mechanisms are available on JSP pages. In my experience, with even a small amount of caching you can expect a 5-10x increase in max requests per sec.
I would use tiered caching here; deploy Squid as a reverse proxy server in front of your app server as you suggest, but then deploy a Squid at each client site that points to your origin cache.
If geographic latency isn't a big deal, then you can probably get away with just priming the origin cache like you were planning to do and then letting the remote caches prime themselves off that one based on client requests. In other words, just deploying caches out at the clients might be all you need to do beyond priming the origin cache.
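For reference, a minimal sketch of a Squid reverse-proxy (accelerator) configuration in front of the origin app server; the hostnames, port and cache size are placeholders, not a tested configuration:
# minimal sketch: Squid as a reverse proxy with a disk cache
http_port 80 accel defaultsite=www.example.com
cache_peer app.internal parent 8080 0 no-query originserver name=origin
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site
cache_dir ufs /var/spool/squid 10000 16 256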

Anyone using Memcached with ASP.NET on a distributed farm?

We have 22 HTTP servers, each running its own individual ASP.NET cache. They read from a read-only DB that is only updated during off-peak hours.
We use a file dependency to invalidate the cache, prompting the servers to "new up" their caches. If this is accidentally done during peak hours, it risks bringing down our DB cluster due to the sudden deluge of open connections.
Has anyone used memcached with ASP.NET in this distributed form? It seems to me that it would offer a huge advantage of having to only build up one cache (and hit the DB 21 times less), while memcached would handle distributing it on each box.
If you have, do you place it on the same boxes as the HTTP servers, or do you run a separate cache tier? How well does it scale; can we expect it to need powerful servers? Our working dataset is not huge (we fit it into 4 GB of memory on each HTTP box just fine).
How do you handle invalidation?
Looking for experiences and war stories.
EDIT: Win2k3, IIS6, 64-bit servers...4 gigs per box (I believe, we may have upped it to 16 gigs when we changed to 64-bit servers).
"memcached would handle distributing it on each box"
memcached does not distribute or replicate a cache to each box in a memcached farm. The memcached client basically hashes the key and chooses a cache server based on that hash. When one of the memcached servers fail you will lose whatever cached items existed on that server, however, the client will recognize the failure and begin writing values to a different server. This being the case, your code needs to account for missing items in the cache and reset them if necessary.
This article discusses the memcached architecture in more detail: How memcached works.
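To make the miss handling concrete, here is a minimal cache-aside sketch, assuming the Enyim.Caching memcached client; the key name and the database loader are illustrative only:
// minimal sketch: always be prepared to rebuild the value, because a failed node is just a cache miss
using System;
using System.Collections.Generic;
using Enyim.Caching;
using Enyim.Caching.Memcached;

public class ProductCache
{
    // Enyim reads its list of memcached servers from the application's config file.
    private static readonly MemcachedClient Cache = new MemcachedClient();

    public IList<string> GetProductNames()
    {
        var names = Cache.Get<IList<string>>("products:names");
        if (names == null)                      // miss: evicted, never set, or the owning node failed
        {
            names = LoadProductNamesFromDb();   // placeholder for the real read-only DB query
            Cache.Store(StoreMode.Set, "products:names", names, TimeSpan.FromMinutes(10));
        }
        return names;
    }

    private static IList<string> LoadProductNamesFromDb()
    {
        return new List<string> { "..." };      // stand-in data
    }
}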
Best practice (according to the memcached site) is to run memcached on the same box as your web server app, or else you're making calls over the network (which isn't all that bad, but it's not optimal). If you're running a 64-bit app server (which you probably should be if you're going to run memcached), then you can load each of the servers up with plenty of memory and it will be available to memcached. There's not much in the way of CPU resources used by memcached, so if your current app server isn't very taxed, it will remain that way.
Haven't used them together, but I've used them both on separate projects.
Last I saw, the documentation explicitly said that sharing a box with the web server was OK.
Memcached really only needs RAM, and if you take your ASP.NET cache out of the equation, how much RAM is your web server actually using? Probably not much. It won't compete much with your web server for CPU, and it doesn't need disk at all. You might consider segmenting the memcached traffic off onto its own network (if you don't already) away from the incoming web requests.
It worked well and was fast; I didn't have any problems with it.
Oh, invalidation was explicit on the project I used it on. Not sure what other modes there are for that.
If you want replication across your memcached servers, then it may be worth a look at repcached. It's a patch for memcached that handles the replication part.
Worth checking out Velocity, which is a distributed cache provided by Microsoft. I cannot give you a point-by-point comparison to memcached, but Velocity is integrated with ASP.NET and will continue to get more development and integration.
