I need to compute the rate of requests (requests/second) arriving at an IIS web server.
I am pretty sure that IIS maintains this information internally.
I have spent a reasonable amount of time trying to find a way to configure IIS so that it writes the rate to its log file.
Guess what? I was defeated.
So I have decided to get some help.
Does anyone know how to make IIS expose the incoming request rate?
Since there isn't an accepted answer, here is how to obtain this information natively with IIS:
Open IIS and, on your home server, open the "Logging" feature. By default, IIS logs every HTTP request it receives, along with the time and some data about the origin.
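With logging enabled, the request rate can be derived from the log itself. A minimal Python sketch, assuming the default W3C layout where the first two columns are date and time (adjust the indexes for your own #Fields line):

```python
from collections import Counter

def requests_per_second(log_lines):
    """Count requests per second from IIS W3C log lines.

    Assumes the first two whitespace-separated columns are
    date and time, as in the default W3C field selection.
    """
    counts = Counter()
    for line in log_lines:
        if line.startswith("#") or not line.strip():
            continue  # skip W3C directives and blank lines
        date, time_ = line.split()[:2]
        counts[f"{date} {time_}"] += 1  # one bucket per second
    return counts

sample = [
    "#Fields: date time cs-method cs-uri-stem sc-status",
    "2023-05-01 10:00:00 GET /index.html 200",
    "2023-05-01 10:00:00 GET /style.css 200",
    "2023-05-01 10:00:01 GET /app.js 200",
]
print(requests_per_second(sample))
```

From the per-second buckets you can take the peak or an average over any window you like.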
The Windows performance counters expose this information too (for example, the per-site counters under the Web Service object, such as Total Method Requests/sec).
UPDATE: This was my mistake; see my comment below. CloudFront now works great with the new settings.
Sometimes DNS takes 600 ms, and then there is another half-second wait, which leaves a 90 KB file waiting more than 1 second. Sometimes the Pingdom wait time even shows 1 second. If I run another test, it sometimes drops to 90 ms altogether.
I understand that the first request takes more time because CloudFront first needs to fetch the file from our server. I set the cache time to 86400 s, which means it should serve the file from cache for a whole 24 hours. But if I run Pingdom just 2 hours after the first test, it is very slow again.
Below are my results and settings. Am I missing something?
In most cases it's the DNS that causes the delay, because Amazon itself is very scalable.
I had similar issues with my ISP and was able to resolve them quickly by changing the DNS servers.
Try changing your DNS to Google DNS:
IPv4
8.8.8.8
8.8.4.4
IPv6
2001:4860:4860::8888
2001:4860:4860::8844
Google Public DNS documentation
Or use OpenDNS:
208.67.220.220
208.67.222.222
OpenDNS documentation
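To confirm whether DNS is the bottleneck before and after switching resolvers, you can time lookups yourself. A rough Python sketch; it measures whatever resolver this machine is configured to use, and `localhost` below is just a placeholder hostname:

```python
import socket
import time

def time_dns_lookup(hostname, tries=3):
    """Measure how long getaddrinfo takes for a hostname, in seconds.

    Results depend on the resolver configured on this machine; the
    first call is often slow (cold cache) and later calls are fast.
    """
    timings = []
    for _ in range(tries):
        start = time.perf_counter()
        socket.getaddrinfo(hostname, 80)
        timings.append(time.perf_counter() - start)
    return timings

# Replace "localhost" with your CloudFront distribution's hostname
for t in time_dns_lookup("localhost"):
    print(f"{t * 1000:.1f} ms")
```

If the first lookup is consistently in the hundreds of milliseconds, the resolver is a likely culprit.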
CloudFront is not only scalable; it also aims to eliminate bottlenecks and speed up delivery.
AWS CloudFront is a service built for low latency and fast transfer rates.
Here are some possible causes of slowness when using CloudFront (these cover most problems):
The requesting edge location may be receiving a large number of requests.
The edge server closest to the client may be farther away than the origin web server (geographic delay).
DNS lookups can be delayed.
This is less likely, but check that the response really came from CloudFront; the X-Cache response header should report a hit or miss from CloudFront.
The request may have been a cache miss.
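One quick check for the last two points: CloudFront reports cache status in the X-Cache response header. A small Python helper for classifying a response by its headers (the header values follow CloudFront's documented "Hit from cloudfront" / "Miss from cloudfront" format):

```python
def cloudfront_cache_status(headers):
    """Classify a response by its X-Cache header from CloudFront.

    CloudFront sets X-Cache to values like "Hit from cloudfront",
    "Miss from cloudfront", or "Error from cloudfront".
    """
    value = headers.get("X-Cache", "").lower()
    if "hit from cloudfront" in value:
        return "hit"
    if "miss from cloudfront" in value:
        return "miss"
    return "unknown"

print(cloudfront_cache_status({"X-Cache": "Miss from cloudfront"}))  # miss
```

If a URL you fetched two hours ago comes back as a miss, the object was evicted or your TTL headers aren't being honored at that edge.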
Detailed troubleshooting is difficult without knowing what the test was and under what conditions it ran.
If logging is enabled, further troubleshooting is possible, so it is generally recommended to enable logging.
If you have any questions, please feel free to ask!
Thank you.
I'd like to see which web pages my Classic ASP site is serving and how much data is sent out, in preparation for enabling GZip compression on the server. It is running Windows Server 2003.
Is there a tool/utility/script to watch or log traffic and report the bytes going out?
Diodeus is right in saying that you need a web log analyzer.
My current web host uses SmarterStats, which has a large range of customisable reports and is very good for looking at things like traffic volume, as it visualises it all in the browser for you.
If you are running your own server then you can get a free edition which can be used with just one website - http://www.smartertools.com/smarterstats/free-web-analytics-seo-software.aspx
You need a log analyzer for IIS. Webtrends used to be quite popular. I used it a dog's life ago. Most use Google Analytics these days, but it's a different beast and tracks traffic, not data transfer volume. You really need to look at the server logs for that.
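If you enable the sc-bytes field in IIS's W3C logging, you can total the bytes sent per page yourself without a full analyzer. A rough Python sketch that reads the field positions from the #Fields directive:

```python
from collections import defaultdict

def bytes_sent_per_page(log_lines):
    """Sum sc-bytes per URI from IIS W3C log lines.

    sc-bytes must be enabled in the IIS logging field selection;
    column positions are taken from the #Fields directive.
    """
    totals = defaultdict(int)
    field_names = []
    for line in log_lines:
        if line.startswith("#Fields:"):
            field_names = line.split()[1:]
            continue
        if line.startswith("#") or not line.strip():
            continue
        row = dict(zip(field_names, line.split()))
        totals[row["cs-uri-stem"]] += int(row["sc-bytes"])
    return totals

sample = [
    "#Fields: date time cs-uri-stem sc-status sc-bytes",
    "2023-05-01 10:00:00 /index.asp 200 15000",
    "2023-05-01 10:00:01 /index.asp 200 15000",
    "2023-05-01 10:00:02 /about.asp 200 4200",
]
print(bytes_sent_per_page(sample))
```

Comparing these totals before and after enabling GZip gives you the actual savings per page.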
Say a website on my localhost takes about 3 seconds to do each request. This is fine, and as expected (as it is doing some fancy networking behind the scenes).
However, if I open the same URL in several tabs (in Firefox) and then reload them all at the same time, each page appears to load sequentially rather than all at once. What is this all about?
I have tried it on Windows Server 2008 IIS and on Windows 7 IIS.
It really depends on the web browser you are using and how tab support in it has been programmed.
It is probably using a single thread to load each tab in turn, which would explain your observation.
Edit:
As others have mentioned, it is also a very real possibility that the web server running on your localhost is single-threaded.
If I remember correctly, the HTTP/1.1 spec recommended limiting the number of concurrent connections to the same host to 2 (modern browsers typically allow around 6). This is one reason high-load websites spread assets across CDNs (content delivery networks) and multiple hostnames.
network.http.max-connections 60
network.http.max-connections-per-server 30
The above two Firefox preferences determine how many connections Firefox makes to a server. If the limit is reached, further requests are queued behind the open connections.
Each browser implements this in its own way, scheduling requests to maximize performance. It also depends on the server (your localhost, which is slower).
Your local web server might be configured with only one worker thread, so each request waits for the previous one to finish.
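The effect of a per-host connection cap can be simulated. In this Python sketch, threads stand in for browser connections and a short sleep stands in for your slow page; a low cap serializes the "tabs" exactly as described:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(page):
    """Stand-in for one HTTP request to a slow localhost server."""
    time.sleep(0.05)  # pretend each request takes 50 ms
    return page

def load_tabs(n_tabs, max_connections):
    """Open n_tabs 'tabs', allowing at most max_connections in flight."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=max_connections) as pool:
        list(pool.map(fetch, range(n_tabs)))
    return time.perf_counter() - start

two = load_tabs(6, max_connections=2)   # old HTTP/1.1 guideline
six = load_tabs(6, max_connections=6)   # typical modern browser
print(f"2 connections: {two:.2f}s, 6 connections: {six:.2f}s")
```

With 6 tabs and 2 connections, the tabs complete in three waves; with 6 connections they all finish together.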
I would like to know how people deal with logging across multiple web servers. E.g. assume there are two web servers, and during a user's session some events are serviced by one and some by the other. How would you go about logging the session's events coherently in one place (without, e.g., creating a single point of failure)? Assume we are using ASP.NET MVC and log4net.
Or am I looking at this the wrong way: should I log separately and then merge later?
Thanks,
S
UPDATE
Please also assume that the load balancers will not guarantee that a session sticks to one server.
You definitely want your web servers to log locally rather than over a network. You don't want potential network outages to prevent logging, and you don't want the overhead of a network operation per log write.

Set up log rotation and keep all your web servers' clocks synced. When rotation rolls the log files over, ship the completed files from each web server to a common destination where they can be merged. I'm not a .NET guy, but you should be able to find software to merge IIS logs (or those of whatever web server you're using). Then you analyze the merged logs.

This strategy is optimal except when you need real-time log analysis. Do you? Probably not. It's fairly resilient to failures (assuming you have redundant disks): if a server goes down, you can just reboot it and reprocess any log-ship, log-merge, or log-analysis operations that were interrupted.
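Once the files are shipped to a common destination, the merge step is straightforward as long as each line starts with a sortable timestamp and the clocks were in sync. A Python sketch with a hypothetical line format:

```python
import heapq

def merge_logs(*log_streams):
    """Merge already time-sorted log streams into one ordered stream.

    Assumes each line starts with a sortable timestamp (e.g. ISO 8601),
    which is why the servers' clocks must be kept in sync.
    """
    return list(heapq.merge(*log_streams))

server_a = [
    "2023-05-01T10:00:00 A request /login",
    "2023-05-01T10:00:04 A request /cart",
]
server_b = [
    "2023-05-01T10:00:02 B request /browse",
]
for line in merge_logs(server_a, server_b):
    print(line)
```

`heapq.merge` streams the inputs rather than loading them fully, so it copes with large rotated log files.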
An interesting alternative solution: use two log appenders.
The first logs to the local machine, so in case of network failure you still keep this log.
The second logs to a remote Unix syslog service (which of course requires a very reliable network connection).
I used a similar approach a long time ago and it worked really well; there are a lot of nice tools for analyzing Unix logs.
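In log4net this would be two appenders in the XML config (a FileAppender plus a RemoteSyslogAppender); the same idea sketched with Python's standard logging module, with a placeholder syslog address:

```python
import logging
import logging.handlers
import tempfile

# Primary appender: a local file, which survives network outages.
log_path = tempfile.NamedTemporaryFile(suffix=".log", delete=False).name
file_handler = logging.FileHandler(log_path)

# Secondary appender: remote syslog over UDP (address is a placeholder;
# point it at your real syslog host).
syslog_handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))

logger = logging.getLogger("webapp")
logger.setLevel(logging.INFO)
logger.addHandler(file_handler)
logger.addHandler(syslog_handler)

logger.info("user session started")  # goes to both appenders
```

Because each handler fails independently, a syslog outage never loses the local copy.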
Normally your load balancer would lock the user to one server after the session is started (sticky sessions). Then you wouldn't have to deal with logs for a specific user being spread across multiple servers.
One thing you could try is to put the log file in a location accessible by all web servers and have log4net configured to write to it. This may be problematic, however, with multiple processes trying to write to the same file. I have read that NLog may work better in this scenario.
Also, the log4net FAQ has a question and a possible solution for this exact problem.
While tracing the active connections on my DB, I found that sometimes they exceed 100. Is that normal?
After a few minutes the count returns to 20 or 25 active connections.
More details about my problem:
Traffic on the site is around 200 visitors per day.
Why am I asking? Because the default Max Pool Size in the ASP.NET connection string is 100.
Also, I am using connection pooling on the website in IIS.
That really depends on your site and your traffic. I've seen a site peak at over 350 active connections to SQL Server during its busiest time. That was for roughly 7,000 concurrent web users, on two web servers, plus various backend processes.
Edit
Some additional information we need to give you a better answer:
How many web processes hit your SQL Server? For example, are you using web gardens? Do you have multiple servers, and if so, how many? This is important because then you can calculate how many connections you can have by figuring out how many worker threads per process you have configured. Assume the worst case: each thread is running, which would add a connection to the pool.
Are you using connection pooling? If so, you're going to see connections stick around after the user's request ends. By default it's enabled.
How many concurrent users do you have?
But I think you're going about this wrong: the issue is that no free connections are available in your pool. The first thing I'd look for is leaked connections (connections held open longer than they should be). For example, passing a data reader up to the web page could be a sign of this.
Next, evaluate the default settings. Maybe you should run a web garden, which should give you more connections, or increase the number of connections available.
The last thing I would do is try to optimize queries, as in your last question. Say you cut those queries in half: all you've done is buy yourself more time until more users come onto the system, and you're right back here, only this time you might not be able to optimize that query again.
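The "pool exhausted" failure mode caused by leaked connections is easy to model. A toy Python sketch (not ADO.NET's actual pool, just the same idea of a fixed-size pool with a checkout timeout):

```python
import queue

class ConnectionPool:
    """Toy fixed-size pool modelling a Max Pool Size limit."""

    def __init__(self, max_size):
        self._free = queue.Queue()
        for i in range(max_size):
            self._free.put(f"conn-{i}")

    def acquire(self, timeout=0.1):
        try:
            return self._free.get(timeout=timeout)
        except queue.Empty:
            raise RuntimeError("pool exhausted: no free connections")

    def release(self, conn):
        self._free.put(conn)

pool = ConnectionPool(max_size=2)
a = pool.acquire()
b = pool.acquire()
# a and b were never released ("leaked"), so the next caller times out:
try:
    pool.acquire()
except RuntimeError as e:
    print(e)
```

A leaked connection never returns to the queue, so each leak permanently shrinks the pool until the limit is hit, which is exactly what a forgotten data reader does to the real pool.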
You're leaving out some details, making it difficult to answer correctly, but...
It depends, really. If you're not using connection pooling, then each time a page that requires database access is hit, a new connection is opened. So sure, it could be perfectly normal.
I would also look into caching: cache pages, cache query results, etc. You might be surprised how many times you go back to the database just to get the list of US states...
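The states example can be sketched as a small time-based cache. ASP.NET's Cache object gives you this out of the box; the Python below just illustrates the idea, and all names in it are made up:

```python
import time

def make_cached(fetch, ttl_seconds):
    """Wrap a query function with a simple time-based (TTL) cache."""
    cache = {}

    def cached(key):
        hit = cache.get(key)
        if hit and time.monotonic() - hit[1] < ttl_seconds:
            return hit[0]  # served from cache, no database round trip
        value = fetch(key)
        cache[key] = (value, time.monotonic())
        return value

    return cached

calls = []
def query_states(key):
    calls.append(key)  # stands in for a database query
    return ["Alabama", "Alaska", "..."]

get_states = make_cached(query_states, ttl_seconds=3600)
get_states("us_states")
get_states("us_states")
print(f"database queried {len(calls)} time(s)")  # 1
```

For data that changes once a decade, even a short TTL removes nearly all of those repeated round trips, and every connection you don't open is one that stays free in the pool.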