Application takes too long to respond - ASP.NET

I'm using the DevExpress grid view to display and save data. My problem is that when the browser first loads, the application responds normally, but after a while (less than 5 minutes) it becomes very slow and takes a long time to respond to any action. I tried extending the recycle interval (regular time interval) in IIS, but it seems the problem is not with IIS at all.
I also set the session timeout to 60 minutes, and likewise it made no difference.
Note that:
All of my requests are Ajax calls.
When I run the application on the development server (in Visual Studio), it responds normally.
Could anyone please suggest where the problem is?

It's hard to say where the problem is, but here are the first things that come to mind:
It may be that connections are being opened but never closed - in general, objects not being disposed (see the sketch after these tips).
Make sure that the database connection pool settings are correct.
Check the IIS logs for any unusual responses, i.e. errors (HTTP 500, 404, etc.).
Consider fetching multiple result sets in one database call, to reduce the number of round trips to the database.
It may also be that you have so many plugins installed in your browser that they are making it slow.
These are just a few tips; I hope they help.
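For the disposal point, here is a minimal sketch of the pattern to look for, assuming plain ADO.NET against SQL Server (the connection string, table, and class names are placeholders, not from the question):

    // Wrapping the connection and command in using blocks guarantees Dispose()
    // runs even if the query throws, returning the connection to the pool.
    using System.Data.SqlClient;

    public static class GridDataAccess
    {
        // Placeholder connection string; "Max Pool Size" is shown only to
        // illustrate where the pool settings live.
        const string ConnStr =
            "Server=.;Database=AppDb;Integrated Security=true;Max Pool Size=100";

        public static int CountRows()
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM SomeTable", conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar();
            } // Dispose() here releases the connection back to the pool.
        }
    }

If connections are opened without a pattern like this, each leaked one stays checked out until the garbage collector finalizes it, and the pool slowly starves - which would match an app that starts fine and degrades after a few minutes.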

I investigated my problem further, and I guess the problem is the browser: when I run the application in Firefox the issue disappears completely and the app responds normally, but with IE or Chrome the problem reappears.

Related

Web app on a flaky Internet connection

I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers and sends each time back to the server. The server inserts the finish time in the database, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on the screen.
It uses jQuery Ajax calls, so the page doesn't get reloaded at all.
Everything works fine if the data connection is good.
If the data connection is bad, my first version of this page would put up a message saying that the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the browser whether it is online or offline (I realize this may not be the best way, but it works for concept testing).
When a new finish time is recorded (or updated) and we are offline, the JS just adds a notSent class to the finish time's tag. The finish place, and all of the finish places that would normally come from the server, are greyed out to indicate that the data is no longer valid (until it can communicate with the server).
When the browser finds itself back online, a simple jQuery each loop over every notSent element re-sends the AJAX requests, and if they all complete, it processes the returned finish place information and displays it as up to date.
It also disables all external links on the page while the browser is offline. This keeps the user from accidentally losing the data entry page by clicking a link that would give them a page-not-found error.
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they lose the data entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers come with the old warning: "you really shouldn't, and if you think you need to, you should rethink your design."
So, rethinking my design, I started learning about:
HTML5 local storage (I decided I don't need it, since my data is already stored in an input box).
The AppCache manifest, for controlling the caching of the page so that if it is reloaded in the browser while offline, the cached version is served. After much reading I came to the conclusion that this could work for a static page, but not for mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway.
Service Workers, which seem to be the likely future for controlling offline caching, but not all browsers support them, they are fairly cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring Service Workers until there is more support and there are better examples for dynamic content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and the server, and Background Sync to wake a Service Worker when you regain connectivity. If Service Worker support is not present in the browser, it can sync the next time your user opens the browser.
There is a similar example of deferred requests explained in the Service Worker Cookbook.

Debugging requests which are 'stuck' in an IIS worker process

In case of TL;DR: I basically need guidance on what tools are available to debug requests which are issued to IIS and stall inside a module.
I have a problem with an old ASP 2.0 app at the moment whereby it will periodically become unavailable, and recycling the app pool (horrible as that may be) doesn't bring it back up 100% of the time.
So, first of all, it presents itself as requests entering the app pool and getting trapped in the 'BeginRequest' state in the RewriteModule.
It is not a specific request which is always the first to experience this issue. The issue cannot be easily recreated either.
Eventually more requests join this backlog, and when it becomes 70+ deep the app pool fails to respond to pings from WAS and is forcibly recycled. Predictably it doesn't stop in time, and the old app pool is forced to stop. When the new app pool comes up, it either works just fine or instantly experiences the same issue as the outgoing one, and requests begin to queue again.
In issues like this, all the official guidance is understandably focused on why the RewriteModule may choke.
I have validated my redirections, and though they are complex, there are no obvious issues with the syntax (the XML validates).
Likewise, loading up the URL Rewrite Module in inetmgr parses the configs fine and shows them visually.
Basic stuff like permissions is all fine.
When the app is working normally, I also used Failed Request Tracing/Logging to look at the request pipeline for a sample URL which had stalled, and I can confirm that there is no circular logic or weird errors presenting - the request seems to be handled just fine. This also showed me how early in the pipeline the RewriteModule is invoked, and from this I really don't see how the issue could be app-related, as .NET isn't invoked at that point.
Annoyingly, when an app pool is experiencing this issue and I can throw in requests which just stall, Failed Request Tracing is no good, because a request actually needs to reach the end of its journey and fail; otherwise it refuses to log anything.
I resorted to taking process dumps of the affected w3wp.exe processes and running them through DebugDiag. Unfortunately, the only thing I can see is that threads are open accessing the RewriteModule, but precious little about what they are stuck on.
As anyone else would do, I've tried to trace the start of the issue back to recently installed patches or code changes, but nothing matches. Likewise, this is happening on 3 servers, otherwise I would try reinstalling the RewriteModule. Other sites on the same servers which invoke the RewriteModule are unaffected.
Has anyone else experienced issues like this? The net seems to have relatively little info on this case. Perhaps you can recommend further debugging tools or approaches for IIS which I can adapt to this scenario? This is sort of a cry for help from someone more used to Apache/Nginx - sorry for the long post.

When is load for IIS7 too much?

At a customer of ours, candidates take tests with our software. When a test finishes, some calculations are done on the server. Sometimes 200 candidates can end their test at the same time, so 200 calculations run concurrently. The calculations all seem to go fine, but some calls to the IIS7 server get back an HTTP error...
In Flex, this is the error:
code = "NetConnection.Call.Failed"
description = "HTTP: Status 200"
details = "http://servername/weborb.aspx"
level = "error"
Isn't status 200 OK? So what's wrong here? Is it even an IIS7 problem? Of the 200 candidates, 20 got this message. When they restarted their test, everything worked fine.
I have found this on the subject, but I wonder whether it has anything to do with my problem (next week our customer will do some stress tests, and I've already asked them to test whether the solution in that post works).
Some questions:
Can it be that IIS7 blocks certain HTTP calls when the load is too high?
How can you tell that IIS7 blocked those calls because of too much load?
Is it possible to configure these things?
Technically, in the future I would like to queue the calculations, but for now, there isn't time nor budget for that.
Application: Flex, WebORB, ASP.NET, IIS7 and SQL Server 2008. The server is Windows Server 2008.
This problem seems very familiar to me. We have a bunch of Flex widgets connected to one server side, and sometimes it also returns "NetConnection.Call.Failed". For us, it seems that IIS (and MSSQL behind it) cannot process all the requests in time, hence some of them time out.
Try checking how much time each request (and all requests together) takes, then check your timeout settings.
There are plenty of things you can do to fine-tune the performance of both your server and IIS.
To answer your questions:
A maximum concurrent connections limit (plus other settings) in IIS7 can be configured by selecting your website in IIS Manager and clicking 'Advanced Settings' in the Actions pane on the right. By default, though, this is a number much higher than 200.
Looking in the IIS log files, specifically at the returned status codes, can give you an indication of what went wrong. Equally, the Windows event log should tell you about any exceptions that have occurred.
I suggest you turn on load balancing between instances of IIS, or consider using nginx for load balancing.
Also set the limit of 200 users higher. In IIS, each user connecting to your application counts as one user instance, so at some point you will use up the 200 user slots. This is the default setting, and you can raise it to a much higher number.
Also set your timeout to a higher number.
Also look at Comet if you are trying to serve constantly changing results such as live data (stock, weather, chat, shoutbox).
Technically, in the future I would like to queue the calculations, but for now, there isn't time nor budget for that.
A queue isn't that hard to put together with a batch-processing script running off Windows' scheduled tasks. Just dump the results into a SQL DB, or if you're really lazy, insert rows in SQL with a serialized array, then have the candidates "come back" to see their results: "Please wait, your results are still processing." A rough sketch follows below.
It'd take you less time than waiting around on SO for a silver-bullet answer, in my opinion.
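As a rough sketch of that batch-processing idea, assuming a C# console app run every minute or so from Windows Task Scheduler (the CalcQueue table, its columns, and the Calculate method are all made up for illustration):

    // Hypothetical worker: picks up pending calculations queued by the web
    // app, runs them, and stores the results for the page to fetch later.
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    class CalcQueueWorker
    {
        const string ConnStr = "Server=.;Database=TestDb;Integrated Security=true";

        static void Main()
        {
            var pending = new List<Tuple<int, string>>();
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();
                using (var cmd = new SqlCommand(
                    "SELECT TOP 50 Id, Payload FROM CalcQueue WHERE Status = 'pending'",
                    conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        pending.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1)));
                }

                foreach (var item in pending)
                {
                    string result = Calculate(item.Item2); // the slow handicapping math
                    using (var update = new SqlCommand(
                        "UPDATE CalcQueue SET Status = 'done', Result = @r WHERE Id = @id",
                        conn))
                    {
                        update.Parameters.AddWithValue("@r", result);
                        update.Parameters.AddWithValue("@id", item.Item1);
                        update.ExecuteNonQuery();
                    }
                }
            }
        }

        static string Calculate(string payload)
        {
            return payload; // stand-in for the real calculation
        }
    }

The page then just polls (or checks on the candidate's next visit) whether its row has Status = 'done'.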

Web Services error on first attempt only

I've created a web service which is accessed by 3 other websites. In one instance, after the user enters some data and clicks a submit button, it goes off, does its thing, and returns true or false.
The problem is that when I click submit initially (after a period of inactivity), I get the error
"The remote name could not be resolved: (mydomainname.goes.here)"
However, if I click the submit button again immediately after, it works fine, and it will keep working until another period of inactivity.
Any suggestions where to begin?
EDIT: New data - I was ready to accept this being a DNS issue at the server's site, since I tested from 2 locations (home and office, with different ISPs) and the problem seemed consistent. However, at the last minute I tried it from my Sprint cellular data connection and never had this issue: the web service responds and works well every time I try it there, regardless of the time that has elapsed between trials. So I'm doubtful that it's a DNS issue after all.
"The remote name could not be resolved"
Something is wrong with your DNS/proxy configuration.
I've run into a similar problem before when calling web services on Apache from a .NET CF client. Try turning KeepAlive off on your web service client; it sounds like a very similar pattern to what I was seeing. The web service call would work on the first try, fail if there was an interval of around 10 seconds between calls, and work every time if there was a 1-second interval. I think it was a problem with the Apache configuration.
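If the client proxy is a classic .asmx-style one deriving from SoapHttpClientProtocol, the usual way to turn KeepAlive off is to override GetWebRequest in a partial class of the generated proxy; a sketch (the proxy class name here is a placeholder):

    using System;
    using System.Net;
    using System.Web.Services.Protocols;

    // "MyServiceProxy" stands in for whatever name the generated proxy has.
    public partial class MyServiceProxy : SoapHttpClientProtocol
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            var request = (HttpWebRequest)base.GetWebRequest(uri);
            // Open a fresh TCP connection per call instead of reusing one
            // the server may have silently dropped during the idle period.
            request.KeepAlive = false;
            return request;
        }
    }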

Browser timeouts while ASP.NET application keeps running

I'm encountering a situation where it takes a long time for ASP.NET to generate a reply with the web page (more than 2 hours), because the code-behind runs for a long time (a very long, slow loop).
The browser (both IE and Firefox) stops waiting for the reply (after about an hour) and gives a generic cannot-display-webpage error (similar to what you would see if you tried to navigate to a non-existent server).
At the same time, the ASP.NET app keeps going (I can see it in the debugger) and eventually completes.
Why does this happen? Are there any settings in web.config that influence this? I'm hoping there's a timeout setting I'm missing that's causing this.
Maybe a setting in IE or Firefox? But I thought they keep waiting as long as the server keeps the connection alive.
I'm experiencing this even when I launch the app in debug mode (with compilation debug="true") on my local machine from VS (so it's not running on IIS, but on the ASP.NET Development Server).
I know it's bad that it takes so long to generate the page, but that doesn't matter at this stage. Speeding it up would take a lot of extra work, and the delay doesn't really matter; this is used internally.
I realize I can redesign around this issue by moving the logic to a background process and getting notified via AJAX when it's done, or by pulling it out into a desktop app or a service or whatever. Something along those lines will be done eventually, but that's not what I'm asking about right now.
Sounds like you're using IE and it is timing out while waiting for a response from the server.
There is a Microsoft Knowledge Base article on adjusting this limit:
http://support.microsoft.com/kb/181050
CAUSE
By design, Internet Explorer imposes a time-out limit for the server to return data. The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for versions 5.x, 6, and 7. As a result, Internet Explorer does not wait endlessly for the server to come back with data when the server has a problem.
RESOLUTION
In general, if a page does not return within a few minutes, many users perceive that a problem has occurred and stop the process. Therefore, design your server processes to return data within 5 minutes so that users do not have to wait for an extensive period of time.
The entire paradigm of the Web is request/response - not request, wait two hours, response!
If the work takes that long to do, then have the page request trigger the work and not wait for it. Put the long-running code into a Windows service, and have the service listen to an MSMQ queue (or use WCF with an MSMQ endpoint). Have the page send requests for work to this queue. The service will read a request, perhaps start a new thread to process it, then write a response to another queue, a file, or whatever.
The same page, or a different "progress" page, can poll the response queue or file for responses and update the user - assuming the user still cares after two hours.
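A minimal sketch of the page-side send, assuming System.Messaging and a local private queue (the queue path and message body are placeholders):

    using System.Messaging;

    public static class WorkQueue
    {
        // Private queue on the local machine; create it once at deploy time.
        const string Path = @".\private$\longRunningWork";

        public static void Enqueue(string workItemId)
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path);

            using (var queue = new MessageQueue(Path))
            {
                // The Windows service on the other end reads these messages
                // and does the two-hour work outside the request/response cycle.
                queue.Send(workItemId, "work request");
            }
        }
    }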
For something that takes this long, I would figure out a way to kick it off via AJAX and then periodically check on its status. The background process should update a status variable on a regular basis and store its data in the cache or session when complete. When it completes and the browser detects this (via AJAX), have the browser do a real postback (or a GET, by changing location.href), pick up the saved data, and generate the page.
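A rough sketch of that status-in-cache idea (all names here are made up, and note that a worker-process recycle would lose an in-flight job):

    using System.Threading;
    using System.Web;

    // Kick off the work on a background thread and record its progress in
    // the cache so an AJAX poll can check on it.
    public static class SlowJob
    {
        public static void Start(string jobId)
        {
            HttpRuntime.Cache[jobId + ":status"] = "running";
            ThreadPool.QueueUserWorkItem(_ =>
            {
                var result = DoTheSlowLoop();                  // the long, slow loop
                HttpRuntime.Cache[jobId + ":result"] = result; // picked up on postback
                HttpRuntime.Cache[jobId + ":status"] = "done"; // seen by the AJAX poll
            });
        }

        public static string Status(string jobId)
        {
            return (string)HttpRuntime.Cache[jobId + ":status"] ?? "unknown";
        }

        static string DoTheSlowLoop() { return "result"; }
    }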
I have a process that can take a few minutes, so I spin off a separate thread and send the result via FTP. If an error occurs in the process, I send myself an error message including the stack trace. You may want to consider sending the results via email or some other channel than the browser, and using a thread as well.
