We recently upgraded our main product from an ASP.NET 2.0 project to ASP.NET 4.0. This project is the main source of traffic to the web service; the other forms have not changed in this release. The web service has been running without issue for a couple of years now.
Following the release of the 4.0 version of our main product, the web service started to die after about an hour and would do nothing but time out until we restarted the worker process.
We changed the web service so that it also targets the 4.0 Framework, but that did nothing. Other attempts have also failed, such as switching from a web reference to a service reference and updating our certificate validation to use the current framework techniques.
The worst part is that nothing is being written to the Event Viewer, so I have no leads as to what the problem is. We have applied a band-aid solution of recycling the app pool every 20 minutes (the recompile takes a second, as the web service is very small) and it seems to be holding for now, but we would rather fix the problem than rely on this.
So does anyone have any additional ideas or suggestions as to where our problem may be coming from? Has anyone experienced anything similar?
Both projects run in the same web farm, and all machines are using 32-bit IIS 6.
Thanks!
Edit: some more info. The web service has a few basic functions:
1 - accepts XML documents, loads them into a DataSet, then updates internal DBs with the information sent, and simply returns true (a simplified sketch follows this list)
2 - receives a request for processed data as XML, queries the DB for it, builds the XML response and sends it out
3 - receives a confirmation that the data requested in step 2 was received and deletes it from the DB
4 - hits a function that updates a DB so we can monitor some applications on our clients' systems.
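To illustrate, here is roughly what function 1 does, reduced to a sketch; the method, table and connection-string names are placeholders rather than our actual code:

// Sketch of function 1: load the posted XML into a DataSet and push the rows
// into the database. Names are placeholders, not the real implementation.
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Web.Services;

public class IntakeService : WebService
{
    private const string ConnectionString = "..."; // placeholder

    [WebMethod]
    public bool SubmitDocument(string xml)
    {
        using (var ds = new DataSet())
        using (var reader = new StringReader(xml))
        {
            ds.ReadXml(reader); // assumes the XML row elements are named "Orders"

            using (var conn = new SqlConnection(ConnectionString))
            using (var adapter = new SqlDataAdapter("SELECT * FROM Orders", conn))
            using (var builder = new SqlCommandBuilder(adapter))
            {
                adapter.Update(ds, "Orders"); // insert the incoming rows
            }
        }
        return true;
    }
}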
First of all, can you reproduce the problem on a dev box (using some load testing platform to simulate production load)?
If so, then it sounds like an issue with your code. You could then take it a step further and profile the code on the dev server to pinpoint the issue.
If that isn't an option, I would fire up Process Explorer and watch the server resources over time. I'm not sure what your service does, but it sounds like it could be spinning up threads and then not cleaning up after itself.
It might help if you posted some sample code so we could see what could've been affected by the change from .NET 2.0 to 4.0.
Using the performance monitoring tool New Relic, I am seeing occasional (but too many) long delays in "AcquireRequestState". I am talking about 10- to 20-second delays, sometimes minutes.
I know we have not written our own event handlers for this event.
Where do I even begin looking for the cause of these delays? The little information I have found so far on MSDN has not been helpful.
My team saw this "AcquireRequestState" delay reported by NewRelic earlier today when one of our ASP.NET applications was having performance problems on a particular page.
The root cause turned out to be a change to a stored procedure that we'd recently deployed to our SQL Server database, which was inadvertently causing that stored procedure to take a very long time to execute. The stored procedure was called as a part of displaying the page that was having the performance issue.
We were able to resolve the issue by identifying and fixing the performance problem with the stored procedure. The "AcquireRequestState" issue from NewRelic turned out to be irrelevant; it was a symptom of the problem, rather than the cause.
This was on an ASP.NET 4.5 application running on Windows Server 2008.
tl;dr: The "AcquireRequestState" delay reported by NewRelic may be a side effect of some other problem that's causing one or more of the pages and/or AJAX requests in your ASP.NET app to take a long time to load.
Try to apply Hotfix Rollup 2828841 on the server.
Issue 6
Symptoms
When you send many concurrent requests that have the same SessionId to an ASP.NET 4.5 web application, some requests may freeze at the RequestAcquireState stage unexpectedly.
Resolution
After you apply the hotfix, the hotfix makes sure that the EndRequest event will always trigger.
This hotfix applies to Windows 7 Service Pack 1 (SP1), Windows Server 2008 R2 SP1, Windows Server 2008 Service Pack 2 (SP2), and Windows Vista SP2.
I suspect your stored proc change might be highlighting a slightly different problem in terms of session locking; we had roughly the same thing reported for a different scenario. I'd seriously encourage you to test the async session provider mentioned here, and make sure you use the concurrent-requests-per-session app setting (a config sketch follows below):
https://stackoverflow.com/a/55331786/7581050
Ultimately, any long-running process (in your case the stored proc change) blocks any further requests for that session. Since this block happens in a different part of the IIS pipeline, New Relic simply records it as "AcquireRequestState".
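For reference, this is roughly what the configuration looks like once the async session state module from the linked answer is installed; the key name is taken from that answer, so verify it against the package documentation for the version you install:

<!-- web.config sketch: allow concurrent requests for the same session once the
     async session state module (Microsoft.AspNet.SessionState) is installed.
     Key name per the linked answer; confirm against the package docs. -->
<configuration>
  <appSettings>
    <add key="aspnet:AllowConcurrentRequestsPerSession" value="true" />
  </appSettings>
</configuration>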
PS: I know this answer comes quite late, but I've finally found something that solved a similar problem for us, and I suspect it will help many people in the future.
We are referencing a 3rd-party proprietary CLI DLL in our .NET project. This DLL is only an interface to their proprietary C++ library. Our project is an ASP.NET (MVC4/Web API) web application.
The unmanaged C++ library is rather unstable. Sometimes it crashes with, for example, dangling pointers. We have no way of fixing it, and using this library is a first-class customer requirement.
When the application crashes, the application pool in IIS stops responding. We have to restart it, and doing so takes a couple of minutes (yes, that long!).
We would like to keep this unstable DLL from crashing our application. What's the best way of doing it? Can we keep the CLI DLL in a separate AppDomain? How?
Thanks in advance.
I think every answer to this question will be some kind of workaround.
My workaround would be to not interact directly with the DLL from your web application.
Instead, have the web application write its requests to either a message queue or a SQL table. You can then have another application, such as a Windows service, read the requests, interact with the DLL and write the results back for your web application to read.
I'm not saying that SQL or message queues are the right way; I'm thinking more of the general process flow.
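As a rough illustration of that flow, here is a sketch using MSMQ; the queue path, message shape and the NativeLibrary/SaveResult names are hypothetical placeholders for your own wrapper and storage code:

using System.Messaging;

// Web application side: drop the request onto a queue instead of calling the DLL.
public static class DllRequestQueue
{
    private const string Path = @".\Private$\DllRequests"; // hypothetical queue name

    public static void Enqueue(string payload)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (var queue = new MessageQueue(Path))
            queue.Send(payload);
    }
}

// Windows service side: read requests, call the unstable DLL, store the result.
public class DllWorker
{
    public void ProcessNext()
    {
        using (var queue = new MessageQueue(@".\Private$\DllRequests"))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            var message = queue.Receive();               // blocks until a request arrives
            var payload = (string)message.Body;
            var result = NativeLibrary.Process(payload); // placeholder for the DLL call
            SaveResult(payload, result);                 // write back for the web app to read
        }
    }

    private void SaveResult(string payload, string result) { /* e.g. insert into a SQL table */ }
}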
I had this exact problem with a third-party library that accessed protected memory for the purpose of interacting with a hardware copy-protection dongle. It worked fine in a console or WinForms app, but crashed like crazy when called from an IIS application.
We tried several different things, some of which are mentioned in other answers on this page. But ultimately, the best solution for us was to use a very old technology: .NET Remoting. I know it's somewhat frowned upon these days, but it fit this particular need quite well.
The unstable code was placed in a Windows service application. The web application made remoting calls to this service, which relayed the commands to the third-party library.
Now I'm sure you could do the same thing with WCF, sockets, etc., but Remoting was quick and easy to set up, and since we only talk to the same server it works without opening any ports; it just talks over a named pipe.
It does mean a second service to install besides the web application, but that was acceptable in my particular use case.
If you did something similar, and the third-party code actually crashed the service, you could probably write some code in your main application to bring it back up.
So perhaps a process boundary is more useful than an App Domain when you have unstable code to wrangle.
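For anyone trying the same approach, the setup looked roughly like the sketch below. The type and pipe names are made up for illustration, and NativeLibrary.Process stands in for the third-party call; the wrapper type lives in a shared assembly referenced by both the service and the web app:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Shared wrapper type, referenced by both sides.
public class LegacyWrapper : MarshalByRefObject
{
    public string Process(string input)
    {
        return NativeLibrary.Process(input); // placeholder for the third-party call
    }
}

// Windows service (e.g. in OnStart): expose the wrapper over a named pipe.
public static class RemotingHost
{
    public static void Start()
    {
        var channel = new IpcChannel("LegacyWrapperPipe");
        ChannelServices.RegisterChannel(channel, ensureSecurity: false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(LegacyWrapper), "LegacyWrapper", WellKnownObjectMode.Singleton);
    }
}

// Web application: obtain a proxy and call it as if it were local.
public static class RemotingClient
{
    public static string Process(string input)
    {
        var proxy = (LegacyWrapper)Activator.GetObject(
            typeof(LegacyWrapper), "ipc://LegacyWrapperPipe/LegacyWrapper");
        return proxy.Process(input);
    }
}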
I would first increase the IIS process recycling rate; maybe the DLL code fails after a certain number of calls, or after the process reaches a certain amount of memory usage.
You can find information on configuring the IIS 7.0 recycling options here: http://technet.microsoft.com/en-us/library/cc753179(v=ws.10).aspx
In your case I would recycle the process at a specific time, when you know there is less load on the application, and also after a certain number of requests (lower than the default) to have a "fresh" process most of the time.
The recycling process is graceful in the sense that the old process is not terminated until the one that will replace it is ready, so there should be no noticeable downtime.
More information about the recycling mechanism here: http://technet.microsoft.com/en-us/library/cc745955.aspx
If the above does not solve the problem, I would wrap the calls in my own code that manages the unstable DLL's execution.
This code should recover from failures, for example by retrying the failing calls until a result is obtained, and fail with a graceful error if that is not possible after a number of attempts.
Internally, the calls to the unstable DLL could be made on a spawned thread, or the code could even live in a new external executable that you launch with Process.Start.
This last option has more overhead, but it might be your only choice. See this SO question for more information: How do you handle a thread that has a hung call?
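A hedged sketch of that last idea: run the fragile call in a child process so a native crash or hang only kills the child, not the IIS worker process. LegacyRunner.exe is a hypothetical console app that wraps the DLL call, and the timeout and retry count are arbitrary:

using System;
using System.Diagnostics;

public static class IsolatedCall
{
    // Runs the unstable work in a separate process and retries a few times.
    public static string Run(string input, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var psi = new ProcessStartInfo("LegacyRunner.exe", input) // hypothetical wrapper exe
            {
                RedirectStandardOutput = true,
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (var process = Process.Start(psi))
            {
                string output = process.StandardOutput.ReadToEnd();

                if (process.WaitForExit(30000) && process.ExitCode == 0)
                    return output;      // success

                if (!process.HasExited)
                    process.Kill();     // hung call: kill the child, not w3wp
            }
        }
        throw new InvalidOperationException("Legacy call failed after " + maxAttempts + " attempts.");
    }
}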
I suggest the following solution.
Wrap this DLL with another web application, which can be one of the following (a minimal sketch follows the list). Since you already use Web API, that is the most suitable option for you:
Simple ASMX web service
WCF service
ASP.NET MVC / Web API service
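As a minimal sketch of the Web API option, the wrapper service could expose a single controller like the one below; LegacyWrapperController and NativeLibrary.Process are hypothetical names standing in for your own P/Invoke wrapper:

using System.Web.Http;

// Hosted in its own IIS application (and application pool) so a native crash
// only takes down this wrapper, not the main web application.
public class LegacyWrapperController : ApiController
{
    [HttpPost]
    public string Process([FromBody] string input)
    {
        return NativeLibrary.Process(input); // placeholder for the P/Invoke call
    }
}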
Review your P/Invoke code to make sure it does not have any bugs. See the following articles:
The Black Art of P/Invoke and Marshaling in .NET
P/Invoke Revisited
Publish this wrapper application to IIS under a different application pool.
Use the standard techniques suggested before, such as configuring IIS recycling for that pool based on both memory usage and scheduled times:
IIS process recycling rate
How to limit the memory used by an application in IIS?
Throughout the development of my application, the first-response time has gotten worse and worse; it is now taking 10 minutes to load! I am using Web Deploy to speed up publishing my changes, and from what I've read on MSDN, I understand that this delay is due to compilation and assembly loading.
It's an ASP.NET MVC3 application which uses EF Code First, MVC-MiniProfiler, etc. I'm wondering if one of these assemblies is slowing things down.
Is there a way to track down the long running process plaguing my development/testing process?
As a side note, the issue is nowhere near as bad in the Azure Emulator.
Using Windows Azure SDK 1.4 and later, you have the option to enable profiling for your application (besides IntelliTrace). You can read about some of the options available (in 1.5) in my blog post here, where you will also find a good screenshot showing the option to enable either IntelliTrace or profiling.
The trick is that you can only have one of them running (either IntelliTrace or profiling). So I suggest you first run IntelliTrace and inspect the IntelliTrace logs for any exceptions during your application's execution, then do another deployment using profiling to catch the most time-consuming methods.
Please note that enabling IntelliTrace/profiling can only be done as part of the deployment process and cannot be changed with a simple Web Deploy, so you'll have to make at least two deployments for this test.
It's hard to say what the slowdown is; as Awais mentioned, IntelliTrace is your friend. However, the delay might be unavoidable (I have seen this a number of times). If this is the case, you can add a startup script that "primes" IIS, preventing the problem when the first user hits the site.
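One simple form of such priming is to request the site's key URLs once when the role starts, so the first real visitor doesn't pay the compilation and assembly-loading cost. A hedged sketch, assuming you call it from a startup task or WebRole.OnStart and substitute your own URLs:

using System.Net;

public static class SiteWarmup
{
    // Hit a few pages once at startup to force compilation and assembly loading.
    // The URLs are placeholders for the pages you care about most.
    public static void Prime()
    {
        var urls = new[] { "http://localhost/", "http://localhost/Home/Index" };
        using (var client = new WebClient())
        {
            foreach (var url in urls)
            {
                try { client.DownloadString(url); }
                catch (WebException) { /* ignore; the request itself does the warm-up */ }
            }
        }
    }
}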
I have downloaded TheWorldsWorstStackOverflowClone. One of the projects is called TheWorldWorsts.ApiWrapper, which is basically the core of accessing the API. There is a class called ApiProxy.cs, which has all the methods for the API calls. This is good.
Now what I want to do is collect data from this API interface and store it in a database. I know the limit on API calls is 10k per day, i.e. I want to be able to call the methods in the ApiProxy class up to 10k times per day, automatically. How can I do this?
The non-automatic way would be to create a dummy site where, every time I access it, it runs the whole process, but that is not efficient. It seems that I would have to write some kind of scheduler and deploy a web service, but that is too complicated... as explained here. Are there any simpler methods?
A Windows service or desktop app might be a better solution than a web application. You are not deploying a web service; you are consuming one through a proxy class, and that does not require you to have a web server or a web site.
You could use a web application to control and monitor progress as your service downloads data, but the actual work is long-running and needs to be offloaded to another process or thread so you can tell the user what's going on.
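As a rough sketch of the Windows service idea, a timer can pace the calls so you stay under the 10k-per-day quota. The ApiProxy method name and SaveToDatabase are illustrative placeholders; adapt them to the wrapper's real API and your own storage:

using System;
using System.Timers;

public class ApiPollingService
{
    // 10,000 calls per day is roughly one call every 8.64 seconds; use 9 s to stay under the limit.
    private readonly Timer _timer = new Timer(9000);
    private readonly ApiProxy _proxy = new ApiProxy(); // from TheWorldWorsts.ApiWrapper

    public void Start()
    {
        _timer.Elapsed += (sender, args) => FetchAndStore();
        _timer.Start();
    }

    public void Stop()
    {
        _timer.Stop();
    }

    private void FetchAndStore()
    {
        try
        {
            var data = _proxy.GetQuestions(); // illustrative; use the actual ApiProxy method
            SaveToDatabase(data);
        }
        catch (Exception)
        {
            // log and carry on; one failed call should not stop the schedule
        }
    }

    private void SaveToDatabase(object data) { /* insert into your database here */ }
}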
Check out this one
http://stacky.codeplex.com/
This looks like what you need. I am facing some debugging issues with it, but I hope you can figure them out.
I have an ASP.NET application that is consistently using 75%-100% of the CPU on a production server. How can I profile the application to figure out what part of the code is using the most CPU? I have looked at a couple of different tools (Xte Profiler, EQATEC, dotTrace), but they all seem to want you to load the application into their tool and run tests locally (not in production). I want to profile the application while it is running in production, with people hitting it, to see what is actually going on. Is this possible?
I am a newbie to application profiling so forgive me if I have missed something obvious or am not thinking about this correctly.
Thanks,
Corey
Sam Saffron (one of the Stack Overflow creators) wrote a great command-line tool a while ago, but unfortunately has abandoned it.
A friend of mine forked the code to make it work in 2015:
https://github.com/jitbit/cpu-analyzer
(the page has a link to Sam's post explaining how to use it)
The great thing about this tool (besides the no-install-required portability, command-line interface, etc.) is that APM packages like New Relic only monitor HTTP requests; if your app has some background threads, they won't help much.
You should consider taking a memory dump on the production server while it's experiencing high CPU. Check out ADPlus and take a hang dump of the ASP.NET worker process. This can then be analyzed with WinDbg or other tools.
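If you go that route, the ADPlus invocation looks roughly like this; run it from the Debugging Tools for Windows directory while the CPU spike is happening, and treat the output folder as a placeholder:

REM Hang-mode dump of the ASP.NET worker process (w3wp.exe on IIS 6).
adplus -hang -pn w3wp.exe -o C:\dumps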
I just went through a similar experience where our production servers were experiencing excessive CPU load - a scenario we could not recreate locally or in test/staging environments. It had nothing to do with the database (database CPU was normal). Analyzing the dump file is what clued us in on what was causing the problem (excessive compilation of regex objects by some library we were using).
This answer would be incomplete without Tess' blog, so here's the link.
My guess is that it has to do with long-running database queries rather than the ASP.NET application itself. In my experience, nine times out of ten this is what I see, and it brings the application server to a crawl as resources are consumed and the app has to wait for each query to finish before moving on. Take a look at SQL Profiler on the DB server and see if there are any queries that are taking a long time to execute.
It could be as simple as adding an index to a column or some other minor optimization. Once you know the query, you can also go back to your code and tweak that section as well.
For those who stumble upon this question still, it really depends on what you are trying to accomplish.
If a server is running that high on CPU, odds are a standard profiler will bring it to a grinding halt due to its additional overhead.
There are actually three different types of profilers. Standard profilers, lightweight transaction profilers, and APM tools. You can read more about this in my blog post that discusses all 3:
.NET Profilers: 3 types and why you need all of them
It's certainly possible to profile ASP.NET with the EQATEC Profiler. See:
Profiling ASP.NET websites with EQATEC Profiler
EQATEC Profiler instruments your app in a separate step that enables the app itself to collect its own profiling info; the profiler then merely displays that timing data afterwards.
That means that you can run your instrumented ASP.NET app completely independent of the profiler itself.
You could, for example, instrument your app, mail it to your test site in India, have them run it on their server for a few days where it will generate timing reports all on its own, and have them mail those reports back to you to view in the profiler. Pretty neat.
Note: to have the profiled app generate timing snapshots "on its own", it must know when to generate them. By default this happens when the Application_End method is called in an ASP.NET app. You can programmatically dump snapshots whenever it suits you by using the EQATEC Profiler API. See the user guide or check out this thread.
You can read about this on the Microsoft Developer Network.
You can select the documentation according to your version of Visual Studio. You should verify that profiling functionality is provided for your edition of Visual Studio.
How to: Profile a Web Site or Web Application Using the Performance Wizard
Your best bet is to profile your code on your own machine to identify where it is spending time.
Grab a ten-day free trial of this:
http://www.jetbrains.com/profiler/
Here are some links to get you going:
Link
http://msdn.microsoft.com/en-us/library/ms178643(v=VS.100).aspx
http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx