I'm porting a CPU-heavy .NET 4.0 Windows application to a .NET 4.0 WCF service. Basically I just imported the .NET classes into the WCF service.
All is working well except for performance at the WCF service: a task that takes 6267947 ticks (2539ms) in the Windows application uses 815349861 ticks (13045ms) on the ASP.NET-hosted WCF service running locally on the same development machine.
I have already uploaded the service plus a test client to AppHarbor, where performance is as bad as on my local machine. The link to my test app is: http://www.wsolver.com/. Any ideas on how I can improve performance?
Check any dependencies of your service that may be constructed at request time. These include constructor dependencies and field/property dependencies. Maybe one of them is causing the delay? If that is the case, consider using a singleton to instantiate the long-running class.
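For example, a minimal sketch of the singleton idea, assuming the expensive work lives in a hypothetical ExpensiveEngine class (the question doesn't name the actual type):

using System;

public static class EngineHolder
{
    // Lazy<T> (available in .NET 4.0) builds the instance once, on first use,
    // in a thread-safe way, instead of once per request.
    private static readonly Lazy<ExpensiveEngine> _engine =
        new Lazy<ExpensiveEngine>(() => new ExpensiveEngine());

    public static ExpensiveEngine Instance
    {
        get { return _engine.Value; }
    }
}

// Service methods then use EngineHolder.Instance instead of new ExpensiveEngine().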
Have you confirmed that subsequent requests still cause the delay?
Also, create a brand new service that does something simple like DateTime.Now.ToString() and see if it has the same problem.
Please take a look at the articles and whitepapers below. I think they should give you enough concrete performance considerations to explore, and likely some very practical settings to tweak, optimize, or change.
Performance Tuning WCF Services
Optimizing WCF Web Service Performance
Using ServiceThrottlingBehavior to Control WCF Service Performance
Transport Quotas
Optimizing IIS Performance
ASP.NET Performance Overview
A Performance Comparison of Windows Communication Foundation (WCF) with Existing Distributed Communication Technologies
If you need to do time-consuming initialization of a complex data structure, you should do that once in Application_Start() and assign the generated data structure to a static variable on the MvcApplication object. Doing it just once at application start is going to be much faster than doing it in each request.
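A rough sketch of what that looks like; DictionaryTree and its Load method are illustrative names, not from the original question:

public class MvcApplication : System.Web.HttpApplication
{
    // Built once at application start and shared (read-only) by all requests.
    public static DictionaryTree Dictionary { get; private set; }

    protected void Application_Start()
    {
        // Hypothetical expensive initialization; runs once per app start,
        // not once per request.
        Dictionary = DictionaryTree.Load(Server.MapPath("~/App_Data/words.txt"));
    }
}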
I would take a full memory dump during the 13 seconds (or several, using procdump) and then actually look at what is occurring in the process (WinDbg and sos.dll). Then you can narrow down which code is the culprit.
I take it that the dictionary tree is only loaded once, into a cache? You're not loading it on every call, are you?
We are referencing a 3rd party proprietary CLI DLL in our .net project. This DLL is only an interface to their proprietary C++ library. Our project is an asp.net (MVC4/Web API) web application.
The C++ unmanaged library is rather unstable. Sometimes it crashes with e.g. dangling pointers. We have no way of solving it, and using this library is a first-class customer requirement.
When the application crashes, the application pool in IIS doesn't respond anymore. We have to restart it, and doing so takes a couple of minutes (yes, that long!).
We would like to keep this unstable DLL from crashing our application. What's the best way of doing it? Can we keep the CLI DLL in a separate AppDomain? How?
Thanks in advance.
I think every answer to this question will be some kind of work around.
My workaround would be to not interact directly with the DLL from your web application.
Instead write your requests from the web application to either a Message Queue or a SQL table. You can then have another application such as a Windows Service which reads the requests, interacts with the DLL and then writes the results back for your web application to read.
I'm not saying that SQL / Message Queues are the right way, I'm more thinking of the general process flow.
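If you went the Message Queue route, a minimal sketch with MSMQ might look like the following (the queue path, the WorkItem type and the UnstableDll wrapper call are all illustrative, not from the original question):

using System.Messaging;

public class WorkItem
{
    public int Id { get; set; }
    public string Payload { get; set; }
}

public static class WorkQueue
{
    private const string Path = @".\private$\unstable-dll-work";

    // Called from the web application instead of touching the DLL directly.
    public static void Enqueue(WorkItem item)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkItem) });
            queue.Send(item);
        }
    }

    // Called in a loop from the Windows Service, which is the only process
    // that ever loads the unstable DLL.
    public static WorkItem Dequeue()
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkItem) });
            return (WorkItem)queue.Receive().Body;
        }
    }
}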
I had this exact problem with a third party library that accessed protected memory for purposes of interacting with a hardware copy protection dongle. It worked fine in a console or winforms app, but crashed like crazy when called from an IIS application.
We tried several different things, some of which are mentioned in other answers on this page. But ultimately, the best solution for us was to use a very old technology - .NET Remoting. I know - it's somewhat frowned upon these days. But it fit this particular need quite well.
The unstable code was placed in a Windows Service application. The web application made remoting calls to this service, which relayed the commands to the third-party library.
Now I'm sure you could do the same thing with WCF, sockets, etc. But remoting was quick and easy to set up, and since we only talk to the same server it works without opening any ports. It just talks over a named pipe.
It does mean a second service to install besides the web application, but that was acceptable in my particular use case.
If you did something similar, and the third-party code actually crashed the service, you could probably write some code in your main application to bring it back up.
So perhaps a process boundary is more useful than an App Domain when you have unstable code to wrangle.
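A rough sketch of that setup, assuming a shared IThirdParty interface between the two applications (all names below are illustrative; the original answer doesn't include code):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Shared assembly referenced by both the Windows Service and the web app.
public interface IThirdParty
{
    string Process(string input);
}

// Inside the Windows Service: the only place the unstable library is loaded.
public class ThirdPartyProxy : MarshalByRefObject, IThirdParty
{
    public string Process(string input)
    {
        return ThirdPartyLibrary.DoWork(input); // hypothetical call into the vendor DLL
    }
}

// At Windows Service startup:
// ChannelServices.RegisterChannel(new IpcChannel("UnstableDllHost"), false);
// RemotingConfiguration.RegisterWellKnownServiceType(
//     typeof(ThirdPartyProxy), "proxy", WellKnownObjectMode.Singleton);

// In the web application:
// var proxy = (IThirdParty)Activator.GetObject(
//     typeof(IThirdParty), "ipc://UnstableDllHost/proxy");
// string result = proxy.Process("some input");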
I would first increase the IIS process recycling rate; maybe the DLL code fails after a certain number of calls, or after the process reaches a certain amount of memory usage.
You can find information on the configuration of IIS 7.0 recycling options here: http://technet.microsoft.com/en-us/library/cc753179(v=ws.10).aspx
In your case I would recycle the process at a specific time, when you know there is less load on the application, and also after a certain number of requests (lower than the default) to try to have a "fresh" process most of the time.
The recycling process is graceful in the sense that the old process is not terminated until the one that will replace it is ready, so there should be no noticeable downtime.
More information about the recycling mechanism here: http://technet.microsoft.com/en-us/library/cc745955.aspx
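For reference, these settings live under the application pool element in applicationHost.config; a fragment along these lines (pool name and limits are only examples) schedules a recycle at 3 AM, after 5000 requests, or when the process exceeds roughly 500 MB:

<applicationPools>
  <add name="MyAppPool">
    <recycling>
      <periodicRestart requests="5000" memory="512000"> <!-- memory is in KB -->
        <schedule>
          <add value="03:00:00" />
        </schedule>
      </periodicRestart>
    </recycling>
  </add>
</applicationPools>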
If the above does not solve the problem I would wrap the calls in my own code that manages the unstable DLL execution.
This code should recover from failures, for example by repeating the failing calls until a result is obtained, and fail with a graceful error if that is not possible after a number of attempts.
Internally, the calls to the unstable DLL could be made on a spawned thread, or the code could even live in a new external executable that you launch with Process.Start.
This last option has more overhead but it might be your only option. See this SO question for more information on this: How do you handle a thread that has a hung call?
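A hedged sketch combining both ideas, wrapping the fragile call in a helper that runs a hypothetical UnstableWorker.exe, kills it if it hangs, and retries a few times before giving up:

using System.Diagnostics;

public static class UnstableCallRunner
{
    // Returns true if the external worker finished cleanly within the timeout.
    public static bool Run(string argument, int maxAttempts = 3, int timeoutMs = 30000)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            using (var process = Process.Start("UnstableWorker.exe", argument))
            {
                if (process.WaitForExit(timeoutMs) && process.ExitCode == 0)
                    return true; // results can be exchanged via a file, DB row or queue

                if (!process.HasExited)
                    process.Kill(); // a hung worker dies here instead of hanging the app pool
            }
        }
        return false; // fail gracefully after the retries are exhausted
    }
}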
I suggest the following solution.
Wrap this DLL with another web application; it can be any one of the following. Since you already use Web API, that is the most suitable option for you (see the sketch after this list).
Simple ASMX Web Service
WCF Service
ASP.NET MVC - Web API Service
Check your P/Invoke code so that you do not have any bugs. See the following articles.
The Black Art of P/Invoke and Marshaling in .NET
P/Invoke Revisited
Publish this application to IIS under a different application pool.
Use the standard techniques suggested before, such as the ones below. I suggest configuring IIS recycling for both memory limits and scheduled times.
IIS process recycling rate
How to limit the memory used by an application in IIS?
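A minimal sketch of the Web API wrapper from the first option; the controller name and the NativeWrapper call are illustrative only:

using System.Web.Http;

public class UnstableDllController : ApiController
{
    [HttpPost]
    public string Process([FromBody] string input)
    {
        // This wrapper site runs in its own IIS application pool, so a crash in
        // the native library only takes down this pool, not the main application.
        return NativeWrapper.Process(input); // hypothetical P/Invoke wrapper
    }
}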
The question, in short, is that we are stumbling upon BDD definitions that more or less require different states, which leads to the need for a mock of sorts for ASP.NET/MVC. I know of none, which is why I ask here.
Details:
We are developing a project in ASP.NET (MVC3/Razor engine) and are using SpecFlow to drive our development.
We quite often stumble into situations where we need the webpage under test to behave in a certain manner so that we can verify the behavior, e.g.:
Scenario: Should render alternatively when backend system is down
Given that the backend system is down
And there are no channels for the page to display
When I inspect the webpage under test
Then the page renders alternative HTML indicating that there is a problem
For a unit test this is less of an issue - mock the controller's dependencies and verify that it delivers the correct results. For a SpecFlow test, however, this more or less requires alternate configurations.
So is it possible at all, or are there some known software patterns for developing webpages using BDD that I've missed?
Even when using SpecFlow, you can still use a mocking framework. What I would do is use the [BeforeScenario] attribute to set up the mocks for the test, e.g.:
[BeforeScenario]
public void BeforeShouldRenderAlternatively()
{
// Do mock setups.
}
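For instance, with a mocking framework such as Moq (just an illustration; IBackendSystem below is a hypothetical interface of the page's backend dependency), the setup could look roughly like:

using Moq;
using TechTalk.SpecFlow;

[Binding]
public class BackendDownSteps
{
    [BeforeScenario]
    public void GivenBackendSystemIsDown()
    {
        // Simulate the "backend system is down" state before the scenario runs.
        var backend = new Mock<IBackendSystem>();
        backend.Setup(b => b.IsAvailable()).Returns(false);

        // Hand the mock to whatever resolves IBackendSystem for the site under test
        // (an IoC container, a factory, ...); SpecFlow's ScenarioContext is one option.
        ScenarioContext.Current.Set(backend.Object);
    }
}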
This SO question might come in handy for you also.
You could use Deleporter.
Deleporter is a little .NET library that teleports arbitrary delegates into an ASP.NET application in some other process (e.g., hosted in IIS) and runs them there.
It lets you delve into a remote ASP.NET application’s internals without any special cooperation from the remote app, and then you can do any of the following:
Cross-process mocking, by combining it with any mocking tool. For example, you could inject a temporary mock database or simulate the passing of time (e.g., if your integration tests want to specify what happens after 30 days or whatever)
Test different configurations, by writing to static properties in the remote ASP.NET appdomain or using the ConfigurationManager API to edit its entries.
Run teardown or cleanup logic such as flushing caches. For example, recently I needed to restore a SQL database to a known state after each test in the suite. The trouble was that ASP.NET connection pool was still holding open connections on the old database, causing connection errors. I resolved this easily by using Deleporter to issue a SqlConnection.ClearAllPools() command in the remote appdomain – the ASP.NET app under test didn’t need to know anything about it.
I have started to work on a .NET web application that uses an IoC container (Windsor) to create business managers and keep them in memory until IIS recycles them. Basically these business managers have their own state and data, the content of which is modified from background threads fired at Application_Start. This is not the way I was expecting a web application to work (which is supposed to be stateless, with a thread per request), and I'm not quite sure whether this implementation is sustainable/scalable. Has anybody tried things in this manner, and if so, what are the pros/cons that you see in it?
We use statics in the application, but only for core features. Static classes are shared across all requests, so their use should be kept fairly limited. In the development world we're seeing statics pop up more and more: ASP.NET MVC 3 utilizes them for various areas of the application, as do other popular open-source libraries.
As long as there aren't a lot of them, you should be OK, but you can always verify with a memory profiler to see how big they are getting and whether they are using too much memory.
The other alternative could be to place them in the cache, or rebuild them and store them in each request. To store them per request, use the HttpContext.Current.Items collection.
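A short sketch of the per-request alternative (BusinessManager is an illustrative name): build or fetch the object once per request and park it in HttpContext.Current.Items, which is discarded when the request ends.

public static class RequestScoped
{
    public static BusinessManager CurrentManager
    {
        get
        {
            var items = System.Web.HttpContext.Current.Items;
            if (items["BusinessManager"] == null)
                items["BusinessManager"] = new BusinessManager(); // rebuilt for each request
            return (BusinessManager)items["BusinessManager"];
        }
    }
}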
I have a web service running and I consume it from my desktop application, which is written on the Compact Framework.
It takes 13 seconds to retrieve 8 results, which is kinda slow. I also expect to be retrieving more results in the future. The database query itself runs fast.
Two questions: how do I detect where the slowdown occurs? Do I put timers in the web service code?
I would like to detect whether it is the network or the application code.
This is my first exposure to web services in a real environment so please bear with me.
I used ASP.NET 2.0 and C# to write a simple web service.
Another good profiler is the EQATEC Profiler. I did a write-up on it here: http://elegantcode.com/2009/07/02/eqatec-profiler-and-net-cf-profiling-and-regular-net/
It works fine for .NET CF projects, and it will let you see whether there are performance issues in unexpected places.
You're already on the right track with adding event logging, and including timers in it. Note that doing so will add to the overall time it takes, so you'll want to remove them after you track down the culprit. Also look into running the same web service call multiple times without re-initiating the connection; that may be the cause as well.
-Jay
A starting point is to profile your web service to see where the delay is coming from.
Do you know the CLR Profiler? There are some tools you can use to see what is happening:
http://msdn.microsoft.com/en-us/library/ms998579.aspx
The database connectivity from your service to the DB could be a possible cause of the slowdown. Adding timers should do the trick. If the code isn't too huge, you can look at the coding constructs to come up with an informed guess about where exactly things might be slow, then add the timers there. You would get a fair idea of where things are slowing down.
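A hedged sketch of the timer idea, wrapping the likely stages of the web method in Stopwatch measurements (the service, the DB call and the logger are illustrative names, not from the original posts):

using System.Diagnostics;
using System.Web.Services;

public class SearchService : WebService
{
    [WebMethod]
    public string[] GetResults(string query)
    {
        var total = Stopwatch.StartNew();

        var dbTimer = Stopwatch.StartNew();
        var rows = LoadFromDatabase(query);      // hypothetical DB call
        dbTimer.Stop();

        var results = ShapeResults(rows);        // hypothetical shaping/serialization step

        total.Stop();
        Log(string.Format("db={0}ms total={1}ms",
            dbTimer.ElapsedMilliseconds, total.ElapsedMilliseconds));
        return results;
    }
}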
The two biggest pain points are going to be instantiating the web service reference and transferring all the data over the network. Unless something turns up where some obvious blunder was made, I would look at ways of reducing the size of your XML and ways of better handling your web service reference.
All I know about the Compact Framework is that it is a pain to work in. I've worked on a number of web projects, though, and profiling your server and putting in logging to record the time taken will be helpful. If all the time is being taken after the server responds, however, it won't do much more than prove your server is working quickly.
SoapUI is a fantastic Java application for consuming web services. It has a lot of functionality, including time metrics. I would start with that and see how long it takes to consume the same thing your client would. If no issues turn up there, start with what I recommended above.
I have a C# web service which currently communicates with a Flex app using XML. It's not streaming data or anything, but still I'd like to lower the overhead involved. I have two questions:
1) Would I see any benefit from using a technology like FluorineFX or WebORB in terms of reducing load on the server? The Flex clients won't perceive much difference, I imagine.
2) How easy is it to retrofit a technology like this into an existing product? Is it easier when you start from scratch?
Thanks in advance.
As far as server load goes, it's very tough to say. I can say definitely that the performance difference in the client is significant. For large data sets we've seen a 10x performance increase in the client by using AMF instead of XML. The Flash Player can deserialize AMF much faster than XML, and this is important since you don't know how much horsepower the client machine will have.
Pretty easy. The programming model for Fluorine isn't one where you code against their explicit API; you just configure Fluorine to expose certain .NET services. Essentially any plain old class can have its methods exposed remotely, so your migration from web services to FluorineFX should be easy.
It's very easy to expose your existing services to a Flex client using WebORB. What you would do is simply drop your services into WebORB's bin folder, making them viewable through WebORB's service browser. You can then select and invoke the methods for testing purposes, and then auto-generate the integration code for deployment into your Flash Builder project. This creates the integration between your client application and the services on the server side.
In terms of performance, there is a considerable performance improvement using remoting as opposed to web services. We have a free benchmark tool that lets you test the difference yourself in your own environment. Performance gains are more noticeable for larger data sets. Here is a link to that benchmark tool:
http://www.themidnightcoders.com/products/weborb-for-net/developer-den/technical-articles/amf-vs-webservices.html