We have a core Windows service hosting around 9 WCF services and acting as a client to another 3 WCF services. We have a front-end website that communicates with this Windows service through WCF.
At some point, the Windows service executes some heavy operations that drive CPU utilization to 100%, usually split 60-40 between the Windows service and SQL Server.
This is when the WCF requests from the website time out, which results in a very unresponsive UI.
I am looking for a way to make sure any UI-related WCF calls get executed anyway and take the highest priority.
Our main problem is that we have to stick with this deployment scenario, where the Windows service, the website and SQL Server all run on one machine, and we are required to maintain a responsive UI even at 100% CPU utilization. I am not sure where to start looking for a fix.
It sounds like you should split your service endpoints onto two separate hosts: one for high-volume, processing-intensive operations and one for low-latency operations. The high-volume endpoint would process requests from a queue offline, and the low-latency endpoint would handle requests from the UI synchronously.
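For illustration, a minimal sketch of that split, self-hosted in C#. The contract names, addresses, and the use of MSMQ for the queued side are my assumptions, not from your setup; MSMQ must be installed and the queue must exist:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IUiService
    {
        [OperationContract]
        string GetStatus();
    }

    [ServiceContract]
    public interface IBatchService
    {
        // Queued endpoints require one-way operations.
        [OperationContract(IsOneWay = true)]
        void ProcessBatch(string payload);
    }

    public class UiService : IUiService
    {
        public string GetStatus() { return "ok"; }
    }

    public class BatchService : IBatchService
    {
        public void ProcessBatch(string payload) { /* heavy work here */ }
    }

    class Program
    {
        static void Main()
        {
            // Low-latency host: request/reply for UI-facing calls.
            var uiHost = new ServiceHost(typeof(UiService));
            uiHost.AddServiceEndpoint(typeof(IUiService),
                new NetTcpBinding(), "net.tcp://localhost:8731/ui");
            uiHost.Open();

            // High-volume host: one-way queued endpoint, drained offline,
            // so heavy work never blocks the UI-facing endpoint.
            var batchHost = new ServiceHost(typeof(BatchService));
            batchHost.AddServiceEndpoint(typeof(IBatchService),
                new NetMsmqBinding(NetMsmqSecurityMode.None),
                "net.msmq://localhost/private/batchQueue");
            batchHost.Open();

            Console.ReadLine();
        }
    }

In practice you would run each host in its own process (or, better, on its own machine), since that is what actually isolates the resource contention.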
The kind of problems you are having is typical of trying to balance the conflicting resource needs of high volume and low latency within the same process.
If you cannot scale out in this way, then I can't really suggest much you can do about it, and I must apologize for not answering your question directly.
Another thing you could look at is making everything asynchronous and using a pattern such as CQRS to separate your read and write requirements.
Since this question is from a user's (developer's) perspective I figured it might fit better here than on Server Fault.
I'd like an ASP.NET hosting that meets the following criteria:
The application seemingly runs on a single server (so no need to worry about e.g. session state or even static variables)
There is an option to scale storage, memory, DB size and CPU power up and down on demand, in an "unlimited" way
I have researched this, but there does not seem to be such a platform: one that completely abstracts the underlying architecture away and thus combines the ease of use of simple shared hosting with "unlimited" scalability.
"Single server" and "scalability" are mutually exclusive, I'm afraid. But a good load-balancer will apply affinity to requests so you don't need to needlessly double-cache data on multiple servers.
However, well-designed web applications are easy to port to a multiple-server scenario.
I think your best option is something like Windows Azure Websites (separate from Azure Web Workers), which run on a VM you don't have access to. The VM itself provides as much power as is necessary to run your website, so you don't need to worry about allocating extra CPU power or RAM.
Things like SQL Server are handled separately, but they are very cheap to run, and you can drag a slider to give yourself more storage space.
This can still be accomplished by using a cloud host like www.gearhost.com. Apps live in the cloud and by default get one worker node, so session stickiness is maintained. You can then scale that application to larger workers to accomplish what you need, all while maintaining HA and load balancing. You can even add multiple web workers; each visitor is tied to a particular node to maintain session state even though you might have, for example, 10 workers. It's an easy and cheap way to scale a site from 100 visitors to many millions in just a few clicks.
I would like to ask what the best setup is for the following application:
ASP.NET 3.5 website - used as the presentation layer; a lot of AJAX and JS. It will not hit the server much.
ASP.NET WCF - a service providing all data to the application. It's responsible for validation, data modeling/preparation and communication with the DB server.
Database - SQL Server 2005 Std; some logic is coded on the server side as stored procedures, and some of it can be a bit time-consuming. In my opinion this is the most resource-consuming part of the app.
The website can have up to 1000 users per minute. We can have up to 4 servers in the following configuration: dual Intel quad-core Xeon, 8x 2.00+ GHz, 16 GB RAM, SSD or RAID drives.
What is the best way to place parts of the application on the physical servers? Will they handle this kind of load?
The least scalable part of any application is the database server: you can add more web and application servers, but you can't replicate the DB with the same ease, so in the long run you will benefit if the DB does not contain any logic, especially any long-running logic. In a lot of applications the limiting factor is not CPU but memory. Think about user sessions: if you store 1 MB of data per user, your four 16 GB machines can hold at most about 64,000 simultaneous user sessions in total (roughly 16,000 per machine, and in practice fewer, since the OS and application need memory too), which may or may not be sufficient. Both problems can be mitigated by application-level caching, but that can cause its own set of problems, because now you are faced with stale data. To scale session-based sites you will need a smart load-balancing solution that supports sticky sessions; for your load you will most likely need a hardware load balancer.
In the application you describe, I suspect that thread management is going to be a big issue. Throwing hardware at the problem may not be the best approach.
In terms of partitioning, it depends on whether you can leverage things like caching and cache notifications. If every call to the app has to hit the DB and run a lengthy stored procedure, then you may want more DB machines and fewer front-end web servers.
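As a rough illustration of cache notifications in this stack, ASP.NET's SqlCacheDependency can evict a cached result when the underlying table changes. This assumes the database and table have been enabled for notifications with aspnet_regsql and that a sqlCacheDependency section exists in web.config; "MyDatabase" and "Products" are made-up names:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class ProductCache
    {
        // Returns the cached result, reloading from the DB only after
        // SQL Server signals that the Products table has changed.
        public static object GetProducts(Func<object> loadFromDb)
        {
            object cached = HttpRuntime.Cache["products"];
            if (cached == null)
            {
                cached = loadFromDb(); // the expensive stored procedure call
                var dependency = new SqlCacheDependency("MyDatabase", "Products");
                HttpRuntime.Cache.Insert("products", cached, dependency);
            }
            return cached;
        }
    }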
This is a big subject. In an attempt to provide a reasonably comprehensive answer to exactly this kind of question, I ended up writing a book about it: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.
I've got an ASP.NET web page that makes 7 async requests to a WCF service on another server. Both boxes are clean, without anything else installed.
I've also increased maxconnection in web.config to 20.
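For reference, the same outbound-connection limit can also be set in code at startup; this is just the code equivalent of the web.config setting, and 20 mirrors the value above:

    using System.Net;

    // Governs concurrent outbound HTTP connections per endpoint; the
    // same knob as connectionManagement/maxconnection in config.
    ServicePointManager.DefaultConnectionLimit = 20;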
I run a single call through the system and the page returns in 800 ms. The long and short of it is that I think the thread pool is being overwhelmed: once placed under load, I cannot get more than 8 requests per second, even though both quad-core boxes are running at 20% CPU load and the SQL Server they are connected to returns the queries in under 10 ms per call.
I've changed the service behaviour to ConcurrencyMode.Multiple, but that doesn't seem to help.
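For context, a minimal sketch of that change (the contract and the PerCall instancing are my assumptions, not my actual service):

    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        int Ping(int value);
    }

    // ConcurrencyMode.Multiple allows concurrent calls into one instance;
    // with InstanceContextMode.PerCall each request gets its own instance
    // anyway, so throttling and thread-pool limits become the usual
    // bottlenecks instead.
    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                     InstanceContextMode = InstanceContextMode.PerCall)]
    public class MyService : IMyService
    {
        public int Ping(int value) { return value; }
    }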
Any ideas, anyone?
There are many different factors that could be in play here. Taking at face value the remark that changing your instancing model on the service had zero effect (big IF here), it's possible the bottleneck is upstream of the service: either at the web server or at the client load generator.
You've got several areas to review for tuning: the client, the web server and the WCF service server (assuming there are no network devices in the middle). Pick one end and work towards the other. Since I'm already assuming it's not the service, I'd start at the client and work my way towards the WCF service.
Client
What machine is driving the load against the web server? A laptop? A desktop? A dedicated test agent, or a shared one? The client acting as the load generator for this test is also subject to the maxconnection limitation, as it is a client-side setting.
What is the CPU utilization of the client generating load? Could it be that the test driver is just unable to generate enough load to push these boxes? Can you add additional test clients to your test?
Web Server
What does the system.web/processModel element look like in machine.config on the ASP.NET web server? Try setting autoConfig="true"; this lets the configuration size itself automatically based on the machine it's running on.
WCF Service
Review the WCF service for any throttling defaults that might be in play and tweak them appropriately. See ServiceThrottlingBehavior on MSDN.
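A sketch of raising those limits programmatically on a self-hosted service (the values are illustrative, not recommendations; on .NET 3.x the defaults are low, e.g. 16 concurrent calls, which can cap throughput well below what the hardware allows):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class ThrottleDemo
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(MyService),
                new Uri("net.tcp://localhost:9000/svc"));

            // Lift WCF's conservative throttling defaults.
            var throttle = new ServiceThrottlingBehavior
            {
                MaxConcurrentCalls = 64,
                MaxConcurrentSessions = 64,
                MaxConcurrentInstances = 64
            };
            host.Description.Behaviors.Add(throttle);

            host.AddServiceEndpoint(typeof(IMyService),
                new NetTcpBinding(), string.Empty);
            host.Open();
            Console.ReadLine();
        }
    }

(MyService/IMyService here stand in for the hypothetical contract sketched in the question above.)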
Let us know what changes in behavior you observe (if any) after making changes!
The real answer here, which everyone missed, is that you're using an ASP.NET web page; that means your client is some form of web browser. Web browsers have traditionally limited themselves to 2 concurrent requests per host at any time. This means that 5 of your requests were queued up, waiting for the first two to finish. Once those first two completed, it served the next two, then the next two, then the last one.
All of these round trips and handshakes simply take time. I'm guessing your round-trip time is around 200 ms; unfortunately, you have to pay it 4 times.
I also really dislike the "max 2" browser limitation on making webservice calls.
Is this service hosted in IIS, WAS or a Windows Service?
You should try setting Windows to run the service at a higher priority. Your WCF service is probably creating the threads it needs, but they may be running at a low priority.
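If you want to experiment with that, a minimal sketch of the standard .NET knobs (use with care on a shared box, since starving other processes can make things worse overall):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PriorityDemo
    {
        static void Main()
        {
            // Raise the whole service process above normal priority.
            Process.GetCurrentProcess().PriorityClass =
                ProcessPriorityClass.AboveNormal;

            // Or raise a worker thread you create yourself; note this
            // does not touch the thread-pool threads WCF dispatches on.
            var worker = new Thread(() => Console.WriteLine("working"))
            {
                Priority = ThreadPriority.AboveNormal
            };
            worker.Start();
            worker.Join();
        }
    }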
Hope that helps.
I am building an ASP.NET website that will collect data from a user and submit it to a third-party web service. The web service is somewhat unreliable, and for this reason there is a backup service.
If a call to the primary service fails (a timeout or some other error), then I need to flip a bit in a static class, which will trip the system over to using the secondary service.
I then need to start polling the primary service (with dummy data) to see if it is back up; once I receive an OK code in return, I need to flip the bit back so that the website starts using the primary service again.
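For what it's worth, a minimal sketch of the "bit in a static class" idea described above (all names are hypothetical; volatile keeps the flag visible across request threads):

    public static class ServiceSelector
    {
        private static volatile bool _usePrimary = true;

        public static bool UsePrimary
        {
            get { return _usePrimary; }
        }

        // Call when a request to the primary service times out or errors.
        public static void TripToSecondary() { _usePrimary = false; }

        // Call when the poller gets an OK back from the primary service.
        public static void RestorePrimary() { _usePrimary = true; }
    }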
I've had a read of Should I use a Windows Service or an ASP.NET Background Thread? and I think that separating the code out into a Windows service would be the cleanest way of performing the polling, but then how would I communicate with the web application?
One thought I've had is to expose a web service that the Windows service could use to communicate with the web app, but this seems both messy and like overkill.
I'd appreciate your thoughts and experiences performing similar tasks.
Thanks
I think the Windows service is the way to go, definitely.
As for the communication between the service and your website, the best answer depends on the size and scale of your solution. If you are building something that needs to be reliable, I'd suggest you implement some sort of queue between your ASP.NET site and your Windows service. You have a lot of options here too, depending on budget and ability: BizTalk, MSMQ, and SQL Server Service Broker queues. Alternatively, if you are looking for something smaller scale, I'd recommend you just stick it in a database table somewhere.
I would avoid using files on the file system, because you will run into file-locking and multithreading issues. I would also avoid communicating with the service directly, because you risk losing the in-memory queue if the service fails for any reason.
Edited to add:
If reliability isn't a concern here, you could use a WCF service hosted over named pipes for communication between your website and your Windows service. This avoids much of the overhead normally involved in classic web services and is surprisingly quick. The only downside is that self-hosting a WCF service is tricky, and it can be difficult to keep the service up.
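A minimal self-hosting sketch over named pipes, assuming a hypothetical IStatusService contract (the address and operation are illustrative):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IStatusService
    {
        [OperationContract]
        void SetPrimaryUp(bool isUp);
    }

    public class StatusService : IStatusService
    {
        public void SetPrimaryUp(bool isUp)
        {
            Console.WriteLine("Primary up: " + isUp);
        }
    }

    class Host
    {
        static void Main()
        {
            // Named pipes only connect processes on the same machine,
            // which fits a website and Windows service on one box.
            using (var host = new ServiceHost(typeof(StatusService),
                new Uri("net.pipe://localhost/status")))
            {
                host.AddServiceEndpoint(typeof(IStatusService),
                    new NetNamedPipeBinding(), string.Empty);
                host.Open();
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }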
I have two different web services (running on the local machine) pointing to one application pool. (1. Can I do that? Is there any performance concern?) I don't have much knowledge about how application pools work.
The other .NET application uses the two web services, but frequently one web service, which is called internally by an SSIS package within the .NET application, does not respond.
What might be the reason, and how can I make sure it responds all the time? Is there a better way to improve the performance?
If I am missing anything or further information is needed, comments are welcome.
Yes, you can have multiple web applications using the same application pool.
Is it a performance concern? If there is really high traffic or faulty code, then perhaps.
Application pools allow pushing sites into different processes, reducing the risk of one affecting another. If one app pool contains an application that has a memory leak, the leak will only affect that particular process, at least directly. Each process can be recycled based on time or system parameters, which mitigates the risk of something being left in a bad state.
Performance? Another benefit of app pools is the ability to have multiple instances of a pool's worker process running simultaneously (a web garden; a similar effect to putting each app in its own pool). The benefit of this is that more requests can be handled at a time. The downside is that you cannot use in-process session state, and your application state will be duplicated for each instance of the process. You would need to consider how much 'stuff' you keep in session and how your caching scheme would be affected, but it has the potential to give a web application more scalability.
You mention calling SSIS... I am assuming that is a long-running operation, so you would probably want to push that call to some sort of queue that can be processed outside of the web service request; MSMQ might work for you. If you use a queue like that, you would initiate the running of the code and then have a way of checking the status of the call to see whether it is done.
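A minimal MSMQ sketch of that hand-off using System.Messaging (the queue path and message body are made up for illustration):

    using System;
    using System.Messaging; // reference System.Messaging.dll

    class QueueDemo
    {
        const string Path = @".\private$\ssisJobs";

        static void Main()
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path);

            // Web service side: enqueue the request and return at once.
            using (var queue = new MessageQueue(Path))
                queue.Send("run-package:LoadWarehouse", "SSIS job request");

            // Worker side: drain the queue and start the long-running job,
            // recording its status somewhere the caller can poll.
            using (var queue = new MessageQueue(Path))
            {
                queue.Formatter =
                    new XmlMessageFormatter(new[] { typeof(string) });
                Message msg = queue.Receive(TimeSpan.FromSeconds(5));
                Console.WriteLine("Dequeued: " + msg.Body);
            }
        }
    }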
I agree with Greg Ogle, but one more point is worth mentioning: splitting the applications into multiple app pools also gives you an added benefit when it comes to troubleshooting. If the various applications are split out, you can tell specifically which app pool is related to which w3wp.exe process in a time of need, like when a w3wp.exe process is taking 98% of your CPU.