I'm going to deploy a SignalR application on an Azure worker role (Cloud Services). To make it work I found this. I noticed a while (true) loop with a Thread.Sleep of 10 seconds. Do you think this is the best interval to keep the server alive, or can I increase it?
I think the sole purpose of that Thread.Sleep(10000) is to avoid flooding the logs with that Trace statement. If you're happy to get a log entry saying "Working" only every minute or so, I'd say increase it.
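For context, the loop in question follows the standard worker role template; a rough sketch (not the exact sample code):

using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // The role is considered healthy for as long as Run() doesn't
        // return. The sleep only paces the trace output below, so a
        // longer interval just means fewer "Working" lines in the logs.
        while (true)
        {
            Thread.Sleep(10000);
            Trace.TraceInformation("Working");
        }
    }
}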
I have a busy ASP.NET Core app (thousands of requests per second) that uses SQL Server. Recently we decided to try switching some hot code paths to async database access and... the app didn't even start. I get this error:
The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
And I see the number of threads in the thread pool growing to 40... 50... 100...
The code pattern we use is fairly simple:
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();
    var data = await cn.QueryAsync("SELECT x FROM Stuff WHERE id = @id", new { id }); // QueryAsync is from Dapper
}
I made a process dump and all threads are stuck on the cn.Open() line, just sitting there and waiting.
This mostly happens during application "recycles" on IIS, when the app process is restarted and HTTP requests are queued over from one process to another, resulting in tens of thousands of requests in the queue that need to be processed.
Well, yeah, I get it. I think I know what's happening: async makes the app scale more, and while the database is busy responding to my query, control is returned to other threads, which try to open more, and more, and more connections in parallel, so the connection pool maxes out. But why aren't the closed connections returned to the pool immediately after the work is finished?
Switching from async to "traditional" code fixes the problem immediately.
What are my options?
Increasing max pool size from the default 100? Tried 200, didn't help. Should I try, like, 10000?
Using OpenAsync instead of Open? Didn't help.
I thought I was running into this problem: https://github.com/dotnet/SqlClient/issues/18 - but nope, I'm on a newer version of SqlClient and it's said to be fixed. Supposedly.
Not use async with database access at all? Huh...
Do we really have to come up with our own throttling mechanisms when using async, like this answer suggests? I'm surprised there's no built-in workaround...
P.S. Taking a closer look at the process dump: in the Tasks report I discovered literally tens of thousands of blocked tasks in the waiting state, and exactly 200 db-querying tasks (which matches the connection pool size) waiting for queries to finish.
Well, after a bit of digging, investigating source code, and tons of reading, it appears that async is not always a good idea for DB calls.
As Stephen Cleary (the god of async, who wrote many books about it) nailed it - and it really clicked with me:
If your backend is a single SQL Server database, and every single request hits that database, then there isn't a benefit from making your web service asynchronous.
So, yes, async helps you free up some threads, but the first thing these threads do is rush back to the database.
Also this:
The old-style common scenario was client <-> API <-> DB, and in that architecture there's no need for asynchronous DB access.
However, if your database is a cluster or a cloud or some other "autoscaling" thing, then yes, async database access makes a lot of sense.
Here's also an old article by Rick Anderson (via archive.org) that I found useful: https://web.archive.org/web/20140212064150/http://blogs.msdn.com/b/rickandy/archive/2009/11/14/should-my-database-calls-be-asynchronous.aspx
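That said, if you do keep async DB access, the throttling the question mentions can be as simple as a SemaphoreSlim gate in front of the pool. A minimal sketch, assuming Dapper; the helper class and the limit of 50 are my own choices, not anything built in:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;
using Dapper;

public static class ThrottledDb
{
    // Keep concurrent DB calls below the pool's max size so queued
    // requests wait here, cheaply, instead of all fighting over
    // pooled connections at once. Tune the limit for your load.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(50);

    public static async Task<IEnumerable<T>> QueryAsync<T>(
        string connectionString, string sql, object param = null)
    {
        await Gate.WaitAsync();
        try
        {
            using (var cn = new SqlConnection(connectionString))
            {
                await cn.OpenAsync();
                return await cn.QueryAsync<T>(sql, param);
            }
        }
        finally
        {
            Gate.Release();
        }
    }
}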
I am currently using SignalR in an ASP.NET MVC 4 application. I am consuming messages from RabbitMQ, and in essence need to broadcast via SignalR whenever a new message hits the queue. The problem is that under normal usage there is no place within IIS where a long-lived object can live. I am getting about 1000 messages a second, so the standard approach of pushing each message into IIS via a request from an outside queue-monitoring service/app would pretty much kill my IIS.
I have a general idea of creating a singleton instance on a background thread. I'm not sure what's the best way to do this in IIS; I would want the singleton to be recreated automatically if the application dies.
Are you thinking you would have something in a background thread that would check for messages every so often?
I have used Quartz.NET to create scheduled jobs in the past. They are fairly simple to set up: you can basically just say, execute this job every x interval starting at y time. Whatever solution you go with, you will probably need to add in error handling. I think your Quartz job would keep trying to execute every x interval even if it threw an exception, but you will need to make sure that you clear out whatever caused the exception in the first place; otherwise every run will fail, e.g. if there is something wrong with your message such that it errors every time you try to broadcast it.
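A minimal sketch of that setup, assuming Quartz.NET 2.x (in 3.x the same calls become async); the job class, the 5-second interval, and the startup hook are my own choices:

using Quartz;
using Quartz.Impl;

public class BroadcastJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Pull pending messages and broadcast them via SignalR here.
        // Catch and log per-message errors so one bad message doesn't
        // make every subsequent run fail.
    }
}

public static class SchedulerSetup
{
    // Call once at startup, e.g. from Application_Start().
    public static void Start()
    {
        var scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        var job = JobBuilder.Create<BroadcastJob>().Build();
        var trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInSeconds(5).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}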
Watch out for the IdleTimeout of your application. If IIS puts your web app to sleep, the singleton in your background worker/Quartz job goes to sleep too. If you set IdleTimeout to 0, your application will never sleep.
If you init your job/worker in Application_Start() in Global.asax.cs, your job will always start when your web app does.
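A minimal sketch of that wiring; the hub name, queue name, and host are my own placeholders, and I'm assuming the older RabbitMQ.Client API (where the delivered body is a byte[]) alongside SignalR 2:

using System.Text;
using Microsoft.AspNet.SignalR;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class MessagesHub : Hub { }

public static class QueueBroadcaster
{
    private static IConnection _connection;
    private static IModel _channel;

    // Call once from Application_Start(). The consumer runs on the
    // RabbitMQ client's own thread for the lifetime of the app domain
    // and pushes each message straight to all SignalR clients, so no
    // external service has to POST 1000 requests/sec into IIS.
    public static void Start()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        _connection = factory.CreateConnection();
        _channel = _connection.CreateModel();

        var consumer = new EventingBasicConsumer(_channel);
        consumer.Received += (sender, args) =>
        {
            var body = Encoding.UTF8.GetString(args.Body);
            GlobalHost.ConnectionManager
                      .GetHubContext<MessagesHub>()
                      .Clients.All.newMessage(body);
        };
        _channel.BasicConsume("my-queue", true, consumer);
    }
}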
When you first deploy your app, or update or restart it, you will need to make sure your app is actually running. I'm not sure if there is a setting for this in IIS, but normally your application doesn't start up until a request is made to it. Good luck. Let me know if you find a solution to this.
Same deal if your app crashes for some other reason: you need something to re-start your web app.
Hope that helps!
My shared hosting provider has set IIS to recycle the app pool after 3 minutes of idle time.
So my session factory is often recreated (at application startup). As I have about 70-100 entities, it takes about 2-5 seconds to construct the factory, so the cold start of my application is rather long. I don't have access to the IIS settings.
You can offset a lot of the cost of setting up your factory by generating your proxies at build time instead of runtime. This article explains the steps.
Being realistic, the simplest change is to ask that the app pool not be recycled so frequently, since this is an expensive operation for your application. I'm sure they've set the timeout very low as a "performance" setting, but really it is generating work and slowing things down.
You might not have access to the IIS settings directly, but this shouldn't stop you from contacting your supplier's technical support and getting it resolved.
If you are in a full-trust environment (doubtful, but the provider may be willing to work with you on this), you can try serializing your configuration so it doesn't need to be rebuilt each time. Merging all your entity mappings into a single XML doc can also help (just do this as a build step so it's not a nightmare to work with the mappings).
More info here: http://nhibernate.info/blog/2009/03/13/an-improvement-on-sessionfactory-initialization.html
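The article's trick, roughly: NHibernate's Configuration is serializable, so you can cache the built object on disk and skip re-parsing the mappings on each recycle. A sketch with my own cache-file handling (staleness checks omitted; delete the file whenever the mappings change):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using NHibernate.Cfg;

public static class ConfigurationCache
{
    public static Configuration Load(string cacheFile)
    {
        // Reuse the serialized Configuration if we have one...
        if (File.Exists(cacheFile))
        {
            using (var stream = File.OpenRead(cacheFile))
                return (Configuration)new BinaryFormatter().Deserialize(stream);
        }

        // ...otherwise build it the slow way and cache it for next time.
        var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
        using (var stream = File.Create(cacheFile))
            new BinaryFormatter().Serialize(stream, cfg);
        return cfg;
    }
}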
Have you tried to stop your site from being idle in the first place? I use Uptime Robot, which is free and pings your site every 5 minutes. The benefit of this service is that it only requests the headers of the page you set up as a monitor, and therefore does not affect logging such as Google Analytics.
That said, you will need to test when your app does indeed recycle, to see whether Uptime Robot works with your shared hosting provider. The best way is to log every time the session factory is rebuilt.
Not much you can do; an app pool recycle shuts down your app...
I guess you could try to fool the recycler by having the application do something every 2:45.
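If you want to try that, a self-ping on a timer is about the simplest version; a sketch, with the URL left as a parameter (note this only helps against the idle timeout, not scheduled recycles, and your host may frown on it):

using System;
using System.Net;
using System.Threading;

public static class KeepAlive
{
    private static Timer _timer;

    // Call from Application_Start(). Requests the site's own URL every
    // 2:45, just under the 3-minute idle threshold, so IIS always sees
    // recent traffic.
    public static void Start(string siteUrl)
    {
        _timer = new Timer(_ =>
        {
            try
            {
                using (var client = new WebClient())
                    client.DownloadString(siteUrl);
            }
            catch { /* ignore; the next tick will try again */ }
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(165));
    }
}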
This question is about limits imposed on me by ASP.NET (like script timeouts, etc.).
I have a service running under ASP.NET and I want to create a counterpart service for monitoring.
The main service's data is located in a database.
I was thinking about having the monitor service query the database at 1-second intervals, within a loop, driven by an HTTP request issued by the remote client.
The actual serving of this monitoring will be done by a client HTTP request, which will make the script loop (written in C#); when new data is detected, it will aggregate that data into that one looping request's output buffer, send it, and exit the loop, thus finishing the request.
The client will have to issue a new request in order to keep getting updates.
This is actually exactly like TCP (precisely like Windows IOCP): you request data from the service and wait for it, and when it arrives you fire another request.
My actual question is: have you done this before? How did it go? Am I limited by some (configurable) limits imposed by the IIS/ASP.NET framework? What are my limits in such a situation, or what are better options that don't complicate things too much?
Note that I do not expect many such monitoring requests at a time, maybe a few dozen.
This means, however, that 10 such concurrent monitoring requests will keep 10 threads busy. The question is: can that hurt IIS or performance? How will IIS handle 10 busy threads? Will it spawn more? What are the limits? This is just one example of a limit I can think of.
I think your main concern in this situation would be timeouts, which are pretty much configurable. But I think it is the wrong solution: you'd be better off with some background service, running constantly/periodically, writing the monitoring data to some data store; your monitoring page would then just return it upon request.
If you want your page to display something only when monitoring data is available, implement it with AJAX: on page load, query the monitoring service; if some monitoring events are available, render them; if not, sleep and query again.
IMO this would be a much better solution than really long-running requests.
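A minimal sketch of that split; all names are my own placeholders and the actual DB query is stubbed out:

using System;
using System.Threading;
using System.Web;

// The background side: polls the database once per second and caches
// the result, instead of doing that inside a long-lived HTTP request.
public static class MonitorStore
{
    private static volatile string _latestJson = "{}";
    private static Timer _timer;

    public static void StartPolling()
    {
        _timer = new Timer(_ => _latestJson = QueryDatabaseAsJson(),
                           null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
    }

    public static string Latest { get { return _latestJson; } }

    private static string QueryDatabaseAsJson()
    {
        return "{}"; // placeholder for the real database query
    }
}

// The request side: returns instantly, so the AJAX poll never ties up
// a thread for longer than one cheap read.
public class MonitorHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/json";
        context.Response.Write(MonitorStore.Latest);
    }
}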
I don't think it's a very good idea to monitor a service using ASP.NET, for the following reasons...
What happens when your application pool crashes?
What if you decide to do an IISReset? Which application will come up first... the main app, or the monitoring app?
What if the monitoring application hangs due to load?
What if the load is already high on the main service? Wouldn't monitoring it every second increase the load on the primary service, as well as on IIS?
You get the idea...
I have an ASPX web application that updates or adds files in a database. The clients access it through the browser, and one of the requirements is that they can start the update and be able to close the browser while the update continues. It appears to run for a little while after I close the browser, but then it stops. How can you keep the application running in ASP.NET?
That's something you could very well solve with WF (Workflow Foundation). Create a workflow for the task that should survive closing the browser. Workflows have their own threads and lifecycles, separate from ASP.NET.
The web application will keep running in the application pool, but this will be recycled eventually. As long as the user's session runs, the application should be kept alive, so upping the session timeout may fix the problem.
A better approach though would be to move the long-running task into a service instead, but that may require a rewrite of your application.
Usually for long-running or asynchronous processing, you want to dispatch the request to a back-end service to handle it. Trying to keep the web app alive to finish processing can lead to problems, especially with HTTP and session timeouts.
A common pattern for this is to put the request on a message queue and let a back-end service process it when it can.
I would create a separate Windows service that you can push jobs onto from your web application, then check the status of the job(s) when the user logs in again.
The Windows service won't be tied to the ASP.NET app domain, so it will continue to run regardless of what's happening in your web application.
I've run into this pattern, and you have to decouple the work from the HTTP request. The way we've solved it is to abstract the computation to be done as an event to be scheduled. Say a user at a browser takes an action that requires a (relatively) long-lived computation on the back end; this computation is given a name like 'doXYZForUser' and a parameter vector like (userId, params...) and sent off to the work queue. Some time in the future the user logs in again and can see the status of their job.
I'm running a Java stack and a Java Message Service (JMS), but the principle is the same. The request from the browser queues up an event, and the browser gets an ACK back saying the event is on the work queue. The queue is managed by an entirely separate process, which in .NET I believe is called Message Queuing (MSMQ). The job comes up on the queue and gets processed, and the results can be placed in a separate table containing a reference to the user that kicked off the job, so the next time they log in, job status/results can be returned.
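In .NET terms, a rough sketch of that hand-off, assuming MSMQ via System.Messaging; the queue path, class names, and message label are my own placeholders:

using System.Messaging;

public class UpdateJob
{
    public string UserId { get; set; }
    public string FileId { get; set; }
}

// Web app side: enqueue and return immediately; the browser can close.
public static class JobQueue
{
    private const string QueuePath = @".\private$\file-updates";

    public static void Enqueue(UpdateJob job)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(UpdateJob) });
            queue.Send(job, "UpdateFiles");
        }
    }
}

// Windows service side: block until a job arrives, process it, repeat.
public static class JobWorker
{
    public static void Run()
    {
        using (var queue = new MessageQueue(@".\private$\file-updates"))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(UpdateJob) });
            while (true)
            {
                var message = queue.Receive(); // blocks until a message arrives
                var job = (UpdateJob)message.Body;
                // ...perform the database update, then record the status
                // somewhere the web app can read on the user's next login.
            }
        }
    }
}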