Would someone please explain to me why I would need to persist a workflow in a database? I'm just trying to understand the concepts.
Workflows are often long-running in nature, lasting weeks or months, and keeping them in memory means you can't recycle the application or the machine. By saving the state to disk, i.e. a database, you can restart the process or the machine. Keeping workflows in memory when they aren't doing anything also wastes memory and thus hinders scalability. Finally, saving state in a database means you can restart the workflow from that state, so it also helps with error handling.
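For a concrete picture, here is a minimal sketch of how this looks with Windows Workflow Foundation and SqlWorkflowInstanceStore. The connection string and the activity are placeholders, and it assumes the WF persistence schema is already installed in the database:

```csharp
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;

class WorkflowPersistenceSketch
{
    static void Main()
    {
        // Placeholder connection string; assumes the WF persistence schema
        // has already been installed in the WorkflowStore database.
        var store = new SqlWorkflowInstanceStore(
            "Server=.;Database=WorkflowStore;Integrated Security=True");

        // Placeholder activity; a real workflow would be long-running.
        var app = new WorkflowApplication(new Sequence()) { InstanceStore = store };

        // When the workflow goes idle (e.g. waiting weeks for an approval),
        // save its state to the database and unload it from memory.
        app.PersistableIdle = e => PersistableIdleAction.Unload;

        app.Run();

        // Later, even after the process or machine restarts, a new
        // WorkflowApplication built from the same definition can resume
        // the saved state with Load(instanceId).
    }
}
```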
Since we moved the majority of our users to our ASP.NET web application running as a web app in Azure, we've experienced an intermittent issue where the application will crash for all users, responding only with time-outs or 502/503 errors. This usually occurs after we've made a configuration change (like changing an app setting in the portal) or swapped slots during a deployment. The very frustrating thing is that there appears to be no way to get it back until it eventually sorts itself out. During and immediately before the outage the diagnostics look fine: minimal CPU and memory usage. There are lots of errors, but they're mostly timeout errors. The problem is not resolved by scaling out or up, and application restarts have no effect. Even killing w3wp does not bring the app back. CPU profiles taken during the outage show failed requests but not much else.
Does anyone know what might be going on here or have any ideas of what we could try?
When running your .NET processes in the Azure Web App environment, they may intermittently crash due to code or performance issues. It's important to capture a crash dump automatically when such a crash/exception happens, for further investigation.
Here is a CrashDiag site extension, which can easily help capture the necessary data when an intermittent unhandled exception happens. To capture dumps for exceptions, you can refer to this article.
And since, as you said, you only get timeout or 502/503 errors, here is an article you can refer to for troubleshooting those.
This issue was actually being caused by our ORM writing to the TraceWriter. A configuration flag had been left on in the production environment due to a change in the deployment process. The TraceWriter is thread safe, so it was taking locks and blocking threads. Busy periods would cause writes to the TraceWriter to start queuing, leading to a non-responsive application.
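This isn't the actual fix, just a minimal sketch of the kind of guard that keeps ORM trace output off unless it is explicitly enabled. The "Orm.VerboseTracing" app setting name and the TraceQuery hook are hypothetical, and HttpContext.Current.Trace is only a stand-in for whatever TraceWriter the ORM targets:

```csharp
using System;
using System.Configuration;
using System.Web;

public static class OrmTraceConfig
{
    // Hypothetical app setting name; the real flag depends on the ORM in use.
    private static readonly bool VerboseTracingEnabled =
        string.Equals(ConfigurationManager.AppSettings["Orm.VerboseTracing"], "true",
                      StringComparison.OrdinalIgnoreCase);

    public static void TraceQuery(string sql)
    {
        // Only touch the (lock-protected) trace writer when the flag is
        // explicitly on, so production traffic never serializes on trace output.
        if (VerboseTracingEnabled && HttpContext.Current != null)
        {
            HttpContext.Current.Trace.Write("ORM", sql);
        }
    }
}
```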
I have a session wrapper class in my ASP.NET MVC application that is used to store frequently used data (like current user info, supplier info, etc.) in Session. Session runs InProc. Everything works perfectly, but I know it is bad design to run session in InProc mode, as it is not scalable and is tightly coupled to the application pool. I didn't want to use SQL Server for managing sessions, as that seems like a last resort and speed is the number 1 priority for us. After doing a bit of research, Redis looked like the fastest option (compared to MongoDB, RavenDB, etc.), so I used this provider: https://github.com/TheCloudlessSky/Harbour.RedisSessionStateStore. After implementing it as per the instructions, it worked. But now I am getting occasional slowdowns of the site, where pages sometimes (probably 30% of the time) load very, very slowly. As soon as I switch back to InProc mode everything runs fine. I wonder if I installed Redis incorrectly or if there are some tricks needed to make it run smoothly. Can anyone help? If you need bits of code I can provide them, but it is pretty much the same as the sample at https://github.com/TheCloudlessSky/Harbour.RedisSessionStateStore. CPU and memory usage seem quite low and stable.
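For illustration, a minimal sketch of the kind of session wrapper described (hypothetical names; the real class will differ), along with the serialization requirement that matters once session state moves out of process:

```csharp
using System;
using System.Web;

// Hypothetical sketch of a session wrapper like the one described.
public static class SessionWrapper
{
    private const string CurrentUserKey = "CurrentUser";

    public static UserInfo CurrentUser
    {
        get { return HttpContext.Current.Session[CurrentUserKey] as UserInfo; }
        set { HttpContext.Current.Session[CurrentUserKey] = value; }
    }
}

// Anything stored in session must be serializable once state moves
// out of process (e.g. to Redis or SQL Server).
[Serializable]
public class UserInfo
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```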
There was an issue with the Redis provider NuGet package. I contacted the package owner and a new version was released, which seems to work fine.
I have a question about ASP.NET web site and MSSQL database deployment. We are hosting an ASP.NET web site and have developed a new version: some ASP.NET files have changed and the database has been modified slightly. What is the best way to upload the new version of the web site and upgrade the MSSQL database without downtime?
I've managed a large website for the past 5 years with monthly releases and managed to have zero downtime more than 95% of the time. The key, unfortunately, is ensuring that the database is always backwards compatible, but only with the previous release, so you have the opportunity to roll back.
So if, for example, you plan to drop a column that your application depends on:
Change your application code to not depend on the column, and release that (without removing the column from the database).
In the next release drop the column (as the application no longer relies on it).
It takes some discipline from your dev team, but it is surprisingly easy to achieve if you have the right environments set up (dev/test/staging/production).
When you release:
Deploy database changes to a staging environment, which is as close to production as possible. Do this preferably in an automated fashion, using something like SQL Compare and SQL Data Compare, so you know the database is completely up to date with your test environment.
Perform "smoke tests" using the old application, but the new database schema, ensuring no major breaking changes have been introduced to the database.
Release your application code.
Smoke test your staging application.
Release to production.
Another thing we do to ensure zero downtime on the website is blue-green deployment. This involves having 2 folders for each website, updating one and switching the IIS home directory once it is up to date. I've blogged about this here: http://davidduffett.net/post/4833657659/blue-green-deployment-to-iis-with-powershell
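The linked post does the switch with PowerShell; as a rough alternative sketch, the same folder switch can be done from C# with Microsoft.Web.Administration. The site name and folder paths below are placeholders:

```csharp
using System;
using Microsoft.Web.Administration;

class BlueGreenSwitch
{
    static void Main()
    {
        // Placeholder site name and folders; deploy to the inactive folder
        // first, then repoint IIS at it so the switch is near-instant.
        const string siteName = "MyWebsite";
        const string blue = @"C:\Sites\MyWebsite-blue";
        const string green = @"C:\Sites\MyWebsite-green";

        using (var manager = new ServerManager())
        {
            var vdir = manager.Sites[siteName].Applications["/"].VirtualDirectories["/"];

            // Flip the home directory to whichever folder is not currently live.
            vdir.PhysicalPath = vdir.PhysicalPath.Equals(blue, StringComparison.OrdinalIgnoreCase)
                ? green
                : blue;

            manager.CommitChanges();
        }
    }
}
```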
Don't do it. Period.
Zero-downtime installs are VERY hard to do and involve multiple copies of the database, prechecking in a staging environment, careful programming and resyncing of the database.
It is pretty much always better to take a small downtime. Stay up late and deploy at 2 in the morning, or get up early. Identify when it is least inconvenient for your users.
100% uptime is VERY expensive to implement in terms of the time spent on it. Unless there is a strict business case for it, occasional downtime is a much saner business decision.
Even large sites like salesforce.com and ebay.com have scheduled maintenance windows in which at least portions of those sites are unavailable for a certain period of time due to changes to the backends.
For eBay it's every Thursday night and lasts for 4 hours, in which "certain features may be slow or unavailable during this time". For Salesforce, they schedule maintenance and notify users as needed.
Depending on your site, you might be better off scheduling a 1-hour window at some late hour when your site is at its lowest traffic level. Notify users ahead of time: 1 week before, 1 day before, and 1 hour before.
Prior to taking it offline, make sure you test your deployment against a copy of your current production database on a different server. This will give you an idea of any problems you could run into, as well as exactly how long it should take. Double that number when notifying users. Run the tests multiple times, both to confirm how long it will take and to verify data consistency.
Duffman has a good answer with regards to running versions in parallel for a very short window in order to get the updates pushed. However, there is usually a reason for data model changes, and it's typically better to transform all of your existing data at the time of deployment. Running this transformation might invalidate certain transactions while it's going on and result in corrupted data.
Having gone through many "hot" production pushes I can say with 100% certainty that neither I nor my clients ever want to deal with those again. There is absolutely no room for error.
I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast, but occasionally it is just really, really slow. I mean 5-10 second load times kind of slow. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company. But the problems continue.
I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. I can then look back at the logs and see exactly what my website was doing at the time of extreme slowness.
Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?
Every intermittent performance problem I ever had turned out to be caused by something in the database.
You need to check out my blog post Unexplained-SQL-Server-Timeouts-and-Intermittent-Blocking. No, it's not caused by a heavy INSERT or UPDATE process like you would expect.
I would run a database trace for half a day. Yes, the trace has to be done on production, because the problem doesn't usually happen in a low-use environment.
Your trace log rows will have a "Duration" column showing how long each event took. Look at the long-running ones, and at the ones just before them that might be holding them up. Once you find the pattern, you need to figure out how things are working.
IIS 7.0 has built-in ETW tracing capability. ETW is the fastest, lowest-overhead logging; it is built into the kernel. With respect to IIS, it can log every call. The best part of ETW is that you can include everything in the system and get a holistic picture of the application and the server. For example, you can include registry, file system and context-switch events, and get call stacks along with durations.
Here is a basic overview of ETW, and one specific to IIS; I also have a few posts on ETW.
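If you want to collect ETW programmatically rather than through the built-in tooling, here is a rough sketch using the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package. The session name, provider keywords and 30-second duration are only illustrative; it is not tied to the articles above:

```csharp
using System;
using Microsoft.Diagnostics.Tracing.Parsers;
using Microsoft.Diagnostics.Tracing.Session;

class EtwSketch
{
    static void Main()
    {
        // Requires admin rights; the session name is arbitrary on Windows 8+.
        using (var session = new TraceEventSession("MyHolisticTraceSession"))
        {
            // Kernel events are what give the holistic picture:
            // processes, disk IO, context switches, etc.
            session.EnableKernelProvider(
                KernelTraceEventParser.Keywords.Process |
                KernelTraceEventParser.Keywords.DiskIO);

            // Print process starts as a simple demonstration of handling events.
            session.Source.Kernel.ProcessStart += data =>
                Console.WriteLine($"{data.TimeStamp:O} started {data.ProcessName}");

            // Stop the session after 30 seconds so Process() returns.
            using (new System.Threading.Timer(_ => session.Stop(), null,
                       TimeSpan.FromSeconds(30), System.Threading.Timeout.InfiniteTimeSpan))
            {
                session.Source.Process();   // blocks, dispatching events as they arrive
            }
        }
    }
}
```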
I would start by monitoring the ASP.NET-related performance counters. You could even add your own counters to your application if you wanted. Also, look at the number of w3wp.exe processes running at the time of the slowdown vs. normal, and look at their memory usage. Sounds to me like a memory leak that eventually results in a termination of the worker process, which of course fixes the problem, temporarily.
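As a rough illustration of watching those counters from code: the category and counter names below are the standard built-in ones, but the instance names ("w3wp", "__Total__") vary by machine and app pool:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterSketch
{
    static void Main()
    {
        // Built-in counters; instance names vary per machine and app pool.
        using (var requests = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__"))
        using (var workingSet = new PerformanceCounter("Process", "Working Set", "w3wp"))
        {
            for (int i = 0; i < 10; i++)
            {
                // The first NextValue() for rate counters returns 0;
                // subsequent reads are meaningful.
                Console.WriteLine($"Requests/sec: {requests.NextValue():F1}, " +
                                  $"w3wp working set: {workingSet.NextValue() / (1024 * 1024):F0} MB");
                Thread.Sleep(1000);
            }
        }
    }
}
```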
You don't provide specifics of what your application is doing in terms of the resources (database, networking, files) that it is using. In addition to the steps from the other posters, I would take a look at anything that is happening "out of process", such as:
Database connections
Files opened
Network shares accessed
...basically anything that is not happening in the ASP.NET process.
I would start off with the following list of items:
Turn on ASP.Net Health Monitoring to start getting some metrics & numbers.
Check the memory utilization on the server. Does periodically recycling IIS remove the issue (a memory leak?)?
ELMAH is a good tool to start looking at the exceptions. Also, go through the logs your application might be generating.
Then I would look for anti-virus software running at a particular time, long-running processes that might be slowing down the machine, a database backup schedule, etc.
HTH.
Of course ultimately I just want to solve the intermittent slowness issues (and I'm not yet sure if I have). But in my initial question I was asking for a rather specific logger.
I never did find an answer for that so I wrote my own stopwatch threshold logging. It's not quite as detailed as my initial idea but it has the benefit of being very easy to apply globally to a web application.
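The general idea (a simplified sketch, not the exact code) is an IHttpModule that times every request and logs only the ones over a threshold; the threshold value and the logging target are placeholders:

```csharp
using System;
using System.Diagnostics;
using System.Web;

// Sketch of stopwatch threshold logging: time every request in an IHttpModule
// and log only the ones that exceed a threshold. Register the module in
// web.config under <system.webServer>/<modules>.
public class SlowRequestLoggingModule : IHttpModule
{
    private static readonly TimeSpan Threshold = TimeSpan.FromSeconds(3);

    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) =>
        {
            // Start a stopwatch for this request and stash it in the context.
            ((HttpApplication)sender).Context.Items["RequestStopwatch"] = Stopwatch.StartNew();
        };

        context.EndRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            var sw = app.Context.Items["RequestStopwatch"] as Stopwatch;
            if (sw != null && sw.Elapsed > Threshold)
            {
                // Swap in your favorite logger; Trace is just for illustration.
                Trace.WriteLine(
                    $"Slow request ({sw.Elapsed.TotalSeconds:F1}s): {app.Request.RawUrl}");
            }
        };
    }

    public void Dispose() { }
}
```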
In my experience, performance-related issues are almost always IO related and rarely the CPU.
One way to get a gauge on where things are at, without writing instrumentation code or installing software, is to use Performance Monitor in Windows to see where the time is being spent.
Another quick way to get a sense of where problems might be is to run a small load test locally on your machine while a code profiler (like the one built into VS) is attached to the process to tell you where all the time is going. I usually find a few "quick wins" with that approach.
I've recently deployed an ASP.NET application to my shiny new VPS and while I'm happy with the general performance increase that a VPS can give over a shared hosting solution, I'm unhappy with the startup time of my application.
My web application takes a fair amount of time to start up when my client first hits it. I'm not running it in debug mode (disabled that in my web.config), and it doesn't have any real work to do on startup - I have no code in my application start event handler, I don't start any extra threads, nothing. The first time my client hits my application it takes a good 15-20 seconds to respond. Subsequent calls take 1-2 seconds, unless I wait a few minutes for my application to shut down. Then it's back to a 15-20 second startup time.
(I'm aware that my timing benchmark is very unscientific; those numbers should just give a feel for my app's startup performance.)
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
So, after that book-sized preface, here are my questions:
Is my understanding of ASP.NET's compilation incorrect? How does it actually work?
Is there a way I can force IIS to cache my binaries, or keep my application alive indefinitely?
If it's a bad idea to do either of the things in my previous question, why is it a bad idea, and what can I do instead to increase startup performance?
Thanks!
Edit: it appears my question is a slight duplicate of this question (I thought I did a better job of searching for an answer to this on here, haha). I think, however, that my question is more comprehensive, and I'd appreciate if it wasn't closed as a duplicate unless there are better, already-asked questions on here that address this.
IIS also shuts down your web app after a given idle period, depending on its configuration. I'm not as familiar with IIS7 and where this is configured, so you might want to do a little research on it (starting point?).
Is it bad? Depends on how good your code is. If you're not leaking memory or resources, probably not.
The other solution is to precompile your website. This might be the better option for you. You'll have to check it out and see, however, as it may come with a downside, depending on how you interact with your website.
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
That is correct. Specifically, the assemblies are built as shadow copies (not to be confused with the volume snapshot service / shadow copy feature). This enables you to replace the code in the folder on the fly without affecting existing running sessions. ASP.NET will detect the change, and compile new versions into the target directory (typically Temporary ASP.NET Files). More on that process: Understanding ASP.NET Dynamic Compilation
If it's purely the compilation time, then often the most efficient approach is to hit the website yourself after the recycle. Make a call at regular intervals to ensure that it is you who receives the 15-second delay, not your client.
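A crude sketch of that kind of keep-alive ping, run from any small always-on process or a scheduled task (the URL and interval are placeholders; IIS Application Initialization would work just as well):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class KeepAlivePing
{
    static async Task Main()
    {
        // Placeholder URL: point this at your own site so the first real
        // visitor never pays the warm-up cost after an idle shutdown.
        var client = new HttpClient();
        while (true)
        {
            try
            {
                var response = await client.GetAsync("https://example.com/");
                Console.WriteLine($"{DateTime.Now:T} warm-up ping: {(int)response.StatusCode}");
            }
            catch (HttpRequestException ex)
            {
                Console.WriteLine($"{DateTime.Now:T} warm-up ping failed: {ex.Message}");
            }
            await Task.Delay(TimeSpan.FromMinutes(5));
        }
    }
}
```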
I would be surprised if that is all compilation time, however (depending on hardware). Do you have a lot of static instances of classes? Do they do a lot of work on start-up?
Either with tracing or profiling you could probably quite quickly work out where the start-up time was spent.
As to why keeping a process around forever is a bad idea, I believe it's due to clear-up. No matter how well we look after our data or how well behaved the GC is, a good clear-up is performed by restarting the process. Things like fragmentation can go away, and any other resource issues that build up over time are cleared down. Therefore it is quite a bad idea to keep a server process running indefinitely.