It seems that my app, Evograph, shuts down automatically when no clients are present on the site. However, it restarts shortly after someone visits. For example, I routinely notice that it's "down," and upon checking back a few minutes later, everything is functional.
What may be causing this? My site requires sustained hosting - how can I fix it?
Thanks!
This is the normal behaviour for sites hosted by the meteor deploy service. As the service is currently free, there is no guaranteed SLA (see https://groups.google.com/forum/#!msg/meteor-talk/HqDvR1sF3-4/YEqrXpDqVGcJ). Apps typically shut down after a few hours of inactivity, then start right back up as soon as a web request comes in.
If your app takes a while to start up, it will appear to be down because it misses the timeout by which it should be responding. If you don't mind the app being killed in the background as long as it is back up as soon as someone visits (without ever appearing down), try removing work from your app's startup or making the startup more efficient.
There isn't much else you can do about this. You could wait for Meteor to release their commercial hosting solution, or use your own hosting provider, such as Amazon EC2, to run your Meteor site.
This may very well be a question that is too broad to answer, but any ideas would be incredibly beneficial. I have a web site where load times are incredibly slow in one environment but not the other. The time to first byte is around 15 seconds on most pages, and it takes this long on every page within the entire application, not only on first load. I have been troubleshooting the issue for several days now and feel completely lost as to the actual cause of the latency.
Now for a long explanation about the issue.
The environment is a Frankenstein's monster of different sources; from what I can gather, too many people have had their hands in it. I have carefully taken the time to compare the two environments and haven't identified a key difference. There are numerous things at play here, but I can summarize the main components.
It is a .NET web application built on Orchard CMS, running within IIS with a SQL Server backend. One dedicated server hosts the database and another dedicated server hosts the web application itself, which is pretty standard. The main difference between the environments is that the production site runs at Liquid Web while the new development site runs in AWS. The site will ultimately be migrated to AWS once the latency issues are resolved.
AWS has more than enough resources. In fact, production (Liquid Web) has been running into issues lately because CPU usage is nearly maxed out. There are many more resources in AWS, and neither of the servers appears to be using more than 1% or 2% of its available resources; I verified this.
If the issue is within the database, I'm not really sure where else to look. I used SQL Server Profiler on the database server to analyze traffic, and no transaction took more than half a second, aside from the Audit Login/Logout events (which, from my research, are normal behavior). The main database queries execute almost immediately after I try to navigate to a page within the site, not 15 seconds later when the page finally loads.
I had a thought that network traffic between the AWS application server and the database server could be bottlenecked somewhere. I also thought it could be an issue with the routing within the domain, such as the way DNS is set up, but that does not seem to be the case either... or perhaps it is, and I just haven't figured out the best way to troubleshoot it. Either way, resolving the application on localhost does not improve performance; the page still hangs for 15-20 seconds.
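For what it's worth, one way to quantify where the 15 seconds go (DNS lookup, TCP connect, or server think-time) is curl's timing breakdown. This is just a sketch with placeholder hostnames; the second command is meant to be run on the web server itself. If localhost shows the same time to first byte, DNS and routing are ruled out and the time is being spent inside the application.

    # Compare timing for the public hostname vs. localhost (placeholders).
    curl -s -o /dev/null -w "dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" http://example.com/
    curl -s -o /dev/null -w "dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" http://localhost/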
The vRAM usage for the site's application pool and the default app pool certainly does seem to be on the high end, if that makes a difference.
I have browsed the IIS logs and cannot find anything obvious. Granted, I don't have much experience with IIS and could be missing something. The Windows Event logs show nothing out of the ordinary either. There are some errors in both Liquid Web and AWS regarding printer drivers not being installed, but those have nothing to do with the application itself.
I am unsure how to check whether it has something to do with Orchard CMS. Granted, Orchard is just a package/framework that was migrated to the dev server directly along with the application itself, and I see nothing that would have changed within the environment.
The fact is that the two environments seem identical, yet one is running very slowly based on some factor that I just can't seem to identify.
Thank you!
Since we moved the majority of our users to our ASP.NET web application running as a Web App in Azure, we've experienced an intermittent issue where the application will crash for all users, responding only with timeouts or 502/503 errors. This usually occurs after we've made a configuration change (like changing an app setting in the portal) or swapped slots during a deployment. The very frustrating thing is that there appears to be no way to get it back until it eventually sorts itself out. During and immediately before the outage the diagnostics look fine: minimal CPU and memory usage. There are lots of errors, but they're mostly timeout errors. The problem is not resolved by scaling out or up, and application restarts have no effect. Even killing the w3wp process does not bring the app back. CPU profiles taken during an outage show failed requests but not much else.
Does anyone know what might be going on here or have any ideas of what we could try?
A .NET process running in the Azure Web App environment may intermittently crash due to code or performance issues. It's important to capture a crash dump automatically when such a crash or exception happens, so it can be investigated further.
The CrashDiag site extension can easily help capture the necessary data when an intermittent unhandled exception happens. To capture dumps for exceptions, you could refer to this article.
And since, as you said, you only get timeout or 502/503 error messages, here is an article you could refer to in order to troubleshoot those.
This issue was actually being caused by our ORM writing to the TraceWriter. A configuration flag had been left on in the production environment due to a change in the deployment process. The TraceWriter is thread-safe, so it was taking locks and blocking. Busy usage periods would cause requests to the TraceWriter to start queuing, leading to a non-responsive application.
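As an illustration of that failure mode (a sketch, not the actual ORM or TraceWriter code), here is how a single thread-safe writer serializes every request thread behind one lock, so total time grows with load instead of staying flat:

    // C# sketch: all callers funnel through one lock, as described above.
    using System;
    using System.Diagnostics;
    using System.Threading;
    using System.Threading.Tasks;

    class SerializedTraceWriter
    {
        private readonly object _gate = new object();

        public void WriteLine(string message)
        {
            lock (_gate)              // one writer at a time
            {
                Thread.Sleep(5);      // simulate slow trace I/O
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            var writer = new SerializedTraceWriter();
            var sw = Stopwatch.StartNew();

            // 100 concurrent "requests", each tracing 10 times: every
            // write queues behind the lock, so the slowdown compounds.
            Parallel.For(0, 100, _ =>
            {
                for (int i = 0; i < 10; i++)
                    writer.WriteLine("verbose ORM trace line");
            });

            Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds} ms");
        }
    }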
I've just moved my WordPress website to the OpenShift PaaS, on a scalable PHP cartridge. But I immediately noticed that the website is really slow to respond: around 3000-4000 milliseconds. BUT, once it starts to respond, the page loading/rendering is absolutely fast.
Here's the URL: http://gabrielebaldassarre.com
Just to give you a comparison, this static website is hosted in the same AWS region: http://extras.gabrielebaldassarre.com/tos5-4
For that reason, I first blamed the bottleneck on the nameservers I use (from Cloudflare, because I need a naked-domain CNAME), but according to an online tester they seem fine.
I wouldn't say my WordPress install is a vanilla config, but it's not a mammoth after all, and loading time once the response starts is fine.
I'm wondering if there is something wrong with HAProxy or with my OpenShift configuration, but I don't know how to check or what to do about it.
Any ideas?
OpenShift suspends and serializes apps that haven't seen much activity after a given period, and the first time they 'wake' they are deserialized, which takes time.
Since you're a free user, I'm assuming that your application is deployed on small gears. Depending on the size of your application, that may not be enough. Try signing up for the bronze plan and see if your application's performance improves on a medium or large gear.
I've recently deployed an ASP.NET application to my shiny new VPS, and while I'm happy with the general performance increase that a VPS gives over a shared hosting solution, I'm unhappy with the startup time of my application.
My web application takes a fair amount of time to start up when my client first hits it. I'm not running it in debug mode (disabled that in my web.config), and it doesn't have any real work to do on startup - I have no code in my application start event handler, I don't start any extra threads, nothing. The first time my client hits my application it takes a good 15-20 seconds to respond. Subsequent calls take 1-2 seconds, unless I wait a few minutes for my application to shut down. Then it's back to a 15-20 second startup time.
(I'm aware that my timing benchmark is very unscientific, those numbers should just give a feel for the performance on startup of my app).
My understanding of ASP.NET was that IIS (7.0, in this case) compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
So, after that book-sized preface, here are my questions:
Is my understanding of ASP.NET's compilation incorrect? How does it actually work?
Is there a way I can force IIS to cache my binaries, or keep my application alive indefinitely?
If it's a bad idea to do either of the things in my previous question, why is it a bad idea, and what can I do instead to increase startup performance?
Thanks!
Edit: It appears my question is a slight duplicate of this question (I thought I did a better job of searching for an answer to this on here, haha). I think, however, that my question is more comprehensive, and I'd appreciate it if it wasn't closed as a duplicate unless there are better, already-asked questions on here that address this.
IIS also shuts down your web app after a given idle period, depending on its configuration. I'm not as familiar with IIS 7 or where this is configured, so you might want to do a little research (starting point?).
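For reference, the knob in question on IIS 7.x is the application pool's idle time-out, which defaults to 20 minutes; setting it to zero disables the idle shutdown. Here is a sketch of the applicationHost.config fragment, with a placeholder pool name (the same setting is reachable in IIS Manager under Application Pools > Advanced Settings > Idle Time-out):

    <!-- applicationHost.config sketch; "MyAppPool" is a placeholder. -->
    <system.applicationHost>
      <applicationPools>
        <add name="MyAppPool">
          <processModel idleTimeout="00:00:00" />
        </add>
      </applicationPools>
    </system.applicationHost>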
Is it bad? Depends on how good your code is. If you're not leaking memory or resources, probably not.
The other solution is to precompile your website. This might be the better option for you. You'll have to check it out and see, however, as it may come with a downside, depending on how you interact with your website.
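As a sketch of that option, the stock aspnet_compiler tool can precompile the site once at deploy time so the first request doesn't pay the compilation cost; the framework folder and paths below are placeholders (use the folder matching your .NET version):

    rem Precompile at deploy time; paths and site name are placeholders.
    %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe ^
        -v /MySite -p C:\source\MySite C:\deploy\MySite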
My understanding of ASP.NET was that IIS (7.0, in this case) compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
That is correct. Specifically, the assemblies are built as shadow copies (not to be confused with the volume snapshot service / shadow copy feature). This enables you to replace the code in the folder on the fly without affecting existing running sessions. ASP.NET will detect the change, and compile new versions into the target directory (typically Temporary ASP.NET Files). More on that process: Understanding ASP.NET Dynamic Compilation
If it's purely compilation time, then often the most efficient approach is to hit the website yourself after the recycle: make a call at regular intervals to ensure that it is you, not your client, who receives the 15-second delay.
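A minimal sketch of that regular self-call, written as a console app you could run from any always-on machine; the URL and interval are placeholders:

    // C# sketch of a keep-warm pinger; URL and interval are placeholders.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class KeepWarm
    {
        static async Task Main()
        {
            var client = new HttpClient();
            while (true)
            {
                try
                {
                    // Absorb the slow cold-start response ourselves so the
                    // next real visitor hits an already-warm application.
                    var response = await client.GetAsync("http://example.com/");
                    Console.WriteLine($"{DateTime.Now:T} -> {(int)response.StatusCode}");
                }
                catch (Exception ex)
                {
                    // Best-effort: log the failure and keep pinging.
                    Console.WriteLine($"{DateTime.Now:T} -> ping failed: {ex.Message}");
                }
                await Task.Delay(TimeSpan.FromMinutes(5));
            }
        }
    }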
I would be surprised if that is all compilation time, however (depending on hardware). Do you have a lot of static instances of classes? Do they do a lot of work on start-up?
With either tracing or profiling, you could probably work out quite quickly where the start-up time is being spent.
As to why keeping a process around is a bad idea, I believe it's due to clean-up. No matter how well we look after our data or how well behaved the GC is, restarting the process performs a thorough clean-up: things like fragmentation go away, and any other resource issues that build up over time are cleared down. That is why it is generally a bad idea to keep a server process running indefinitely.
I want to add a scheduled task to a client's ASP.NET app. These posts cover the idea well:
https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
What is the Best Practice to Kick-off Maintenance Process on ASP.NET
"Out of Band" Processing Techniques for asp.net applications
My question has two parts: First, will IIS unload the application if there isn't enough request activity despite the Cache activity? My client doesn't enjoy as much traffic as stackoverflow so they can't rely on user requests to keep the app 'active'. Obviously, I can't schedule tasks in an unloaded app.
Second, if so, is there a way to prevent IIS from unloading the app outside of configuration or external 'stay-alive' requests? My client's host doesn't allow much configuration tweaking and a stay-alive utility introduces the deployment complexity I'm trying to avoid with an ASP.NET Cache solution.
Thanks a bunch.
Edit/Conclusion: TheXenocide's solution is exactly correct given the question. However, I've decided it is a really bad question. The temptation to cut corners is always looming. I've regained my senses and told my client to use a website monitoring tool to keep the site active. In addition, the scheduled task is going in a Windows service despite the extra deployment hassle.
Unfortunately, outside of changing the timeout configuration (which I believe is possible in Web.config, though I don't know what is and isn't allowed on hosting providers, most of which use Medium Trust), I don't believe there is any way to keep the application from ending other than web requests. One thing you might try, which may be a little simpler than running a keep-alive service on a local machine, is to add some logic to Session_Start/Session_End that ensures there is always at least one session active; you can use the WebRequest class from within your application to call your own site, and it should still start a new session.
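A rough sketch of that idea in Global.asax.cs; the URL is a placeholder for your own site, and note that Session_End only fires when session state is InProc:

    // Global.asax.cs sketch; http://example.com/ is a placeholder.
    using System;
    using System.Net;

    public class Global : System.Web.HttpApplication
    {
        protected void Session_End(object sender, EventArgs e)
        {
            // Request our own site so a fresh session starts and IIS
            // keeps seeing request activity.
            try
            {
                var request = WebRequest.Create("http://example.com/");
                using (request.GetResponse()) { }
            }
            catch (WebException)
            {
                // Best-effort; ignore failures.
            }
        }
    }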
Good luck, and let us know what you do :)
UPDATE: These details now depend very much on which version of IIS and which version of .NET you're running. Newer versions of each have ways of configuring "always running" applications.
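For example, on IIS 8+ with the Application Initialization module installed, the app pool and application can be marked always-running and preloaded; a sketch of the applicationHost.config fragments, with placeholder names:

    <!-- applicationHost.config sketch; pool and site names are placeholders. -->
    <applicationPools>
      <add name="MyAppPool" startMode="AlwaysRunning" />
    </applicationPools>
    <sites>
      <site name="MySite">
        <application path="/" preloadEnabled="true" applicationPool="MyAppPool" />
      </site>
    </sites>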