I'm trying out the Windows Azure free trial - as I understand it, you can host up to 10 websites on the free account.
My question is: is there any way of hosting a website along with some kind of background processing or scheduled task on the free Azure account? I'm almost sure there isn't, because Web Roles support that, not Web Sites.
Is there any other alternative to host an ASP.NET MVC website with some kind of background processing on Azure for free? Everything would be purely for educational or personal purposes.
The free sites in Windows Azure Web Sites can technically run some background operations, because you can spin up a background thread in the application startup (a minimal sketch of this appears at the end of this answer); however, there are several issues with this approach:
Idle sites will be shut down. This means that if the site isn't seeing a lot of traffic, the process can be shut down. I'm not sure whether the background processing would keep it alive, or, even if it did, how reliably it would kick off. It will depend heavily on the type of background processing, I would think.
The web sites at the free level have CPU and memory quotas. Running something a lot in the background may cause you to hit these quotas more often than if the site were mostly idle. Hitting a quota will shut down the site until a specific time period has passed, so be very aware of these quotas if you are using the free or shared levels. If you were planning to have this background processing work a queue, for instance, this likely won't work out well.
You could use something like the free level of the Scheduler add-on in the Windows Azure Store to kick off some work, by having it call into your web site and asynchronously kick off the background work. This might work and avoid the CPU quota if the amount of background work is pretty small, completes quickly, and uses few resources. Note that there is also a scheduler available with Mobile Services. For educational purposes this may be just fine.
Don't forget that with the free trial you get $200 worth of Azure for 30 days. If you are really just trying things out, you can easily spin up full Cloud Services, VMs, or even shared web sites during the trial. If you shut them down when you're not actively working on them, the $200 can give you a decent amount of time to try things out.
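For illustration, here is a minimal sketch of the background-thread-at-startup approach mentioned above; the MVC route boilerplate and the DoBackgroundWork method are placeholders for your own logic, and anything started this way dies whenever the site gets unloaded:

```csharp
// Global.asax.cs - a minimal sketch, not the only way to do this.
// DoBackgroundWork and the 5-minute interval are placeholders; the thread
// only lives as long as the site itself stays loaded.
using System;
using System.Threading;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RouteTable.Routes.MapRoute("Default", "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });

        // Spin up a background thread at startup; an idle shutdown or a quota
        // hit will take it down along with the rest of the site.
        var worker = new Thread(DoBackgroundWork) { IsBackground = true };
        worker.Start();
    }

    private static void DoBackgroundWork()
    {
        while (true)
        {
            // ... poll a queue, send emails, etc. ...
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }
}
```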
I am sorry if the question is too vague; I am not supposed to copy any logs or other information from my workplace to the public. But here is the question:
In my organization, we have a CA APM team to monitor application performance. One of our applications, which is currently idle (no users as of now, since the official release is in the next couple of months), is showing w3wp: % Time in GC at 89, which is higher than the set threshold of 80. From the developers' perspective the code is not being executed, but CA APM says this is coming from our app pool, and the server is dedicated to our app alone. Can an idle ASP.NET application cause a problem like this? The infrastructure team simply pushes it onto the developers, and the developers are clueless because, from their perspective, their code is not executing. Any advice or insight into this topic is highly appreciated.
I know that this post is almost a month old, but I would still like to know the reason. My best two guesses are:
1. If there's completely no traffic going to the website and GC handles are still high, there might be a scheduled job that's leaking memory.
2. If not, there might be other apps on the same machine fighting over memory; the system forces GC on your idle app as well, and since your app is idle while GC is constantly being forced, the % Time in GC ratio ends up high.
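If it helps to cross-check what CA APM is reporting, you can sample the same counter yourself with a small console app. This is only a sketch, and the "w3wp" instance name is an assumption (it may be suffixed, e.g. "w3wp#1", if several worker processes are running):

```csharp
// Sketch: sample the ".NET CLR Memory \ % Time in GC" counter directly.
// The "w3wp" instance name is an assumption for your dedicated app pool process.
using System;
using System.Diagnostics;
using System.Threading;

class GcTimeCheck
{
    static void Main()
    {
        using (var gcTime = new PerformanceCounter(
            ".NET CLR Memory", "% Time in GC", "w3wp", readOnly: true))
        {
            gcTime.NextValue();                       // the first sample is always 0
            Thread.Sleep(TimeSpan.FromSeconds(10));   // let the counter accumulate
            Console.WriteLine("% Time in GC: {0:F1}", gcTime.NextValue());
        }
    }
}
```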
I have a web app that will run forever (at least for a few days) on my local machine using the technique (hack?) described in Jeff Atwood's post: https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
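Roughly, that technique boils down to adding a cache item with a removed-callback and re-adding it each time it expires; my version looks something like this sketch (the key name, the 30-second interval, and the DoWork placeholder are mine, not from the post):

```csharp
// Sketch of the cache-expiration background task hack from the linked post.
// The key name, 30-second interval, and DoWork body are placeholders.
using System;
using System.Web;
using System.Web.Caching;

public static class BackgroundTask
{
    private const string CacheKey = "background-task";

    public static void Start()
    {
        HttpRuntime.Cache.Add(
            CacheKey,
            new object(),
            null,                                 // no dependencies
            DateTime.UtcNow.AddSeconds(30),       // absolute expiration
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnCacheItemRemoved);
    }

    private static void OnCacheItemRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        DoWork();   // the periodic work: web service calls, etc.
        Start();    // re-add the item so the callback fires again
    }

    private static void DoWork()
    {
        // ... call out to the web services and log/swallow any errors ...
    }
}
```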
However, when I run it on App Harbor my app doesn't run for more than an hour or so (I'm not sure exactly when it dies). As long as I keep hitting the site it stays up, so I'm assuming it is being killed after an idle period, but I'm not sure why.
My app doesn't save any state or persist anything. It makes web service calls and survives errors in any calls.
I turned on a ping service to keep my app alive, but I'm curious why this works on my local machine and not on App Harbor.
The guys behind App Harbor pay for EC2 instances for all running apps, so they naturally want to limit CPU usage as much as possible. One way to achieve this is to shut down unused applications very quickly and only restart them when someone actually tries to access them. Paid hosting should not be limited in this way.
(As far as I have been informed, they are able to host around 100k sites on fewer than twenty medium instances, which is certainly quite impressive and calls for a very economical use of resources.)
To overcome the limitation you would need a cron job to ping your App Harbor site. But this is of course a somewhat recursive problem, since you would need App Harbor to act as the cron job ;)
AppHarbor recycles the application pool frequently to keep sleeping websites from using idle CPU time. This is simply the price you pay for using a shared website hosting plan.
If you really want to run a background job then you should be using AppHarbor's background workers, since this is exactly the type of task they were built to run.
http://support.appharbor.com/kb/getting-started/background-workers
Simply build a new console application that runs your logic and include it in your solution (a minimal example is sketched below). When you push the code, the workers will be started automatically. If you happen to already have other exes in your solution, make sure to edit the app.config and set the 'deploy background worker' value to false.
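For reference, the worker itself can be as simple as this sketch; the loop body and the 1-minute interval are placeholders for whatever logic you need to run:

```csharp
// Minimal background-worker console app to include in the solution.
// The work done in the loop and the interval are placeholders.
using System;
using System.Threading;

class Worker
{
    static void Main()
    {
        while (true)
        {
            Console.WriteLine("{0:u} doing background work...", DateTime.UtcNow);
            // ... your logic here: queue processing, web service calls, etc. ...
            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}
```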
Since this question is from a user's (developer's) perspective I figured it might fit better here than on Server Fault.
I'd like an ASP.NET hosting that meets the following criteria:
The application seemingly runs on a single server (so no need to worry about e.g. session state or even static variables)
There is an option to scale storage, memory, DB size and CPU-power up and down on demand, in an "unlimited" way
I researched, but there doesn't seem to be such a platform: one that completely abstracts the underlying architecture away and thus combines the ease of use of simple shared hosting with "unlimited" scalability.
"Single server" and "scalability" are mutually exclusive, I'm afraid. But a good load-balancer will apply affinity to requests so you don't need to needlessly double-cache data on multiple servers.
However, well-designed web applications are easy to port to a multiple-server scenario.
I think your best option is something like Windows Azure Web Sites (separate from Azure Web Workers), which run on a VM you don't have access to. The VM itself provides as much power as is necessary to run your website, so you don't need to worry about allocating extra CPU power or RAM.
Things like SQL Server are handled separately, but they are very cheap to run, and you can drag a slider to give yourself more storage space.
This can still be accomplished by using a cloud host like www.gearhost.com. Apps live in the cloud and by default get one worker node, so session stickiness is maintained. You can then scale that application to larger workers to accomplish what you need, all while maintaining HA and LB. Going even further, you can add multiple web workers; each visitor is tied to a particular node to maintain session state, even though you might have, for example, 10 workers. It's an easy and cheap way to scale a site from 100 visitors to many millions in just a few clicks.
We have a core Windows service hosting around 9 WCF services and acting as a client to another 3 WCF services. We have a front-end website that communicates with this Windows service through WCF.
At some point, the Windows service executes some heavy operations, which results in 100% CPU utilization, usually split 60-40 between the Windows service and SQL Server.
This is when the WCF connections/requests from the website time out, which results in a very unresponsive UI.
I am looking for a way to make sure any UI-related WCF calls get executed anyway and take the highest priority.
Our main problem is that we need to stick with this deployment scenario, where the windows service, the website and SQL server are all running on one machine. We are required to maintain a responsive UI even with a 100% CPU utilization. I am not sure where to start looking for a fix for that ...
It sounds like you should split your service endpoints across two separate hosts: one for high-volume or process-intensive operations, and one for low-latency operations (see the sketch at the end of this answer). The high-volume endpoint would process work from a queue offline, and the low-latency endpoint would handle requests synchronously from the UI.
The kind of problems you are having are typical of trying to balance the conflicting resource needs of high volume and low latency within the same process.
If you cannot scale out in this way then I can't really suggest much you can do about it and must apologize for not answering your question directly.
Another thing you could look at is making everything asynchronous and using a pattern such as CQRS to provide separation between your read and write requirements.
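To illustrate the endpoint split suggested above, here is a rough self-hosting sketch. The contract names, implementations, and net.tcp addresses are all made-up placeholders, and in practice each host would live in its own process (or machine) so the heavy work can't starve the UI-facing endpoint:

```csharp
// Sketch only: two separate hosts, one low-latency for UI calls, one for heavy work.
// IUiQueryService, IBatchService, and the addresses are hypothetical placeholders.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IUiQueryService
{
    [OperationContract]
    string GetStatus();
}

[ServiceContract]
public interface IBatchService
{
    [OperationContract]
    void Process(string workItem);
}

public class UiQueryService : IUiQueryService
{
    public string GetStatus() { return "ok"; }      // cheap, synchronous UI call
}

public class BatchService : IBatchService
{
    public void Process(string workItem) { /* heavy, queued work */ }
}

class Program
{
    static void Main()
    {
        // Low-latency host: answers the website's UI requests quickly.
        var uiHost = new ServiceHost(typeof(UiQueryService),
            new Uri("net.tcp://localhost:9001/ui"));
        uiHost.AddServiceEndpoint(typeof(IUiQueryService), new NetTcpBinding(), "");
        uiHost.Open();

        // High-volume host: processes expensive operations off a queue.
        var batchHost = new ServiceHost(typeof(BatchService),
            new Uri("net.tcp://localhost:9002/batch"));
        batchHost.AddServiceEndpoint(typeof(IBatchService), new NetTcpBinding(), "");
        batchHost.Open();

        Console.WriteLine("Hosts running; press Enter to stop.");
        Console.ReadLine();
        uiHost.Close();
        batchHost.Close();
    }
}
```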
I am not talking about application profilers or debuggers, but rather about managing applications in a production environment: essentially monitoring, identifying bottlenecks, and deploying fixes.
For monitoring that the application is up and running, we use Nagios.
We also use good old performance monitor for monitoring database connections, memory consumption and CPU usage.
We use IPMonitor to verify uptime, and it has a lot of options for pinging the site for keyword validation, HTTP response validation, and response time. You can also use SNMP to figure out responsiveness of the processor and RAM, and remaining size on hard disks, among many other options. It supports multiple servers and types of servers, not just website or database.
Additionally, we test basic uptime and response speed with AlertSite.
A 3rd party, Keynote, tests our sites to verify that they are navigable like a human would browse. They have scripts to mimic clicks and interactions.
We use Spotlight for SQL Server management, and also good old perfmon for granular problem fixing.
We recently purchased WildMetrix to monitor and troubleshoot performance issues in our ASP.NET applications. It's nice because you can easily aggregate IIS, ASP.NET, and SQL Server information into a single graph or dashboard, which lets you pinpoint possible trouble spots. We currently use it as our primary performance reporting and tracking tool, along with ELMAH for exception tracking.