ASP.NET on cloud server

My website is running quite well and serves lakhs (hundreds of thousands) of pages daily. We want to add one more web server to share the load at heavy-traffic times. Instead, can I go with Amazon Elastic Compute Cloud (EC2) or another cloud service as an alternative solution?
In a cloud server environment, do I need to add instances myself as traffic increases, or can a single instance scale with the traffic?

I would suggest Windows Azure.
You're right that you would have multiple instances that scale to meet traffic. On the Azure platform you design your application as Roles (chunks of functionality) and create multiple instances of the roles where that makes sense: a page that displays store contents should be highly scalable, whereas the login portion may not need to be. Windows Azure already runs Microsoft services such as Xbox Live and the BPOS offerings, and there are great tools for developing against the Azure cloud. You can read more about cloud development at MSDN.
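For illustration, scaling in this model is mostly a matter of raising a role's instance count in the service configuration (.cscfg). A minimal sketch, with hypothetical role names:

    <!-- ServiceConfiguration.Cloud.cscfg: role names are made up for illustration -->
    <ServiceConfiguration serviceName="MyStore"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <!-- The highly scalable store-contents pages get several load-balanced instances -->
      <Role name="StoreFrontWeb">
        <Instances count="4" />
      </Role>
      <!-- The login role sees less traffic, so fewer instances suffice -->
      <Role name="LoginWeb">
        <Instances count="1" />
      </Role>
    </ServiceConfiguration>

The count can also be changed on a live deployment from the portal or the management API, which is how you would absorb heavy-traffic periods.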

Related

Wordpress site extremely slow after migrating to Azure App Service and Azure Database for MySQL Flexible Server

I was previously running both my WordPress application and the MySQL database server inside the same Linux virtual machine on Azure. I recently migrated them to Azure App Service and Azure Database for MySQL Flexible Server respectively, in the same region (East US). Unfortunately, this has really slowed down the application: average page load times have increased from 1 s to 11 s. I serve all static files from a CDN, but to no avail. Checking the network waterfall, the scripts blocking the page are calls to admin-ajax.php. Increasing the compute of both services to a ridiculous size (there is no traffic right now) only improves load time to 6 s. Since both services are in the same region, I do not believe there can be such significant network latency between the app and the database. What additional steps can I take to troubleshoot the issue?
If you isolate the slow endpoints and the slowness is due to the database, I suggest configuring VNet integration for the App Service and enabling the Microsoft.Sql service endpoint on the subnet the app is integrated with; this rules out limits on socket counts and some network latency, and you should observe a performance gain. In parallel, check the SQL execution time, either by profiling the queries or by using the Performance recommendations feature.

Porting C++ server to cloud

I have multiple traditional servers, and thousands of users connect to them. My server software is written in C++, listens for these users on a TCP socket, and uses a protocol I defined myself (on top of TCP). The server code is written so that it can handle client-to-client communication (e.g. instant messaging) no matter which client is connected to which server machine. It's a typical traditional server-farm scenario.
Now that I want to move this to the cloud, what changes do I need to make? I am new to the cloud, and all I know is that the provider gives us APIs to communicate with cloud instances/databases, and we no longer need to worry about the actual server instances running behind them (load balancing etc. is all taken care of by the cloud infrastructure).
Can a single cloud instance handle thousands (or even millions) of connections?
My server code is written in C++; when I move to the cloud, will it become obsolete? Do I need to develop my server from scratch using the cloud APIs?
What you have is an application currently running on your in-house hardware. With the cloud, the hardware and OS infrastructure are provided by the cloud provider. You take your application to the cloud and run it as-is (almost). If, for example, you currently run your application on CentOS 7, you can create a CentOS 7 instance in the cloud, and your C++ application should run without issues. The cloud provider "facilitates" with its APIs; it does not force you to rewrite the application against them. So there is no need to develop from scratch.
Can a single cloud instance handle thousands (or even millions) of connections?
That depends on the dimensions (processor, memory, network throughput, etc.) of the instance that you choose from the cloud provider.

Geo-Replication in Azure App Service

I have an App Service hosted in Windows Azure in one region. When there are issues with the Azure servers in that region, the App Service goes down and users are unable to see the website.
I would like to know if there is a way to geo-replicate the App Service so that, if the servers are down in one region, traffic is automatically redirected to a server in a different region?
You can geo-replicate your App Service by using the Azure Traffic Manager service, which allows you to control the distribution of user traffic to your service endpoints running in different datacenters around the world.
As of today, Azure Traffic Manager provides three routing methods: Priority, Weighted, and Performance. For what you're looking to accomplish, I believe you would want the Priority routing method.
To learn more about how you can make use of this service to make your app service highly available, please see this link: https://azure.microsoft.com/en-us/documentation/articles/app-service-app-service-environment-geo-distributed-scale/.
This is an old entry but I thought I'd chime in after working with Azure for a few years.
If your statement "When there are some issues with Azure servers in the hosted region" refers to transient outages, what you might be experiencing is your App Service Plan (ASP) instance being moved. Microsoft regularly moves ASP instances to new machines for reasons that make sense to them, likely to load-balance hardware or to apply patches to the underlying VMs that host app services.
It has been my experience that when an ASP instance is moved, the new instance needs time to warm up the app services hosted on it. If your ASP is configured with only one instance, your app service will be unreachable during this time.
If, on the other hand, you configure your ASP with a minimum of two instances, Microsoft will synchronize the moves so that at least one instance remains up and available while the other is being moved.
Of course, running a multi-instance ASP requires your application either to be stateless or to use a session provider other than the default .NET "in memory" one. CosmosDB, for instance.
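For reference, swapping the session provider is a web.config change. A minimal sketch, assuming a hypothetical provider type (the real CosmosDB provider ships as a NuGet package whose exact type name may differ):

    <system.web>
      <!-- Replace the default in-memory ("InProc") session state with a custom provider -->
      <sessionState mode="Custom" customProvider="DistributedSessionProvider">
        <providers>
          <!-- Name and type below are placeholders for whichever provider you install -->
          <add name="DistributedSessionProvider"
               type="Example.Hypothetical.CosmosDbSessionStateProvider" />
        </providers>
      </sessionState>
    </system.web>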

My Azure Website has an odd "HTTP success" pattern in the (Monitor) portal

I have a website hosted in Azure Websites as a Basic tier website.
I'm currently in the development stage, yet the site is live and accessible to the outside world (at least at a basic level), so I wanted to better understand the monitoring features in the Azure management portal.
When I look at the monitoring tab in the portal, I see an odd pattern for HTTP successes. Over the past 60 minutes (during which I personally have not been active on the site), the HTTP successes are very cyclic: 80 connections, then 0, then 40, then 0, then repeat.
Does anyone have any pointers on how I can figure out what the 80 and 40 connections are? I certainly don't have any timed events in my code, so no calls should be made unless a person is actually hitting the site.
UPDATE:
I set up a staging server and blocked all incoming traffic except from my own IP. So it's the same code running, just without access from the outside world. There, HTTP successes appear only when I hit the server myself (as expected). This suggests that my site is being hit by an outside bot, maybe? Does anyone know how to protect against this, or at least how to diagnose whether the requests are legitimate?
I'd say it's this setting that causes the traffic:
Always On. By default, websites are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the site loaded all the time. If your site runs continuous web jobs, you should enable Always On, or the web jobs may not run reliably.
http://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
It's just a keep-alive to avoid cold starts every time you or someone else visits your site.
Here's another reference that describes this behavior:
What the always-on feature does is simply ping your site every now and then, to keep the application pool up and running.
And Scott Gu says:
One of the other useful Web Site features that we are introducing today is a feature we call "Always On". When Always On is enabled on a site, Windows Azure will automatically ping your Web Site regularly to ensure that the Web Site is always active and in a warm/running state. This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).
About the traffic in general: first of all, the requests could really only come from Microsoft, since any traffic pattern like this would quickly be detected and blocked automatically on Azure Websites; you cannot set up a keep-alive like this yourself. Second, no modern bot would ping a specific page with that kind of regularity, since it is all too obvious. Any modern datacenter security appliance would catch that kind of traffic and block/ignore/null-route it.
As for your question regarding protection and security: Microsoft cannot protect your code from yourself. However, everything at the perimeter is managed and handled by Microsoft. That's one of the unique selling points of Azure: firewall, load balancing, anti-spoofing, anti-bot and DDoS protection, etc. There will of course always be security concerns around any publicly exposed service, but you can stay focused on your application while Microsoft manages the rest.
When running Azure Websites, you're in Microsoft's hands regarding security outside your application's scope. That's a great thing, but if you really want to apply other security measures, you'll have to set up a virtual machine instead and run your site from there.
You may want to first understand what these requests are. Enable web server logging for the website in the Azure management portal and download the IIS logs for your website after seeing this pattern. Then check the URL, client IP address, and user-agent fields to identify whether the requests really come from search bots. Based on what you observe, you can either block some IPs statically, use dynamic IP restrictions (sketched below), or configure URL Rewrite to block requests with specific patterns in the request or the request headers.
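As a sketch, the dynamic IP restrictions option can be expressed in web.config, assuming the module is available to your site (threshold values here are arbitrary examples):

    <system.webServer>
      <security>
        <!-- Deny clients that hold too many concurrent connections or
             make too many requests in a short window -->
        <dynamicIpSecurity>
          <denyByConcurrentRequests enabled="true" maxConcurrentRequests="20" />
          <denyByRequestRate enabled="true" maxRequests="100"
                             requestIntervalInMilliseconds="5000" />
        </dynamicIpSecurity>
      </security>
    </system.webServer>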
EDIT
This is how you can block search bots - http://moz.com/ugc/blocking-bots-based-on-useragent
You can configure URL Rewrite locally on an IIS server in the way described in the above article and then copy the generated configuration into your web.config, or connect to the Azure website directly using IIS Manager as described in http://azure.microsoft.com/blog/2014/02/28/remote-administration-of-windows-azure-websites-using-iis-manager/ and configure the rewrite rule there; a sketch of such a rule follows.
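A minimal sketch of such a rewrite rule, blocking by user-agent substring (the bot names in the pattern are placeholders; take real ones from your logs):

    <system.webServer>
      <rewrite>
        <rules>
          <!-- Abort any request whose User-Agent matches the listed bots -->
          <rule name="BlockBadBots" stopProcessing="true">
            <match url=".*" />
            <conditions>
              <add input="{HTTP_USER_AGENT}" pattern="BadBot|EvilScraper" />
            </conditions>
            <action type="AbortRequest" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>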

Azure Cloud Services vs VMs for Existing Asp.Net website

I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing ASP.NET website that links to a SQL Server database.
The database has CLR user-defined types, hence it can only be hosted in an Azure VM, since the SQL Database service used with Cloud Services doesn't support those types.
We initially wanted to use a VM for the database and a cloud service for the front-end, but then some issues arose:
We use StateServer for storing session state, but Azure doesn't support it. We would need to configure Table storage, SQL Database, or a worker role dedicated to state management instead (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other two options are preferable, but they introduce cost or app-reconfiguration disadvantages (a sketch of the SQL option follows this list).
We use SimpleMembership for user management. We would need to migrate the membership tables from SQL Server on our VM to Azure SQL Database. This is an inconvenience, as we want to keep all our tables in the same database, and splitting the two may require code changes.
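For context, the SQL-backed session option mentioned above is a small web.config change; a sketch, assuming the ASP.NET session schema has already been provisioned in the target database (the connection string is a placeholder):

    <system.web>
      <!-- StateServer has no Azure equivalent; SQL-backed session state is the
           closest drop-in replacement -->
      <sessionState mode="SQLServer"
                    sqlConnectionString="Server=tcp:example.database.windows.net;Database=Sessions;..."
                    timeout="20" />
    </system.web>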
We are looking for a quick solution to get this app live as soon as possible, and at a manageable cost. We are desperately trying to avoid refactoring our code just to accommodate hosting part of the app in Azure Cloud Services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for SQL Server) and a Cloud Service instance (for the front-end)?
It seems to me that every added "background process" in a Cloud Service requires a new worker role. For example, if we wanted to enable SMTP for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR types etc., you'll need to run SQL Server in a Virtual Machine.
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless: it is very easy to scale out/in without worrying about OS setup, and app setup is done through startup scripts at boot. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However, if you have complex installations to perform that take a while (or require manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out and then create a master image from it. You'll still have OS and app maintenance to contend with, just as you would in an on-premises environment.
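For illustration, the startup scripts mentioned above are declared in the service definition (.csdef); a minimal fragment, where setup.cmd is a hypothetical script included in the deployment package:

    <!-- ServiceDefinition.csdef fragment (role name and script are placeholders) -->
    <WebRole name="StoreFrontWeb">
      <Startup>
        <!-- Runs elevated before the role instance starts taking traffic -->
        <Task commandLine="setup.cmd" executionContext="elevated" taskType="simple" />
      </Startup>
    </WebRole>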
Let me correct you on your third bullet, regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VMs with some scaffolding code for startup and process monitoring. You don't need a separate role for each background process. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grained level.
Some things to consider...
If you want to be cheap, you can have your web and worker roles share the same code on a single machine by adding a RoleEntryPoint to the web project. Here is a post that shows how to do exactly what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
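The gist of that post: add a class deriving from RoleEntryPoint to the web project itself and put the background work in Run(), so the same web role instances also do the worker-style processing. A minimal sketch (the email polling is a placeholder):

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Lives inside the web project; Azure discovers it and calls Run(),
    // so no separate worker role (and no extra cost) is needed.
    public class WebRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                // Placeholder: poll a queue and send any pending email here.
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }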
Session management is painfully slow in SQL Azure; I would use the Azure Cache if you can, as it is fast.
SQL Server in a VM is going to cause problems for you, because you will also need to create a virtual network between it and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM, they communicate over the PUBLIC LOAD BALANCER, causing a potential security concern and network latency. So first you need to put them on a virtual network (an extra cost), and then you also need to host a DNS server to resolve the SQL Server VM's name. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct, contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE, speaking about cross-cloud-service communication (VM -> web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013), Azure VMs and worker/web roles are deployed into DIFFERENT "cloud services". Therefore, communication between them needs to be secured via a virtual network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That can be encrypted, but no sane architect, enterprise governance, or security governance would "allow" this scenario in a production environment.
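For completeness, wiring a classic cloud service into a virtual network is done in the service configuration (.cscfg); a sketch with hypothetical network and role names:

    <!-- ServiceConfiguration.cscfg fragment: VNet, subnet and role names are placeholders -->
    <NetworkConfiguration>
      <VirtualNetworkSite name="MyVNet" />
      <AddressAssignments>
        <!-- Put the web role's instances in a subnet that can reach the SQL Server VM -->
        <InstanceAddress roleName="StoreFrontWeb">
          <Subnets>
            <Subnet name="FrontEndSubnet" />
          </Subnets>
        </InstanceAddress>
      </AddressAssignments>
    </NetworkConfiguration>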
