Porting C++ server to cloud - TCP

I have multiple traditional servers, and thousands of users connect to them. My server software is written in C++, listening for these users on a TCP socket, and I've defined my own protocol (on top of TCP). The server code is written such that it can handle client-to-client communication (e.g., instant messaging) no matter which client is connected to which server machine. It's a typical traditional server-farm scenario.
Now that I want to move this to the cloud, what changes do I need to make? I am new to the cloud, and all I know is that a cloud provider gives us APIs to communicate with cloud instances/DBs, and we no longer need to worry about the actual server instances running behind them (load balancing, etc. is all taken care of by the cloud infrastructure).
Can a single cloud instance handle thousands (or, say, millions) of connections?
My server code is written in C++; when I move to the cloud, will it become obsolete? Do I need to develop my server from scratch using cloud APIs?

My server code is written in C++; when I move to the cloud, will it become obsolete? Do I need to develop my server from scratch using cloud APIs?
What you have is an application currently running on your in-house hardware. With the cloud, the hardware and OS infrastructure are provided by the cloud provider. You take your application to the cloud and run it as-is (almost). If, for example, you currently run your application on CentOS 7, you can create a CentOS 7 instance in the cloud, and your C++ application should run without issues. The cloud provider "facilitates" with their APIs; they do not force you to rewrite your application against those APIs. So, there is no need to develop from scratch.
Can a single cloud instance handle thousands (or, say, millions) of connections?
That depends on the dimensions (processor, memory, network throughput, etc.) of the instance that you choose from the cloud provider.
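To make the connection-count question concrete: the limiting factor is usually the server design, not the cloud itself. A single instance can multiplex a very large number of TCP connections if it uses a non-blocking event loop instead of a thread per client. Here is a minimal sketch of that pattern using epoll on Linux; the port number and the echo behavior are illustrative assumptions, and error handling is omitted for brevity:

    // Minimal epoll-based TCP event loop (Linux) - illustrative sketch.
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);  // illustrative port, not from the question
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, SOMAXCONN);

        int ep = epoll_create1(0);
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = listener;
        epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

        epoll_event events[1024];
        for (;;) {
            int n = epoll_wait(ep, events, 1024, -1);
            for (int i = 0; i < n; ++i) {
                int fd = events[i].data.fd;
                if (fd == listener) {
                    // New connection: register it with the same epoll set.
                    int client = accept(listener, nullptr, nullptr);
                    epoll_event cev{};
                    cev.events = EPOLLIN;
                    cev.data.fd = client;
                    epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                } else {
                    // Data from an existing client: echo it back (placeholder
                    // for your own protocol handling).
                    char buf[4096];
                    ssize_t len = read(fd, buf, sizeof(buf));
                    if (len <= 0) { close(fd); continue; }  // peer closed
                    write(fd, buf, len);
                }
            }
        }
    }

Libraries such as Boost.Asio or libevent wrap this same pattern. Past a point, OS limits (e.g., the open-file-descriptor ceiling) and the instance's memory and network throughput become the constraints, which is what "dimensions" above refers to.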

Related

Would I benefit using Cloud run instead of Cloud Functions? Where does it fit in GCP?

I use Cloud Functions for most of my backend requirements. What additional benefit does Cloud Run provide to an existing Cloud Functions user? Both are managed, have autoscaling, handle HTTP, and run in GCP.
Where would Cloud Run fit in Google Cloud Platform?
Reference: GCP explained - Medium
Cloud Functions server instances handle requests in serial, and this is not configurable. Cloud Run instances handle requests in parallel, and the level of parallelism per instance is configurable. This can potentially save you money, if you understand how best to configure a server instance, given the performance characteristics of the code you deploy.
Cloud Functions requires you to choose from among provided language and runtime configurations that are not configurable. Cloud Run lets you run any type of backend configuration you want, assuming that it simply exposes an HTTP endpoint on port 8080.
Cloud Functions provides those selected language and runtime configurations without requiring that you do anything other than deploy code that targets one of those configurations. Cloud Run requires that you supply a Docker configuration that establishes the runtime environment (which is more work).
Cloud Functions lets you establish triggers on a wide variety of events that can come from a variety of Cloud and Firebase products. Cloud Run (currently) can be triggered via HTTP requests, PubSub push and a narrow selection of Cloud products (such as Cloud Scheduler and Cloud Tasks).
Cloud Functions requires that you run your code only within the provided managed environments. Cloud Run allows you to take your Docker configuration and run it anywhere Docker is supported, including GKE, where you gain more control over the server instances.
Google Cloud Run fits into your Serverless layer but as a container. The container infrastructure is managed for you.
Cloud Functions are limited in respect to the libraries, languages, and runtimes supported.
Cloud Run removes those limitations. You can use any language, combination of libraries and runtime that supports running within a container.
One limitation is that there is only one internal port, $PORT, which defaults to 8080 today. Externally, both HTTP and HTTPS are supported, and both map to $PORT.
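To illustrate how small that contract is, here is a sketch of a container entrypoint in C++ that satisfies it: read PORT from the environment (falling back to 8080) and answer HTTP on that port. The fixed response body is a made-up placeholder; a real service would use a proper HTTP library rather than raw sockets:

    // Minimal responder honoring Cloud Run's $PORT contract - sketch only.
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdlib>
    #include <string>

    int main() {
        const char* env = std::getenv("PORT");   // Cloud Run injects PORT
        int port = env ? std::atoi(env) : 8080;  // default matches today's 8080

        int listener = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(static_cast<uint16_t>(port));
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, 64);

        const std::string body = "Hello from Cloud Run\n";  // placeholder body
        const std::string resp =
            "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: " +
            std::to_string(body.size()) + "\r\nConnection: close\r\n\r\n" + body;

        for (;;) {
            int client = accept(listener, nullptr, nullptr);
            char buf[4096];
            (void)read(client, buf, sizeof(buf));  // naively drain the request
            write(client, resp.data(), resp.size());
            close(client);
        }
    }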
One big plus is that Cloud Run supports custom DNS names and custom SSL certificates. You can host your website on Cloud Run. As an experiment, I set up WordPress on Cloud Run, backed by Cloud SQL, and assigned it a DNS domain name with an SSL certificate.

What is more suitable: A windows service or WCF service?

I am creating a web app. I want to create a listening service (TCP) that listens continuously and updates the web page accordingly.
A Windows service or a WCF service?
In the end, I just want a background service that listens on a socket continuously and updates data in a database. When the database is updated, I will use SignalR to show that on my page.
Right now I am trying WCF, but I am wondering if it can be done with a Windows service as well. For now this application will run on a LAN, but in the future it could also be in the cloud.
First of all, it is important to understand that a Windows service and a WCF service are not the same.
A Windows service is a specialized executable that runs in the background on Windows.
A WCF service is a specialized piece of code that exposes some functionality through a well-defined endpoint. It does not run on its own, but instead must be hosted by some parent process, like IIS, a desktop application, or even a Windows service.
In thinking about the problem you've described, I suppose the most fundamental question to ask is whether or not you have control over the data that will be received via the TCP connection. WCF is built on the notion of the ABCs (Address, Binding, and Contract), all of which have to match in order to facilitate data exchange between WCF endpoints. For example, if you wish to expose a WCF endpoint via IIS that accepts TCP connections from some remote WCF endpoint, the remote WCF endpoint needs to send data to your IIS-hosted WCF endpoint using the agreed-upon data contract. Absent that, WCF will not work. So, if you cannot define the data contract to be used between WCF endpoints, then you'll need to find another option. An option that will work is to open a TCP listener within a Windows service, process the data as it is received, update your database, and listen for more data.
================================================
By way of example, I work on a project that has a front-end desktop application that communicates with a back-end Windows service. We build both the application and the Windows service, so we have full control over the data exchange between the two processes. At one point in time, we used WCF as the mechanism for data exchange. The Windows service would host a WCF service that exposed a NetNamedPipeBinding, which we later changed to NetTcpBinding to get around some system administration issues. The application would then create its own endpoint to communicate with the WCF service being hosted within the Windows service.
This worked fine.
As our system got more mature, we needed to start sending more and more information from the Windows service to the application. If I recall correctly, I believe we experimented with streaming within WCF and concluded that the overhead was not something we could tolerate. So, we used WCF to exchange commands and status information between the application and the Windows service, but we simultaneously used a TCP socket connection to stream the data from the Windows service to the application.
This worked fine.
When we got a chance to update the Windows service software, we decided that it would be better to have a single communication mechanism between the Windows service and the application. So, we replaced WCF altogether with a TCP socket connection that uses a homegrown messaging protocol to exchange information in both directions - application to Windows service and Windows service to application.
This works fine and is the approach we've used for a couple of years now.
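For anyone wondering what a "homegrown messaging protocol" over TCP amounts to, the essential ingredient is message framing, since TCP itself is just a byte stream. Below is a minimal sketch in portable C++ with BSD sockets; the 4-byte big-endian length prefix is my own illustrative choice, not the actual protocol from the project described above:

    // Length-prefixed message framing over a connected TCP socket - sketch.
    #include <arpa/inet.h>   // htonl/ntohl
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdint>
    #include <string>

    // Send one message: 4-byte big-endian length header, then the payload.
    // (Sketch assumption: send() writes fully; production code would loop.)
    bool send_message(int sock, const std::string& payload) {
        uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
        if (send(sock, &len, sizeof(len), 0) != static_cast<ssize_t>(sizeof(len)))
            return false;
        return send(sock, payload.data(), payload.size(), 0) ==
               static_cast<ssize_t>(payload.size());
    }

    // Keep reading until exactly n bytes arrive; a single recv() may
    // return only part of a message because TCP has no message boundaries.
    static bool recv_all(int sock, void* buf, size_t n) {
        char* p = static_cast<char*>(buf);
        while (n > 0) {
            ssize_t got = recv(sock, p, n, 0);
            if (got <= 0) return false;  // error or peer closed
            p += got;
            n -= static_cast<size_t>(got);
        }
        return true;
    }

    // Receive one message framed as above.
    bool recv_message(int sock, std::string& out) {
        uint32_t len_be = 0;
        if (!recv_all(sock, &len_be, sizeof(len_be))) return false;
        uint32_t len = ntohl(len_be);
        out.resize(len);
        return len == 0 || recv_all(sock, &out[0], len);
    }

Each side can call send_message and recv_message on the same connected socket, which gives you the two-way command-and-data channel described above.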
HTH

Azure: How to connect one cloud service with other in one virtual network

I want to deploy a backend WCF service in a WebRole in Cloud Service 1, with only an Internal Endpoint.
And deploy an ASP.NET MVC frontend in a WebRole in Cloud Service 2.
Is it possible to use an Azure Virtual Network to call the backend from the frontend via the Internal Endpoint?
UPDATED: I am just trying to build a simple SOA architecture like this:
Yes and No.
An internal endpoint essentially means that the role instance has been configured to accept traffic on a given port, but that port can NOT receive traffic from outside of the cloud service (hence it being "internal" to the cloud service). Internal endpoints are also not load balanced so you're going to need to "juggle" traffic management from the callers yourself.
Now here is where the issues arise: a virtual network allows you to securely traverse cloud service boundaries, letting a role instance in cloud service 1 call a role instance in cloud service 2. However, to do this, the calling role instance needs to know how to address the receiving instance. If they were in the same cloud service, you could crawl the cloud service topology via the RoleEnvironment class. But this class only works within the cloud service it exists in; it is not aware of the virtual network.
Now, you could have the receiving role instance publish its FQDN to a shared location (say, Azure Table storage). However, a cloud service will only use its own internal DNS resolution (which only resolves short names within the same cloud service) unless you have configured the virtual network with a self-hosted DNS server.
So yes, you can do what you're trying to accomplish, but it does present some challenges. Given this, I'd have to ask whether the convenience of separate deployment is enough to justify the additional complexity of the solution. If so, I'd also look to see whether there's perhaps a better way to interconnect the two services than direct calls (such as a queue-based pattern).
@BrentDaCodeMonkey makes some very valid points in his answer, so read that first.
I, personally, would not want to give up automatic discovery and scale via load balancing. My suggestion would be that you expose the WCF endpoint via an Azure Service Bus Relay endpoint. This will give you a "fixed" endpoint with which to communicate (solving the discovery issue) and infinite scalability because multiple servers can register and listen on the same Service Bus relay address. Additionally it introduces some basic security into the mix via shared key authentication when your web application(s) connect to your WCF services.
If you co-locate the Service Bus instance with your Cloud Services the overhead of the relay in the middle is totally negligible and, IMHO, worth it for the benefits explained above.

Azure Cloud Services vs VMs for Existing Asp.Net website

I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing ASP.NET website that links to a SQL Server database.
The database has CLR user-defined types, so it can only be hosted in an Azure VM, since Azure SQL Database doesn't support those types.
We initially wanted to use a VM for the database and a Cloud Service for the front-end, but then some issues arose:
We use StateServer for storing session state, but Azure doesn't support that. We would need to configure either Table storage, SQL Databases, or a worker role dedicated to state management (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other two options are preferable, but they introduce cost or app-reconfiguration disadvantages.
We use SimpleMembership for user management. We would need to migrate the membership tables from the SQL Server on our VM to Azure's SQL Databases. This is an inconvenience, as we want to keep all our tables in the same database, and splitting the two up may require making some code changes.
We are looking for a quick solution to have this app live as soon as possible, and at manageable cost. We are desperately trying to avoid re-factoring our code just to accommodate hosting part of the app in Azure Cloud services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for sql server) and a Cloud Service instance (for the front-end)?
It seems to me every added "background process" to a Cloud Service will require a new worker role. For example, if we wanted to enable smtp for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR etc, you'll need to run SQL Server in a Virtual Machine.
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless - very easy to scale out/in without worrying about OS setup. And app setup is done through startup scripts upon bootup. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However: If you have any type of complex installations to perform that take a while (or manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out, and then create a master image from that VM. You'll still have OS and app maintenance issues to contend with, just as you would in an on-premises environment.
Let me correct you on your 3rd bullet regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VM's with some scaffolding code for startup and process monitoring. You don't need a separate role for each. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grain level.
Some things to consider...
If you want to be cheap, you can have your web and worker roles share the same code on a single machine by adding a RoleEntryPoint. Here is a post that actually shows how to do what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
Session management is painfully slow in SQL Azure DB; I would use Azure Cache if you can, as it is fast.
SQL Server in a VM is going to cause problems for you, because you will also need to create a virtual network between it and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM, they communicate over the PUBLIC LOAD BALANCER, causing a potential security concern and network latency. So, first you need to put them on a virtual network (that is an extra cost), and then you also need to host a DNS server to address the SQL Server VM. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE speaking about cross Cloud Service communication (VM->web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013) Azure VMs and Worker/Web Roles are deployed into DIFFERENT "Cloud Services". Therefore communication between them needs to be secured via a Virtual Network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That can be encrypted, but no sane architect or enterprise/security governance would "allow" this scenario in a production environment.

ASP.NET on cloud server

My website is running quite well, serving lakhs (hundreds of thousands) of pages on a daily basis. We want to add one more web server to share the load at heavy-traffic times. Instead, can I go with Amazon Elastic Compute Cloud (EC2) or another cloud service as an alternative solution?
In a cloud server environment, do I need to add multiple instances as traffic increases, or can a single instance scale based on the traffic?
I would suggest Windows Azure.
You're right in that you would have multiple instances that scale to meet traffic. With the Azure platform, you design your application into Roles (chunks of functionality) and create multiples of them where that makes sense; for example, a page that displays store contents should be highly scalable, whereas the login portion may not need to be. Windows Azure already runs services for Microsoft like Xbox Live and their BPOS offerings, plus there are great tools for developing for the Azure cloud. You can read more about cloud development at MSDN.
