I'm wondering if anyone with bigger brains has tackled this.
I have an application where each customer has a separate web app in Azure. It is an ASP.NET MVC app with a separate virtual directory that houses ServiceStack. The MVC part isn't really used; the app is 99% powered by ServiceStack.
The architecture works fine, but as we get more customers, we have to manage more and more Azure web apps. Whilst we can live with this, the world of containers is upon us, and now that ServiceStack supports .NET Core, I have a utopian view of deploying hundreds of containers, where each request for any of my "tenants" can go to any container and be served as needed.
I think I have worked out most of how to refactor all elements, but there's one architectural bit that I can't quite work out.
It's a reasonably common requirement for a customer of ours to "try" a new feature or version before other customers, as they are helping develop the feature. In a world of lots of containers on multiple VMs, fronted by an nginx container (or something else?) on each VM, how can you control the routing of requests to specific versioned containers in a way that doesn't require the nginx container to be redeployed (or any downtime) when the routing needs changing? For example, can nginx route requests based on config held in Redis?
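Something along these lines is what I'm picturing, purely as a sketch; it assumes OpenResty (nginx with the Lua module and lua-resty-redis) rather than stock nginx, and the container names, Redis host and key scheme are all made up:

    # pick the upstream for each request from a Redis lookup, falling back to the stable build
    server {
        listen 80;
        resolver 127.0.0.11;                     # Docker's embedded DNS, needed for a variable proxy_pass

        location / {
            set $upstream "";

            access_by_lua_block {
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(1000)

                local ok, err = red:connect("redis", 6379)
                if not ok then
                    ngx.exit(ngx.HTTP_BAD_GATEWAY)
                end

                -- e.g. SET tenant:customer1.example.com app-v2:5000 to move one tenant to a pilot build
                local target = red:get("tenant:" .. ngx.var.host)
                if not target or target == ngx.null then
                    target = "app-stable:5000"
                end

                red:set_keepalive(10000, 100)
                ngx.var.upstream = target
            }

            proxy_pass http://$upstream;
            proxy_set_header Host $host;
        }
    }

Changing a tenant's routing would then just be a Redis write; nothing in the proxy container would need redeploying.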
Any advice/pointers much appreciated.
G
Whilst it isn't Azure-specific, we've published a step-by-step guide to publishing ServiceStack .NET Core Docker Apps to Amazon EC2 Container Service, which includes no-touch nginx virtual host management by running an instance of the jwilder/nginx-proxy Docker App to automatically generate new nginx virtual hosts for newly deployed .NET Core Docker Apps.
The jwilder/nginx-proxy isn't AWS-specific and should work for any Docker solution; its introductory blog post explains how it works.
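For reference, the basic pattern (image name and hostname below are just placeholders) is to run the proxy with the Docker socket mounted, then give each app container a VIRTUAL_HOST environment variable; the proxy regenerates its virtual hosts as containers come and go:

    # run the proxy once per host, watching the Docker socket
    docker run -d -p 80:80 \
        -v /var/run/docker.sock:/tmp/docker.sock:ro \
        jwilder/nginx-proxy

    # each app container only declares the host it should answer on
    docker run -d -e VIRTUAL_HOST=tenant1.example.com my/servicestack-app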
Using nginx-proxy is a nice vendor-neutral solution for hosting multiple Docker instances behind the same nginx reverse proxy, but for scaling your Docker instances you'll want to use the orchestration features of your preferred cloud provider; e.g. in AWS you can scale the number of compute instances in your ECS cluster or utilize Auto Scaling, where AWS automatically scales instances based on usage metrics.
Azure's solution for managing Docker instances is Azure Container Service, which lets you scale the instance count using the Azure acs command-line tool.
Our company is working on the same thing. We were working with Kubernetes and building our own reverse proxy with Node.js. This reverse proxy would read customer settings from a cache and redirect you to the right environment.
But depending on the architecture, I would advise just having two environments running, each with its own relative URL: one for production and one for the pilot/test environment. Whenever a customer goes to the pilot environment URL, they use the same database but an upgraded version of the web app.
Of course, this will not work if you are using an ORM and database migrations are involved (which is probably the case when you are using ServiceStack).
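As a rough sketch of the two-environment idea (hostnames, container names and ports are placeholders), the proxy just maps a pilot hostname to the upgraded build while both point at the same database:

    # production and pilot builds of the same app, same database behind both
    upstream app_prod  { server app-v1:5000; }
    upstream app_pilot { server app-v2:5000; }

    server {
        server_name customer1.example.com;
        location / { proxy_pass http://app_prod; }
    }

    server {
        server_name pilot.customer1.example.com;
        location / { proxy_pass http://app_pilot; }
    }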
Related
I am using GitLab CI/CD to deploy my app to Google App Engine. I already have a PHP instance working properly, but when I try to build the WordPress image using docker-compose, nothing happens.
These are my files:
I have a folder "web" with a file ping.php: https://site-dot-standalone-applications.appspot.com/ping.php
So the application is running in the /web folder.
WordPress should be deployed into the /web folder after:
docker-compose up
UPDATE
I just needed to use the following gitlab-ci.yaml:
Unfortunately, you cannot (easily) deploy containers to App Engine Flex this way.
At its simplest, App Engine Flex is a service that combines a load-balancer, an auto-scaler and your docker image. Your image when run as a container is expected to provide an HTTP/S endpoint on port 8080.
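For context, a Flex service built from your own Dockerfile is declared with a minimal app.yaml along these lines (illustrative only), and the container it builds must listen on port 8080:

    # app.yaml for a custom-runtime Flex service; the Dockerfile lives alongside it
    runtime: custom
    env: flex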
There are two ways that App Engine could support your deployment, but it does neither:
It could bundle a WordPress app image and a MySQL image into a single "pod" and expose WordPress' HTTP port on :8080. This isn't what you want, because then each WordPress instance would have its own MySQL instance.
It could separate the WordPress app into one service and the MySQL app into another service. This is closer to what you want, as you could then scale the WordPress instances independently of the MySQL instances. However, databases are the definitive stateful app, and you don't want to run them as App Engine services.
The second case suggests some alternative approaches for you to consider:
Deploy your WordPress app to App Engine but use Google Cloud SQL service link.
If you don't want to use Cloud SQL, you could run your MySQL database on Compute Engine link.
You may wish to consider Kubernetes Engine. This would permit both the approaches outlined above and there are tools that help you migrate from docker-compose files to Kubernetes configurations link.
Since you're familiar with App Engine, I recommend you consider using option #1 above (Cloud SQL).
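If you do go the Cloud SQL route, the instance can be attached to the Flex service in app.yaml roughly like this (project, region and instance names are made up), and WordPress then connects through the unix socket exposed under /cloudsql:

    runtime: custom
    env: flex
    beta_settings:
      # makes /cloudsql/my-project:us-central1:wordpress-db available inside the container
      cloud_sql_instances: my-project:us-central1:wordpress-db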
I am quite confused as I haven't seen any blogs or instructions on how to host ASP.NET Core/.NET Core applications with HA and multi-host deployments. All examples are either:
1) One NGINX reverse-proxy, one Kestrel
2) One IIS reverse-proxy, one Kestrel
And both components are on the same host. In real-life production environments, you have a load balancer, maybe service discovery, multiple frontends, multiple backends, etc. But for this case there are no instructions whatsoever. So my questions for multi-host environments would be:
Do I deploy one IIS/NGINX as the LB/reverse proxy and redirect requests to Kestrels running on many separate VMs, i.e. various different IPs?
Or do I run NGINX/F5 for load balancing on one host, then route HTTP traffic to various VMs that run IIS + Kestrel, or just Kestrel? Is IIS required in this setup, since NGINX acts as the LB?
If I run IIS or NGINX as a reverse proxy, can they keep Kestrel processes alive on different VMs, or does each Kestrel require exactly one IIS/NGINX to keep it alive, i.e. must the Kestrel process be on the same host as the reverse proxy?
All answers are very welcome, and thanks a lot in advance! :)
I'm running NGINX at the edge as a load balancer and for SSL termination, with multiple servers running IIS + Kestrel serving MVC. This is working well for us. You may not need it, but I've found NGINX to be quite a bit more sophisticated and powerful than anything you could do with IIS. Obviously F5 or something similar would work as well. Previously I also ran for a while using AWS ELB load balancers, which also worked fine, they just didn't have much configurability. So it depends on your needs.
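A minimal sketch of that edge configuration, assuming two backend VMs and made-up IPs, ports and certificate paths:

    # SSL termination at the edge, load balanced across Kestrel hosts
    upstream kestrel_backend {
        server 10.0.0.11:5000;
        server 10.0.0.12:5000;
    }

    server {
        listen 443 ssl;
        server_name app.example.com;

        ssl_certificate     /etc/nginx/certs/app.crt;
        ssl_certificate_key /etc/nginx/certs/app.key;

        location / {
            proxy_pass http://kestrel_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }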
As was mentioned already, IIS is needed on each box running Kestrel to manage the process. You could do this some other way, but using IIS is the easiest.
I have a setup with one VM running IIS as the LB plus several VMs running IIS + Kestrel. It's working fine for my usage, but I'm curious to see whether other people have different suggestions. Then it depends on what you are doing: if you use encryption, the machine key needs to be shared between VMs; you might also need to share session state between VMs (https://www.exceptionnotfound.net/finding-and-using-asp-net-session-in-core-1-0/), store things in a database, etc.
Note: because there is no Windows hosting that satisfies me at the moment, I'm developing my applications in PHP and hosting them on a Linux VPS.
Since Windows Server 2016 supports Docker and you are able to create .NET 4.5 images, I thought why not review my applications and hosting plans.
Because I'm not a fan of hosting websites directly on a VPS with IIS (setup and configuration seem clumsy), I thought this "infrastructure" would be ideal for me:
A Windows 2016 VPS
A Linux based VPS
For each ASP.NET application, create a Docker image based on microsoft/iis. This means that for the application there is nothing left to configure, right? These applications will run on the Windows 2016 server.
On the Linux VPS, I will have nginx configured with all the SSL certificate configuration and optimizations. nginx will have proxies that point to the Windows 2016 VPS on specific ports for the different applications.
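Roughly what I have in mind per application (IP, port, hostname and certificate paths are placeholders), with the Windows VPS publishing one container port per app:

    # one vhost per application on the Linux VPS, proxying to the Windows 2016 VPS
    server {
        listen 443 ssl;
        server_name app1.example.com;

        ssl_certificate     /etc/nginx/certs/app1.crt;
        ssl_certificate_key /etc/nginx/certs/app1.key;

        location / {
            proxy_pass http://203.0.113.10:8081;   # port published by the app1 container
            proxy_set_header Host $host;
        }
    }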
I think this architecture has scaling possibilities, less configuration on the Windows VPS, and more room for improvement. It should even be possible to do this with Ansible, if I'm not mistaken.
I only need hosting, nothing related to email, ftp, ... That's why I'm not using shared and/or cloud hosting.
Does this architecture seem fine?
Am I missing something?
Would you still just use a Windows VPS for hosting ASP.NET applications, even if this architecture is possible?
Does this all seem possible with Ansible? I only have basic experience with it.
I don't see anything wrong with your proposal. Remember you can use Ansible inside the Linux image's Dockerfile. You may find that it is overkill, but it should work.
You will probably find some problems linking your Linux and Windows containers, but I don't see any showstoppers.
Go ahead and post your results. Also, if you hit any walls, just ask here and we will try to help.
Regards
because there is no Windows hosting that satisfies me at the moment, I'm developing my applications in PHP and hosting them on a Linux VPS.
Would you mind telling us a bit about your requirements for Windows hosting?
For each ASP.NET application, create a Docker image based on microsoft/iis. This means that for the application there is nothing left to configure, right?
Once a fully functional, pre-configured image is prepared, you don't have to make any other changes to your main image. The main image is only modified when you want to update an application in the image, make other changes, or update the Windows OS.
Does this architecture seem fine?
An NGINX reverse proxy works with an IIS backend, so this proposed architecture is achievable. The initial setup of connecting the NGINX web server on the Linux VPS to the individual Windows Docker images is slightly complex. If you are successful doing that, the next challenge will be adding subsequent containers to Windows Hyper-V. Here, I don't see the actual purpose of using Docker images to host ASP.NET applications when you can easily deploy pre-installed VMs through the Windows hypervisor.
As far as Ansible is concerned, I don't know much about this product, but as seen on their website, Ansible can automate Docker.
We have 3 components that we want to map to our domain like this:
a static website: example.com
an asp.net app used for account settings: example.com/account
some .net webservices used in other apps: example.com/web
We want to use AWS's Elastic Beanstalk to host the entire structure, preferably in the same environment.
And if we were to have all the components in the same Visual Studio project, it would be quite easy. But the static website is prone to a lot of changes and we don't want those changes directly pushed to production to mess with the other 2 components.
The best scenario for us would be to have 3 separate VS projects, each of them publishing separately and independently to the Elastic Beanstalk environment:
the static website VS project should be able to publish to the root without interfering with the /account and /web folders
the ASP.NET user account VS project should be able to publish directly to /account without messing with the root files
the .NET web services VS project should be able to publish directly to /web without messing with the root files
My questions:
Is this possible? And if there is a way, how should we deal with versioning?
Should we use other AWS services that can help us reach our goal architecture?
Our fallback plan is to host the static website in S3 and use subdomains for the apps: account.example.com and web.example.com.
For your requirements I would suggest using S3 for the static website, and Elastic Beanstalk for the other two components.
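Serving the static site from S3 is only a few CLI calls; a sketch with a made-up bucket name and local folder (a production setup would still need a bucket policy or CloudFront in front):

    # create the bucket, enable static website hosting, and upload the site
    aws s3 mb s3://example-static-site
    aws s3 website s3://example-static-site --index-document index.html --error-document error.html
    aws s3 sync ./static-site s3://example-static-site --acl public-read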
One more decision you have to make here is whether to use a single Beanstalk environment or multiple. You can base this decision on the scalability requirements of each service running in Beanstalk.
Another important service you can use is AWS CloudFront, placed in front of all three components; it can be configured to do the URL routing for example.com/ and also to cache static content in edge locations. It will also help with setting up SSL certificates if needed (which are free with CloudFront). Also, if you use CloudFront you don't need subdomains, which works best for web apps since you avoid CORS preflight requests.
I'm a developer now building my startup. I really don't know much about IIS setup. I will host my startup on Amazon EC2, and I want to know how I can scale my application if my traffic increases. I have been reading about MSDeploy and the Web Farm Framework here: https://serverfault.com/questions/127409/iis-configuration-synchronization-for-web-server-farm . I want a simple architecture without too much configuration, so I have been looking for experiences with an IIS web farm and Amazon ELBs, and I did not find any.
So the question is:
Is it possible to make an IIS web farm with Amazon ELBs?
Any experience on EC2? IIS Web Deploy or WFF, with and/or without ELBs?
What do you recommend for an easy web farm setup?
You can do almost anything you want with IIS on EC2. They are full servers (well, Windows Server 2008 Datacenter edition) and you can open any ports you need to communicate between servers. Here is an explicit tutorial on how to set up WFF, for example, on EC2.
The question is, are you sure you need to build a web farm? If you simply want to have multiple servers running your code then you can accomplish this without anything more than IIS and the tools that EC2 provides.
You build your app so it uses shared resources (like a session state server and a central location for storing user-uploaded content), configure a server the way you like it, and capture a server image (AMI). You use this image when you configure Auto Scaling to launch new instances based on server metrics (like CPU usage), and they are automatically added to the load balancer when launched.
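The Auto Scaling side can be wired up from the captured AMI with a couple of CLI calls; a sketch with hypothetical names and IDs:

    # launch configuration from the captured AMI, then a group tied to the ELB
    aws autoscaling create-launch-configuration \
        --launch-configuration-name web-lc \
        --image-id ami-12345678 \
        --instance-type m3.medium

    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --launch-configuration-name web-lc \
        --min-size 2 --max-size 6 \
        --availability-zones us-east-1a us-east-1b \
        --load-balancer-names my-elb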
The last challenge is ensuring servers launched automatically are running your latest code. You can write a custom program to get the latest code from somewhere (like SVN) on server startup, or you can use something much simpler like Dropbox to handle the synchronization.
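For the code-sync step, one option is a user-data script baked into the launch configuration that pulls the latest build on boot; the repository URL and paths below are hypothetical, and it assumes an SVN client is already on the AMI:

    <powershell>
    # fetch the latest build on boot, then recycle IIS
    svn checkout https://svn.example.com/myapp/trunk C:\inetpub\wwwroot\myapp
    iisreset
    </powershell>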