Is it possible to use a custom load balancer for AWS blue/green CodeDeploy? - aws-code-deploy

We are using a custom load balancer to balance UDP traffic in our system. We were able to create an ASG (Auto Scaling group) for the instances behind the load balancer, but we are not able to use blue/green CodeDeploy to deploy the application. Is there a way we can customize CodeDeploy to use our custom load balancer to perform a blue/green deployment?

Unfortunately, CodeDeploy blue/green deployments do not support custom load balancers at this time; only the Classic Elastic Load Balancer is supported.
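For illustration, here is roughly what the supported configuration looks like when scripted with boto3. Every name and ARN below is a placeholder; the point is that loadBalancerInfo only accepts a Classic ELB by name, so there is no slot for a custom UDP balancer:

```python
# Sketch: a CodeDeploy blue/green deployment group only accepts a
# Classic ELB, referenced by name in elbInfoList. All names below
# (my-app, my-deployment-group, my-classic-elb, my-asg, the role ARN)
# are placeholders.
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment_group(
    applicationName="my-app",
    deploymentGroupName="my-deployment-group",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
    autoScalingGroups=["my-asg"],
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    # Classic ELB only -- a custom load balancer cannot be plugged in here.
    loadBalancerInfo={"elbInfoList": [{"name": "my-classic-elb"}]},
    blueGreenDeploymentConfiguration={
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 5,
        },
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
    },
)
```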

Related

Apollo Server on Ubuntu 18.04 EC2 instance with HTTPS

I'm trying to deploy my simple apollo-server on an Ubuntu 18.04 instance from Amazon Web Services (AWS) EC2. It works fine, but I need the traffic to go over HTTPS instead, and I was wondering what the best option would be. I'm running the code with forever ("forever start lib/index.js") and using yarn to start the project ("yarn start"). I'm able to access the server via its IP address and everything works fine. I would like to do this ASAP; I've already tried apollo-server-lambda and other Node.js hosting websites.
The easiest way to do this on AWS is with an EC2 load balancer. You just need to create an Application Load Balancer and add your instance to the target group. Once the load balancer is created, you can easily apply an SSL certificate to it. This approach doesn't require you to change your application code at all.
Please refer to these docs:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html
If you don't want to use a load balancer, you need to apply the SSL certificate at the application level. Hope this helps.
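For reference, the load balancer approach sketched above looks roughly like this with boto3; all IDs, names, and the ACM certificate ARN below are placeholders, and the certificate is assumed to already exist in ACM:

```python
# Sketch of the ALB + HTTPS setup described above, using boto3.
import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="apollo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group pointing at the port apollo-server listens on (4000 here).
tg = elbv2.create_target_group(
    Name="apollo-targets",
    Protocol="HTTP",
    Port=4000,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": "i-0123456789abcdef0"}])

# TLS terminates at the listener, so the app keeps speaking plain HTTP.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```

With this in place, you point your DNS record at the load balancer and the application itself keeps listening on plain HTTP.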

ServiceStack Docker architecture

I'm wondering if anyone with bigger brains has tackled this.
I have an application where each customer has a separate webapp in Azure. It is an ASP.NET MVC app with a separate virtual directory that houses ServiceStack. The MVC part isn't really used; the app is 99% powered by ServiceStack.
The architecture works fine, but as we get more customers, we have to manage more and more Azure webapps. Whilst we can live with this, the world of containers is upon us, and now that ServiceStack supports .NET Core, I have a utopian view of deploying hundreds of containers, where each request for any of my "tenants" can go to any container and be served as needed.
I think I have worked out most of how to refactor all elements, but there's one architectural bit that I can't quite work out.
It's a reasonably common requirement for a customer of ours to "try" a new feature or version before other customers, as they are helping develop the feature. In a world of lots of containers across multiple VMs, fronted by an nginx container (or something else?) on each VM, how can you control the routing of requests to specific versioned containers in a way that doesn't require the nginx container to be redeployed (or any downtime) when the routing needs to change? For example, can nginx route requests based on config in Redis?
Any advice/pointers much appreciated.
G
Whilst it isn't Azure-specific, we've published a step-by-step guide to publishing ServiceStack .NET Core Docker Apps to Amazon EC2 Container Service, which includes no-touch nginx virtual host management: running an instance of the jwilder/nginx-proxy Docker App automatically generates new nginx virtual hosts for newly deployed .NET Core Docker Apps.
jwilder/nginx-proxy isn't AWS-specific and should work with any Docker solution; its introductory blog post explains how it works.
Using nginx-proxy is a nice vendor-neutral solution for hosting multiple Docker instances behind the same nginx reverse proxy. For scaling your Docker instances, though, you'll want to use the orchestration features of your preferred cloud provider; e.g., in AWS you can scale the number of compute instances in your ECS cluster, or use Auto Scaling, where AWS automatically scales instances based on usage metrics.
Azure's solution for managing Docker instances is Azure Container Service, which lets you scale the instance count using the Azure acs command-line tool.
Our company is working on the same thing. We were working with Kubernetes and building our own reverse proxy with Node.js. This reverse proxy reads customer settings from a cache and redirects you to the right environment.
But depending on the architecture, I would advise just having two environments running, each with its own relative URL: one for production and one for the pilot/test environment. Whenever a customer goes to the pilot environment URL, they use the same database but an upgraded version of the WebApp.
Of course, this will not work if you are using an ORM and database migrations are involved (which is probably the case when you are using ServiceStack).
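To make the cache-driven routing idea concrete, here is a minimal sketch of such a tenant-aware reverse proxy in Python (the implementation described above was Node.js); the Redis key scheme and upstream URLs are invented for the example:

```python
# Minimal sketch: a reverse proxy that looks up its routing table in Redis
# on every request, so a tenant's route can change with no redeploy/downtime.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
DEFAULT_UPSTREAM = "http://app-stable:5000"  # placeholder upstream

class TenantProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Tenant taken from the Host header, e.g. acme.example.com -> "acme".
        tenant = (self.headers.get("Host") or "").split(".")[0]
        # e.g. `SET route:acme http://app-pilot:5000` flips one tenant
        # to the pilot version while everyone else stays on stable.
        upstream = r.get(f"route:{tenant}") or DEFAULT_UPSTREAM
        with urllib.request.urlopen(upstream + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type", resp.headers.get("Content-Type", "text/plain"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TenantProxy).serve_forever()
```

The same lookup-per-request idea is what you would implement inside nginx itself (e.g. with OpenResty/Lua querying Redis) if you want to stay on nginx rather than run a custom proxy.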

How to create a load balancer on the new Azure Portal in Resource Manager mode

There is no "Add" button when I browse load balancers. Does that mean we can only create a load balancer in Resource Manager mode with PowerShell or the CLI?
A load balancer in Azure Resource Manager requires a Virtual Network.
A simple way to create one, though not as nice as the full portal experience, does not require CLI/PowerShell/REST:
https://github.com/Azure/azure-quickstart-templates/tree/master/101-create-internal-loadbalancer
This template has a "Deploy to Azure" button on the page.
Besides PowerShell or the CLI, you can also create a Load Balancer in Resource Manager mode with the Azure Resource Manager REST API, which supports CRUD operations for Load Balancers.
Refer to this link: Load Balancer ARM REST APIs
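As a rough sketch of the call shape: a Load Balancer is created or updated with a single PUT against the ARM endpoint. The subscription ID, resource group, names, subnet ID, and api-version below are placeholders; check the current API reference before using this:

```python
# Rough sketch of the ARM REST call for creating a Load Balancer.
import requests

token = "<AAD-bearer-token>"  # e.g. from `az account get-access-token`
subscription = "00000000-0000-0000-0000-000000000000"
url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    "/resourceGroups/my-rg/providers/Microsoft.Network"
    "/loadBalancers/my-lb?api-version=2015-06-15"
)

body = {
    "location": "westus",
    "properties": {
        "frontendIPConfigurations": [{
            "name": "frontend",
            "properties": {
                # ID of an existing subnet in the required Virtual Network
                "subnet": {"id": "/subscriptions/.../subnets/my-subnet"},
                "privateIPAllocationMethod": "Dynamic",
            },
        }],
        "backendAddressPools": [{"name": "backend"}],
    },
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```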
Hope this helps!

Windows Server AppFabric Web Farm with NLB

We need to set up a highly available, load-balanced Windows Server. Is there a guide on how to set up a web farm with NLB configured? Our operations team tried to use the Web Farm Framework 2.2 to create the web farm and then configure Windows NLB on the machines, but we haven't managed to get it to work. Has anyone done this before? What's the best practice and the recommended way of doing this?
Cheers,
The MS-recommended way of doing this is to use two or more Web Farm Framework 'controller' servers running ARR and Windows NLB, with primary/secondary servers below that.
There are details on how to set this up here: http://learn.iis.net/page.aspx/511/achieving-high-availability-and-scalability---arr-and-nlb/
You can also use hardware-based load balancers; some have specific support, others will work but won't integrate nicely into the WFF console.
Details on doing this with an F5 Big-IP load balancer are here: http://blogs.iis.net/gursing/archive/2011/01/21/how-to-integrate-f5-with-web-farm-framework.aspx
You can also just use standard Microsoft NLB with WFF and without ARR, but there doesn't seem to be much documentation on how to do this. I've got it working on a 2-server group by:
installing Windows NLB on both servers and creating a standard cluster with a shared IP
installing WFF on one server
setting that server as primary but not ticking the 'ready for load balancing' tickbox (this tickbox really means "add this server to the ARR load balancing")
then adding the second server and, again, not ticking 'ready for load balancing'
You should then have the config sharing/updating benefits of WFF with the load balancing/redundancy of NLB using only 2 servers.

IIS, EC2, Web Farm, Web Deploy and ELB

I'm a developer now building my startup. I really don't know much about IIS setup. I will host my startup on Amazon EC2, and I want to know how I can scale my application if my traffic increases. I've been reading about MS Deploy and the Web Farm Framework here: https://serverfault.com/questions/127409/iis-configuration-synchronization-for-web-server-farm . I want a simple architecture without too much configuration, so I've been looking for experiences with an IIS web farm and Amazon ELBs, and I did not find any.
So the questions are:
Is it possible to make an IIS web farm with Amazon ELBs?
Does anyone have experience on EC2 with IIS Web Deploy or WFF, with or without ELBs?
What do you recommend for an easy web farm setup?
You can do almost anything you want with IIS on EC2. They are full servers (well, Windows Server 2008 Datacenter Edition), and you can open any ports you need to communicate between servers. Here is an explicit tutorial on how to set up WFF, for example, on EC2.
The question is, are you sure you need to build a web farm? If you simply want to have multiple servers running your code, then you can accomplish this with nothing more than IIS and the tools that EC2 provides.
You build your app so it uses shared resources (like a session state server and a central location for storing user-uploaded content), configure a server the way you like it, and capture a server image (AMI). You use this image when you configure Auto Scaling to launch new instances based on server metrics (like CPU usage), and they will be automatically added to the load balancer when launched.
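A rough boto3 sketch of that flow, with the AMI ID, subnets, ELB name, and thresholds as placeholders:

```python
# Sketch of the AMI + Auto Scaling setup described above, using boto3.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",   # the captured server image (AMI)
    InstanceType="t3.medium",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    LoadBalancerNames=["my-elb"],      # new instances register here automatically
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)

# Scale on CPU: AWS adds/removes instances to hold average CPU near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```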
The last challenge is ensuring that automatically launched servers are running your latest code. You can write a custom program to fetch the latest code from somewhere (like SVN) on server startup, or you can use something much simpler like Dropbox to handle the synchronization.
