Helm with AKS and ARM Templates

I would like to deploy my services in AKS with helm using Azure resource manager templates.
How can I integrate helm with ARM templates?

Helm is a tool that helps you build templates (“charts”) of your application.
Charts are to your application definition roughly what ARM templates are to your infrastructure.
As far as I know, there is currently no option to deploy applications with Helm directly from Azure Resource Manager templates.
I would recommend reading through these documents for ways to deploy applications to AKS with Helm:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-helm
https://microsoft.github.io/PartsUnlimitedMRPmicro/hols/deploy-acs-kubernetes-helm.html

You may want to consider a tool like HashiCorp's Terraform. It lets you define infrastructure, such as your Kubernetes cluster and the other resources you'll require, in much the same way you define them with ARM templates. It can even consume ARM templates directly, so you should be able to reuse most of what you've done without rewriting it all in Terraform. You'll also be able to deploy Helm charts and even do things like provision database users and so on.
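As a hedged sketch of what that could look like in Terraform (resource names, file names, and the chart path are placeholders for illustration, and exact provider syntax varies by version):

```hcl
# Reuse an existing ARM template from Terraform (names are placeholders).
resource "azurerm_resource_group_template_deployment" "example" {
  name                = "example-deploy"
  resource_group_name = "my-rg"
  deployment_mode     = "Incremental"
  template_content    = file("azuredeploy.json")
}

# Deploy a Helm chart into the cluster via the Helm provider.
resource "helm_release" "my_app" {
  name  = "my-app"
  chart = "./charts/my-app"
}
```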

Related

ConfigMap for ASP.NET Core 6 application hosted in Azure App Service or Function App Container hosting

AKS / Kubernetes allows configurable items to be part of ConfigMap files, which can be mounted as a volume into a container.
When that is combined with ASP.NET Core's Options pattern, we can effectively externalize environment-specific configuration into a separate config file, say appsettings.dev.json. A Kubernetes or Helm deployment can install the image as a container in AKS and apply the appsettings.dev.json file as a ConfigMap to bring in the environment-specific configuration.
That way, the release pipeline has precious little to do other than a helm upgrade --install.
App Service and Function App container hosting allow the same image to be hosted on the App Service / Function App, but the ability to apply appsettings.dev.json as a ConfigMap appears to be missing.
The alternative is to read everything from appsettings.dev.json during the release and apply it as configuration overrides in the App Service "Configuration" section.
Can someone advise whether there is a ConfigMap or similar feature available in App Service for applying environment config as a file during release, instead of a bunch of key-value settings?
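For reference, the AKS approach described in the question can be sketched roughly like this (names, keys, and the mount path are illustrative):

```yaml
# ConfigMap holding the environment-specific settings file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-settings
data:
  appsettings.dev.json: |
    { "Logging": { "LogLevel": { "Default": "Information" } } }
---
# In the Deployment's pod spec, mount the ConfigMap as a file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: myregistry/myapp:latest
          volumeMounts:
            - name: settings
              mountPath: /app/config   # appsettings.dev.json appears here
      volumes:
        - name: settings
          configMap:
            name: myapp-settings
```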

Multiple Configs under single instance of flyway

Can we maintain different configs under a single installation of Flyway? That is, can I install Flyway on a central server and perform deployments for multiple services?
It is possible to maintain different configurations, e.g. for dev/test/prod of the same application, or for different applications, under the same installation.
install flyway on central server and perform deployment for multiple services
This is the de facto approach for continuous delivery pipelines.
Achieve this by having application- or environment-specific configuration files and using the -configFiles option.
See the documentation: https://flywaydb.org/documentation/commandline/#config-files
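A sketch of that layout (file names, URLs, and values are assumptions for illustration):

```properties
# conf/service-a-dev.conf -- one file per service/environment
flyway.url=jdbc:postgresql://dev-db:5432/service_a
flyway.user=deployer
flyway.locations=filesystem:sql/service-a
```

Then select the configuration per deployment, e.g. `flyway -configFiles=conf/service-a-dev.conf migrate`.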

how to migrate an existing website hosted on wordpress to kubernetes on GKE?

I want to migrate an existing website hosted on WordPress to Kubernetes using GKE, or even GCE, but I do not know where to start. I haven't written any code yet. I tried to find solutions online but didn't find anything on migrating a website HOSTED on WordPress to Kubernetes.
How can I fetch the database?
What should the Dockerfile look like?
How many YAML files should be included?
How many pods do I create?
You can run everything in a single pod, but it depends on your website's traffic.
You can start with two pods initially: one for MySQL and another for the WordPress application itself.
You can create two YAML files for those, plus one Dockerfile, and apply them to the Kubernetes cluster.
Follow this simple guide to start your WordPress on Kubernetes:
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
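As a minimal orientation, the two-deployment layout suggested above looks roughly like this (heavily trimmed; the linked tutorial adds the Services, Secrets, and PersistentVolumeClaims you will need in practice):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  replicas: 1
  selector:
    matchLabels: { app: wordpress, tier: mysql }
  template:
    metadata:
      labels: { app: wordpress, tier: mysql }
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme   # use a Secret in practice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels: { app: wordpress, tier: frontend }
  template:
    metadata:
      labels: { app: wordpress, tier: frontend }
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql   # the Service name fronting MySQL
```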

How to setup docker-compose to work with app engine and wordpress?

I am using GitLab CI/CD to deploy my app to Google App Engine. I already have a PHP instance working properly, but when I try to build the WordPress image using docker-compose, nothing happens.
These are my files:
I have a folder "web" with a file ping.php: https://site-dot-standalone-applications.appspot.com/ping.php
So the application is running in the /web folder.
WordPress should be deployed into the /web folder after:
docker-compose up
UPDATE
I just needed to use the following gitlab-ci.yaml:
Unfortunately, you cannot (easily) deploy containers to App Engine Flex this way.
At its simplest, App Engine Flex is a service that combines a load-balancer, an auto-scaler and your docker image. Your image when run as a container is expected to provide an HTTP/S endpoint on port 8080.
There are two ways App Engine could support your deployment, but it does neither:
It bundles a WordPress app image and a MySQL image into a single "pod" and exposes WordPress' HTTP port on :8080. This isn't what you want because then each WordPress instance has its own MySQL instance.
It separates the WordPress app into one service and the MySQL app into another service. This is closer to what you want as you could then scale the WordPress instances independently of the MySQL instances. However, databases are the definitive stateful app and you don't want to run these as App Engine services.
The second case suggests some alternative approaches for you to consider:
Deploy your WordPress app to App Engine but use the Google Cloud SQL service.
If you don't want to use Cloud SQL, you could run your MySQL database on Compute Engine.
You may wish to consider Kubernetes Engine. This would permit both the approaches outlined above, and there are tools that help you migrate from docker-compose files to Kubernetes configurations.
Since you're familiar with App Engine, I recommend you consider option #1 above (Cloud SQL).
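A sketch of option #1's app.yaml for App Engine Flex with Cloud SQL (the connection name is a placeholder for your own project:region:instance value):

```yaml
# app.yaml -- custom runtime builds from your Dockerfile; the container
# must listen on port 8080, as noted above.
runtime: custom
env: flex

beta_settings:
  # Makes the Cloud SQL instance reachable from the app via a local socket.
  cloud_sql_instances: my-project:us-central1:my-sql-instance
```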

ServiceStack Docker architecture

I'm wondering if anyone with bigger brains has tackled this.
I have an application where each customer has a separate webapp in Azure. It is Asp.net MVC with a separate virtual directory that houses ServiceStack. The MVC isn't really used, the app is 99% powered by ServiceStack.
The architecture works fine, but as we get more customers, we have to manage more and more azure webapps. Whilst we can live with this, the world of Containers is upon us and now that ServiceStack supports .net core, I have a utopian view of deploying hundreds of containers, and each request for any of my "Tenants" can go to any Container and be served as needed.
I think I have worked out most of how to refactor all elements, but there's one architectural bit that I can't quite work out.
It's a reasonably common requirement for a customer of ours to "Try" a new feature or version before any other customers as they are helping develop the feature. In a world of lots of Containers on multiple VMs being served by a nginx container (or something else?) on each VM, how can you control the routing of requests to specific versioned containers in a way that doesn't require the nginx container to be redeployed (or any downtime) when the routing needs changing - e.g. can nginx route requests based on config in Redis?
Any advice/pointers much appreciated.
G
Whilst it isn't Azure-specific, we've published a step-by-step guide to publishing ServiceStack .NET Core Docker Apps to Amazon EC2 Container Service, which includes no-touch nginx virtual host management by running an instance of the jwilder/nginx-proxy Docker App to automatically generate new nginx virtual hosts for newly deployed .NET Core Docker Apps.
jwilder/nginx-proxy isn't AWS-specific and should work for any Docker solution; its introductory blog post explains how it works.
Using nginx-proxy is a nice vendor-neutral solution for hosting multiple Docker instances behind the same nginx reverse-proxy, but for Scaling your Docker instances you'll want to use the orchestration features in your preferred cloud provider, e.g. in AWS you can scale the number of compute instances you want in your ECS cluster or utilize Auto Scaling where AWS will automatically scale instances based on usage metrics.
Azure's solution for managing Docker instances is Azure Container Service, which lets you scale instance count using the Azure acs command-line tool.
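For context, the jwilder/nginx-proxy pattern described above can be sketched with docker-compose (hostnames, image names, and versions are placeholders), which also shows how a single tenant can be routed to a newer version without touching the proxy:

```yaml
version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # The proxy watches the Docker socket and regenerates its virtual
      # hosts whenever containers start or stop -- no redeploy needed.
      - /var/run/docker.sock:/tmp/docker.sock:ro

  tenant-a:
    image: myregistry/myapp:1.2.0
    environment:
      # nginx-proxy routes requests for this hostname to this container.
      - VIRTUAL_HOST=tenant-a.example.com

  tenant-a-pilot:
    image: myregistry/myapp:1.3.0-beta   # newer version for one tenant
    environment:
      - VIRTUAL_HOST=pilot.tenant-a.example.com
```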
Our company is working on the same thing. We were working with Kubernetes and building our own reverse proxy with Node.js. This reverse proxy would read customer settings from a cache and redirect you to the right environment.
But depending on the architecture, I would advise just having two environments running, each with its own relative URL: one for production and one for the pilot/test environment. Whenever a customer goes to the pilot environment URL, they will use the same database but an upgraded version of the web app.
Of course this will not work if you are working with an ORM and database migrations are included (which is probably the case when you are using ServiceStack).
