On the means of deploying the Apache Airflow platform

Could anybody please share experience/views on the means of deploying the Apache Airflow platform (in its simplest standalone mode using the SequentialExecutor) on a production server?

I don't think SequentialExecutor is the right mode for a production server. We have one running in LocalExecutor mode in production, and it works quite well. LocalExecutor can do everything that SequentialExecutor does and more.
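Switching is a small configuration change: point Airflow at a real metadata database (SQLite only supports SequentialExecutor) and set the executor. A minimal sketch using environment variables, with an assumed local Postgres connection string (on Airflow 2.3+ the second variable is AIRFLOW__DATABASE__SQL_ALCHEMY_CONN):

export AIRFLOW__CORE__EXECUTOR=LocalExecutor
export AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:airflow@localhost:5432/airflow
airflow db init

Then start the scheduler and webserver as usual (pre-2.0 versions use airflow initdb instead).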

Related

Deploying a .NET Core + Angular SPA with Docker and GitLab

I have a droplet in Digital Ocean with Ubuntu 20.04 and a .NET Core web application that connects to Mongo.
My deployment workflow is the following:
I work locally with Visual Studio and release my app to a folder.
Then I connect to my server through FTP and drag the content of my folder to /var/www/myapp
Secrets are managed by Azure (it took me a lot of time to set this up).
A service runs the app and restarts it if needed.
The web server is Nginx
Everything works fine, nothing new so far. However, I'd like to automate each deployment, and I found that GitLab can run a pipeline to help me achieve that. The problem is I don't understand how to set this up correctly, since I've seen there are more parts involved, such as Docker and Kubernetes, and I feel a bit overwhelmed.
Do I need to "dockerize" my application, database, etc.? If I want to add Angular as the client side, do I need to dockerize it as well, or does it go in the same container as the .NET Core app?
Do I need Kubernetes? If so, why?
What would be the most straightforward and recommended way of achieving a CI/CD for my app?
It took me a lot of effort to deploy to my Linux server, and I'm afraid of breaking something in production.
I would really appreciate any help.
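For what it's worth, you don't strictly need Docker or Kubernetes to get CI/CD: a minimal .gitlab-ci.yml can simply automate the publish-and-copy workflow described above over SSH. A sketch, in which the runner images, deploy user, host name, and service name are all assumptions:

stages:
  - build
  - deploy

build:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - dotnet publish -c Release -o publish
  artifacts:
    paths:
      - publish/

deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    # install SSH tooling and load the deploy key from a CI/CD variable
    - apk add --no-cache openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && ssh-keyscan my-droplet.example.com >> ~/.ssh/known_hosts
  script:
    # replaces the manual FTP drag-and-drop, then restarts the service
    - rsync -az --delete publish/ deploy@my-droplet.example.com:/var/www/myapp/
    - ssh deploy@my-droplet.example.com 'sudo systemctl restart myapp'
  only:
    - main

SSH_PRIVATE_KEY would be stored as a protected CI/CD variable in the GitLab project settings.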

SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:8080

I am a Windows developer trying to use the Camunda REST API client project.
I downloaded the sample from GitHub, which has examples for Camunda using WPF.
https://github.com/mtringuyen/camunda-dot-net-showcase
However there seems to be some sort of socket exception.
IIS is installed. I also verified the versions of .NET Standard and .NET Framework; they are compatible.
Any suggestions to resolve this error?
Thanks in advance
The project you are using only contains:
a .NET tasklist implementation used by people to participate in the business processes
a .NET implementation of automated task workers, which are used to do system integration from .NET (see: https://docs.camunda.org/manual/latest/user-guide/process-engine/external-tasks/)
The Camunda server remains a Java application. However, you can start it and use it as a black box from .NET without Java knowledge. There are several distributions. If you are not familiar with Java, you should either use the Docker image:
docker run -d --name camunda -p 8080:8080 camunda/camunda-bpm-platform:latest
or a prepackaged Tomcat, or the Camunda Run distribution.
You can download a server here: https://camunda.com/download/
Also see: https://docs.camunda.org/manual/latest/installation/
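Once the server is up, you can sanity-check the REST API the .NET client talks to; both the Docker image and the prepackaged distributions expose it on port 8080 under /engine-rest:

curl http://localhost:8080/engine-rest/engine

This should return a small JSON array listing the available process engines (e.g. "default").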
If you think an application is supposed to be listening on a particular port, a quick way to check is to run a command prompt as administrator and execute netstat -ab. This will show you all the ports that are listening on your local device. Your example requires something listening on port 8080, so fire up whatever application is supposed to be there and double-check it.
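For example, to check port 8080 specifically and identify the owning process:

netstat -ano | findstr :8080
tasklist /FI "PID eq 1234"

Replace 1234 in the second command with the PID shown in the last column of the netstat output.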

Deploy Airflow 2.0+ on Azure

I have started learning/trying Airflow. As part of my research, I have installed Airflow locally through Docker, following the official page: Airflow Install Docker
I am looking for a (standard) process when I can deploy Airflow to Azure.
Can I directly use the same docker-compose file for that?
Any help will be appreciated.
Likely the easiest way is to use AKS (Azure Kubernetes Service) and the official Helm chart from the Apache Airflow community to deploy Airflow on it:
https://airflow.apache.org/docs/helm-chart/stable/index.html
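Per the chart's documentation, the install boils down to a few commands (the release and namespace names here are just examples):

helm repo add apache-airflow https://airflow.apache.org
helm repo update
helm upgrade --install airflow apache-airflow/airflow --namespace airflow --create-namespace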
I reckon installing Airflow through docker-compose.yml on Azure wouldn't be the right approach; it is probably intended for local installations only.
The way we have set this up in Azure is like this:
Database: Azure Database for PostgreSQL server
Webserver + Scheduler: Azure App Service on a Linux App Service Plan
Docker: Azure Container Registry
Build the Docker image (either locally or in a CI/CD pipeline)
Push the Docker image to Azure Container Registry
Set up the Postgres server and create the airflow database
Set up Azure App Service to pull the Docker image from ACR (you can also set up continuous integration). Remember to configure environment variables. A rough CLI sketch of these steps follows below.
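A rough sketch of those steps with the Azure CLI, where every resource name is hypothetical:

az acr build --registry myregistry --image airflow:latest .
az webapp create --resource-group my-rg --plan my-linux-plan --name my-airflow-app --deployment-container-image-name myregistry.azurecr.io/airflow:latest
az webapp config appsettings set --resource-group my-rg --name my-airflow-app --settings AIRFLOW__CORE__EXECUTOR=LocalExecutor

az acr build builds and pushes the image in one step, and the app settings show up as environment variables inside the container.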
Obviously, as Jarek mentioned, you could also go with setting up AKS.
You can also try the preconfigured Kubernetes images of Airflow 2 packaged by Bitnami and perform the deploy in AKS:
bitnami.airflow

How do I run Meteor shell in Heroku?

I've deployed my Meteor app to Heroku using the https://github.com/jordansissel/heroku-buildpack-meteor buildpack. My Meteor is v1.0+.
How do I access a server console to my app? Normally on my local dev machine I would run $ meteor shell.
Similarly, how can I run meteor reset?
Thanks
If you used the oortcloud meteor buildpack, or a fork of it, the build uses a production-mode build of Meteor. meteor shell is a development tool and is not available in production mode.
This is a tradeoff. You could theoretically run a development-mode instance in production, but you would have terrible performance. Meteor in development struggles to cope with more than 10 users; in production mode the figure is much larger.
meteor reset, on the other hand, clears the development-mode database. To clear your database in production, log into it using the mongo shell and drop all the collections, or run db.dropDatabase(); (in mongo).
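On Heroku that looks something like the following, assuming your MongoDB add-on exposes its connection string in a config var (the exact variable name depends on the add-on):

heroku config:get MONGO_URL --app my-app
mongo "mongodb://user:password@host:port/dbname"
> db.dropDatabase()

Be careful: db.dropDatabase() irreversibly deletes all data in that database.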

capifony multistage deployment and database configuration per stage

I found an excellent tool called capifony, which is very useful when developing Symfony2 applications, and I want to work with its multistage option. All works fine, but I have one problem with the database configuration for the other stages.
Suppose we have:
3 environments: development, production, staging
3 servers: local - development, my.site.com - production and staging.my.site.com - staging
How to setup capifony for this standard multistage example?
When we call:
cap production deploy:migrations
or
cap staging deploy:migrations
capifony uses
--env=prod
for all of Symfony's console commands. This is a problem, because we call:
app/console doctrine:migrations:migrate --env=prod
for the staging server, but the database configuration used for it is "prod", so we run the schema update against the production settings... How do we solve this?
The solution to your original question is simple: use set :symfony_env_prod, "staging" in deploy.rb.
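With the capistrano multistage extension that capifony uses, the cleanest place for this is the per-stage files; a sketch, with hosts taken from your example and the conventional file layout:

# config/deploy/production.rb
server "my.site.com", :app, :web, :db, :primary => true
set :symfony_env_prod, "prod"

# config/deploy/staging.rb
server "staging.my.site.com", :app, :web, :db, :primary => true
set :symfony_env_prod, "staging"

capifony then passes --env=staging to the console commands when you run cap staging deploy:migrations.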
However, you seem to be mixing servers with environments.
Each server should be treated as a complete package and support any environment (i.e. dev server with production environment), including having separate databases.
Furthermore, the staging server setup should be as close to production as possible (that is the whole point of a staging server), so it should run with the production environment.
