Airflow can't decrypt connections from a 'follower' server - encryption

I have installed Airflow on multiple machines using the CeleryExecutor: the main machine runs the scheduler, the webserver, and one worker, and two follower machines each run a Celery worker.
Everything works fine, but the Celery workers on the follower machines can't decrypt connections that were created on the main machine.
I have used the same airflow.cfg on every machine, hence the same fernet_key, the same database, and everything else.
Still, the follower workers can't decrypt connections created on the main machine. Has anybody had this issue before? Any solutions?
Thanks for any help.
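One quick way to check whether the key really matches on every machine is to decrypt a stored connection password by hand on a follower. Below is a minimal sketch, assuming the cryptography package that Airflow itself depends on; the ciphertext is a hypothetical value copied from the password column of the connection table in the metadata database. Also worth checking: an environment variable like AIRFLOW__CORE__FERNET_KEY set on a follower would silently override the fernet_key in airflow.cfg.

    # Sketch: verify this machine's Fernet key can decrypt a connection
    # created on the main machine. Assumes the cryptography package (an
    # Airflow dependency). The ciphertext below is a hypothetical value
    # copied from the connection table in the metadata database.
    from cryptography.fernet import Fernet, InvalidToken

    fernet_key = "paste the fernet_key from this machine's airflow.cfg"
    encrypted_password = b"gAAAAAB..."  # hypothetical value from the connection table

    try:
        print(Fernet(fernet_key).decrypt(encrypted_password))
    except InvalidToken:
        print("Mismatch: this machine's fernet_key cannot decrypt the value")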

Related

Connectivity issue between two Windows systems on the same network

I am trying to set up a license server for CAE software.
On the server side everything is working fine: the logs are clean and the application successfully reaches the licensing server locally.
I pinned the listening ports in the configuration and opened them in the firewall, so I moved on to the next step, which was setting up the clients.
The first one worked fine and successfully connects to the licensing server, while the second returned an error.
I ran a Test-NetConnection to those ports with PowerShell from both machines, and as expected it worked from the first one but not from the second. Ping succeeds, but the TCP connection fails.
What could be the issue?
Thanks everyone for your time and support.
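If PowerShell output alone isn't enough, the same probe can be reproduced in a few lines of Python, which also distinguishes a silent drop from an active refusal: a timeout usually means a firewall is dropping packets, while "connection refused" means the port is reachable but nothing is listening. A minimal sketch; the host name and port are hypothetical placeholders for the license server.

    # Sketch: a bare-bones equivalent of Test-NetConnection -Port.
    # HOST and PORT are hypothetical placeholders for the license server.
    import socket

    HOST = "license-server.example.local"
    PORT = 27000

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print("TCP connection to %s:%d succeeded" % (HOST, PORT))
    except socket.timeout:
        print("Timed out: packets are likely being dropped (firewall?)")
    except ConnectionRefusedError:
        print("Refused: host reachable, but nothing is listening on that port")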

Dotnet EF Core builds the project and hangs

I'm trying to scaffold a single table from an existing database. After 'Build Succeeded' the process hangs without showing any further progress. I can see the process running, holding at ~12% CPU. I'm running Debian 10.
The server is running on a VirtualBox VM. I confirmed the database is accessible using DBeaver. I'm not sure how to troubleshoot this, because I'm not getting any kind of error message.
I updated login auditing on the SQL Server to record both failed and successful logins. When I connect from something like DBeaver, I see successful logins for the database account. I don't see any login events recorded when I run the dotnet ef scaffold command.
I started Wireshark, and I do see traffic between my computer and the VM on port 1433. There is a TDS7 pre-login message, a bunch of acknowledgements, and then keep-alive requests and acknowledgements. So I know network connectivity is not an issue. It's as if the dotnet ef command is just sitting and waiting for the database to respond.
I completed the same steps on a Mac laptop, and was able to scaffold the models without any issues.
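Since the capture stops right after the TDS pre-login, one hypothesis worth testing is that the session stalls during TLS negotiation (Debian 10's OpenSSL raises the minimum protocol version to TLS 1.2, which older, unpatched SQL Server builds don't speak). Below is a sketch of an independent probe from the same Debian machine, assuming SQL authentication, the pyodbc package, and Microsoft's ODBC driver; the server address, database, and credentials are hypothetical placeholders.

    # Sketch: probe SQL Server outside of dotnet ef to see whether the stall
    # is in the tooling or in the TDS/TLS handshake. Assumes pyodbc and
    # "ODBC Driver 17 for SQL Server"; all connection values are hypothetical.
    import pyodbc

    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=192.168.56.10,1433;"   # hypothetical VM address
        "DATABASE=MyDatabase;"         # hypothetical database name
        "UID=scaffold_user;PWD=...;"   # hypothetical SQL auth credentials
        "Encrypt=no;"                  # disable TLS to test the handshake theory
    )

    with pyodbc.connect(conn_str, timeout=5) as conn:
        print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])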

Cannot open new SSH connections after a certain amount of time

I have a web server running Alpine Linux and OpenSSH. When I power on the server, I can open SSH connections and send commands fine for about an hour or two. After that, even though the server is up, it stops responding to pings and I cannot SSH into it. The server is still running, and I can still access the website being served from it. Why does this happen, and how can I avoid it?

Run Airflow Webserver and Scheduler on different servers

I was wondering whether Airflow's scheduler and webserver daemons can be launched on different server instances?
And if that's possible, why not use a serverless architecture for the Flask webserver?
There are a lot of resources about multi-node clusters for workers, but I found nothing about splitting the scheduler and webserver.
Has anyone already done this? And what difficulties might I be facing?
I would say the minimum requirement would be that both instances should have:
Read(-write) access to the same AIRFLOW_HOME directory (for accessing DAG scripts and the shared config file)
Access to the same database backend (for accessing shared metadata)
Exactly the same Airflow version (to prevent any potential incompatibilities)
Then just try it out and report back (I am really curious ;) ).
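To verify the first two points in practice, a small parity script can be run on each instance and its output compared. A minimal sketch using Airflow's configuration API; which keys matter most (and their section names, which have moved between Airflow versions) is an assumption on my part.

    # Sketch: print a fingerprint of settings both instances must share.
    # Run on each machine and diff the output. The key list is an assumption,
    # and section names may differ across Airflow versions.
    import hashlib

    from airflow.configuration import conf

    for section, key in [
        ("core", "sql_alchemy_conn"),  # same metadata database backend
        ("core", "dags_folder"),       # same DAG scripts
        ("core", "fernet_key"),        # same encryption key
    ]:
        value = conf.get(section, key)
        # hash the value so secrets aren't printed in the clear
        print(section + "." + key + ": " + hashlib.sha256(value.encode()).hexdigest()[:12])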

Airflow and Docker Containers

I am running Airflow in containers on AWS ECS: 1 scheduler, 2 webservers, and multiple Celery workers.
From what I have seen, the only thing affected by running them in containers is that the webservers are unable to reach the workers on port 8793 to retrieve logs from them.
Is that the only thing affected when running these in containers?
Yes, in my experience logs are the only main issue; I use a similar setup, and since each container typically only exposes its service port on the host instance, the webservers can't reach the workers' log-serving port. There are different ways to solve this though: you can run other logging services in the container that push the logs to CloudWatch or Fluentd etc.
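For context, the webserver renders a task's log by making a plain HTTP request to the worker's log-serving process on port 8793; when that request can't get through, configuring remote logging is the usual fix. A rough sketch of the fetch, where the worker address and log path are hypothetical and the URL layout varies between Airflow versions.

    # Sketch: roughly what the webserver does to pull a task log from a
    # worker. Worker address and log path are hypothetical, and the URL
    # layout varies between Airflow versions. If this times out from inside
    # the webserver container, port 8793 is unreachable and remote logging
    # (e.g. to CloudWatch) is the workaround.
    import requests

    worker = "10.0.1.23"
    log_path = "my_dag/my_task/2019-01-01T00:00:00+00:00/1.log"

    resp = requests.get("http://%s:8793/log/%s" % (worker, log_path), timeout=5)
    resp.raise_for_status()
    print(resp.text)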
