How to enable only TLSv1.2 in Apache Airflow

How can I enable TLSv1.2 only in the Apache Airflow service? Our organization's security policies require using TLSv1.2 for connections supporting SSL/TLS traffic. I couldn't find anything in the documentation suggesting that it's possible to configure the underlying SSL protocol used by secure connections, or to disable the 3DES cipher suites (SWEET32 vulnerability). Could you please share documentation that would help?

Set an environment variable for Gunicorn (Airflow uses Gunicorn for its webserver):
For TLS 1.2:
GUNICORN_CMD_ARGS="--ssl-version=5"
If you want to change the ciphers too, you can add them to the same environment variable.
Example:
GUNICORN_CMD_ARGS="--ssl-version=5 --ciphers=TLSv1.2"
Docs: https://docs.gunicorn.org/en/stable/settings.html
We used --ciphers=EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:DHE+RSA+AES in one of our projects.
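Gunicorn passes --ssl-version through to Python's ssl module, where 5 is the value of ssl.PROTOCOL_TLSv1_2. A quick way to verify the result is to probe the webserver with openssl s_client; a minimal sketch, assuming the webserver listens on localhost:8443 (adjust host and port to your setup):
# TLS 1.2 handshake -- should succeed:
openssl s_client -connect localhost:8443 -tls1_2 </dev/null
# TLS 1.1 handshake -- should now be rejected:
openssl s_client -connect localhost:8443 -tls1_1 </dev/null
# explicit 3DES cipher -- should fail if SWEET32 is mitigated:
openssl s_client -connect localhost:8443 -cipher 'DES-CBC3-SHA' </dev/null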

Setting the GUNICORN_CMD_ARGS variable in the environment file of the systemd unit can also be handy.
Example Airflow environment file at /opt/airflow/airflow_env:
PATH=/opt/airflow/.local/bin:/usr/bin:/usr/local/bin:/bin
PYTHONPATH=/opt/rh/rh-python36
AIRFLOW_GPL_UNIDECODE=yes
GUNICORN_CMD_ARGS="--ssl-version=5 --ciphers=TLSv1.2"
Example systemd unit; here we are using the airflow-webserver unit created at /etc/systemd/system/multi-user.target.wants/airflow-webserver.service:
[Unit]
Description=Airflow webserver daemon
After=network.target mysqld.service redis.service
Wants=mysqld.service redis.service
[Service]
EnvironmentFile=/opt/airflow/airflow_env
User=airflow
Group=airflow
Type=simple
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /run/airflow/
ExecStartPre=/bin/chown -R airflow:airflow /run/airflow/
ExecStart=/opt/airflow/.local/bin/airflow webserver --pid /run/airflow/webserver.pid -l /var/log/airflow
Restart=on-failure
RestartSec=10s
PrivateTmp=true
[Install]
WantedBy=multi-user.target
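After changing the unit or its environment file, reload systemd and restart the webserver so the new Gunicorn arguments take effect; a minimal check, using the PID file path from the unit above:
sudo systemctl daemon-reload
sudo systemctl restart airflow-webserver
# the variable should now appear in the webserver's environment:
sudo cat /proc/$(cat /run/airflow/webserver.pid)/environ | tr '\0' '\n' | grep GUNICORN_CMD_ARGS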

Related

Configure Docker to use a proxy server

I have installed Docker on Windows, and when I try to run hello-world to test it, I get the following error:
Unable to find image
My computer uses a proxy server for communication, and I need to configure that proxy in Docker. I know the proxy server's address and port; where do I update this setting? I tried following https://docs.docker.com/network/proxy/#set-the-environment-variables-manually, but it is not working.
Try setting the proxy. Right-click the Docker icon in the system tray, go to Settings > Proxies, and add the setting below:
"HTTPS_PROXY=http://<username>:<password>@<host>:<port>"
If you are looking to set a proxy on Linux, see here
Alexandre Mélard's answer to the question Cannot download Docker images behind a proxy works; here is the simplified version:
Find the systemd or init.d script path of the docker service by running service docker status or systemctl status docker. For example, on Ubuntu 16.04 it's at /lib/systemd/system/docker.service.
Edit the script (for example, sudo vim /lib/systemd/system/docker.service), adding the following in the [Service] section:
Environment="HTTP_PROXY=http://<proxy_host>:<port>"
Environment="HTTPS_PROXY=http://<proxy_host>:<port>"
Environment="NO_PROXY=<no_proxy_host_or_ip>,<e.g.:172.10.10.10>"
Reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker or sudo service docker restart
Verify: docker info | grep -i proxy should show something like:
HTTP Proxy: http://10.10.10.10:3128
HTTPS Proxy: http://10.10.10.10:3128
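Note that editing the shipped unit file directly can be undone by package upgrades; Docker's documentation recommends a systemd drop-in instead. A sketch with placeholder proxy addresses:
sudo mkdir -p /etc/systemd/system/docker.service.d
# contents of /etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://<proxy_host>:<port>"
Environment="HTTPS_PROXY=http://<proxy_host>:<port>"
Environment="NO_PROXY=localhost,127.0.0.1"
# then reload and restart as above:
sudo systemctl daemon-reload && sudo systemctl restart docker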
This adds the proxy for docker pull, which is the problem in the question. If a proxy is needed for running or building containers, either configure ~/.docker/config.json as the official docs explain, or change the Dockerfile so the proxy is set inside the container.
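For that client-side case, the proxies section of ~/.docker/config.json looks like the following (the proxy address is a placeholder):
{
  "proxies": {
    "default": {
      "httpProxy": "http://<proxy_host>:<port>",
      "httpsProxy": "http://<proxy_host>:<port>",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}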
I had the same problem on a Windows server and solved it by setting the HTTP_PROXY environment variable from PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
And then restarting docker:
Restart-Service docker
More information is in Microsoft's official proxy-configuration guide.
Note: with version 19.03.5, the error returned when pulling an image was connection refused.

stunnel - traffic encryption between two Ubuntu machines

I have a problem getting stunnel to work on Ubuntu 18.04. There are tons of websites that tell you how to configure it, but nothing works for me; I guess I am doing something wrong.
Here are the steps I did:
OS: Ubuntu 18.04 (virtual machine, clean install)
sudo apt update
sudo apt upgrade
sudo apt-get install stunnel4
Then enable auto startup by:
sudo nano /etc/default/stunnel4
Switch ENABLED=0 to ENABLED=1.
The next step is to create a certificate file:
sudo openssl req -new -out config.pem -keyout config.pem -nodes -x509 -days 365
The certificate file lives in /etc/stunnel/.
Then create a configuration file; here is a copy of the one I created:
All set, restarting the service is the last step.
sudo /etc/init.d/stunnel4 restart
and here I got the following error :
[....] Restarting stunnel4 (via systemctl): stunnel4.serviceJob for stunnel4.service failed because the control process exited with error code.
See "systemctl status stunnel4.service" and "journalctl -xe" for details.
failed!
(I am looking to encrypt the traffic between two Ubuntu machines)
Thank you in advance.
Install stunnel on both machines, i.e. server and client:
sudo apt-get install stunnel
Once apt-get has finished, we need to enable stunnel by editing the /etc/default/stunnel4 configuration file on both client and server.
Find:
# Change to one to enable stunnel automatic startup
ENABLED=0
Replace with:
# Change to one to enable stunnel automatic startup
ENABLED=1
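The same edit as a one-liner, assuming the stock layout of /etc/default/stunnel4:
sudo sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/stunnel4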
2. Install tinyproxy on the server. (This is just a proxy server used for demonstration; in my case I used a custom one.)
sudo apt-get install tinyproxy
Configuring tinyproxy
By default TinyProxy starts up listening on all interfaces for a connection to port 8888. Since we don’t want to leave our proxy open to the world, let’s change this by configuring TinyProxy to listen to the localhost interface only. We can do this by modifying the Listen parameter within the /etc/tinyproxy.conf file.
Find:
#Listen 192.168.0.1
Replace With:
Listen 127.0.0.1
Once complete, we will need to restart the TinyProxy service in order for our change to take effect. We can do this using the systemctl command.
server: $ sudo systemctl restart tinyproxy
After systemctl completes, we can validate that our change is in place by checking whether port 8888 is bound correctly using the netstat command.
server: $ netstat -na
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN
Create a certificate using openssl on the server.
Easier way:
(a). openssl genrsa -out key.pem 2048
(b). openssl req -new -x509 -key key.pem -out cert.pem -days 1095
(c). cat key.pem cert.pem >> /etc/stunnel/stunnel.pem
You can opt to do (c) manually
Also remember to transfer the certificate to the client machine, so that both client and server have /etc/stunnel/stunnel.pem.
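A sketch of that transfer over SSH; the user and client hostname are placeholders:
# copy the combined key+cert to the client, then move it into place there:
scp /etc/stunnel/stunnel.pem <user>@<client_host>:/tmp/
ssh <user>@<client_host> 'sudo mv /tmp/stunnel.pem /etc/stunnel/'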
Stunnel server settings (e.g. in /etc/stunnel/stunnel.conf):
cert = stunnel.pem
[tinyproxy]
accept = 0.0.0.0:3112
connect = 127.0.0.1:8888
Stunnel client settings (e.g. in /etc/stunnel/stunnel.conf on the client):
cert = stunnel.pem
client = yes
[tinyproxy]
accept = 127.0.0.1:3112
connect = 10.0.2.15:3112
Assuming you're using VirtualBox to host your Ubuntu server, you also have to configure the following:
In Settings >> Network, change the adapter to NAT.
Then in Settings >> Network >> Advanced >> Port Forwarding, add a forwarding rule:
Name     Protocol   Host IP    Host Port   Guest IP   Guest Port
stunnel  TCP        0.0.0.0    3112                   3112
Once you're done, restart the services.
In client
sudo systemctl restart stunnel4.service
In server
sudo systemctl restart stunnel4.service
sudo systemctl restart tinyproxy
To test whether it worked, in a terminal:
export http_proxy="http://localhost:3112"
export https_proxy="https://localhost:3112"
then:
curl --proxy-insecure -v https://www.google.com
Credit:
https://bencane.com/2017/04/15/using-stunnel-and-tinyproxy-to-hide-http-traffic/
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-ssl-tunnel-using-stunnel-on-ubuntu

Docker: Unable to run Docker commands

I have installed Docker Engine v1.12.3 on Ubuntu 14.04 LTS, and since making the following changes to enable the Remote API, I'm not able to pull or run any Docker images:
Added DOCKER_OPTS="-H tcp://127.0.0.1:2375" in /etc/default/docker.
/etc/init.d/docker start.
The following error is received:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Note: I have added the logged-in user to the docker group.
If you configure the docker daemon to listen to a TCP socket (as you do), you should use the -H command line option with the docker command to point it to that socket instead of the default Unix socket.
@mustaccio is correct. The docker command defaults to using a Unix socket, normally at /var/run/docker.sock. You can either set up your options to listen on both sockets:
DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"
and restart, or always use docker -H tcp://127.0.0.1:2375 whenever you interact with the host from the command line.
The only good scenario I've seen for removing the socket is pure user security. If your Docker host is TLS enabled, you can ensure only authorized people are accessing the host by signed certificates, not just people with access to the system.
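Instead of passing -H on every invocation, the standard DOCKER_HOST environment variable does the same thing; a minimal sketch:
# point the docker CLI at the TCP socket for this shell session:
export DOCKER_HOST=tcp://127.0.0.1:2375
docker ps   # now talks to the daemon over TCP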

Docker Nginx disable default exposed port 80

Is there a way to disable the default EXPOSE 80 443 instruction in the nginx Dockerfile without creating my own image?
I'm using Docker Nginx image and trying to expose only port 443 in the following way:
docker run -itd --name=nginx-test --publish=443:443 nginx
But I can see using docker ps -a that the container exposes port 80 as well:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ddc0bca08acc nginx "nginx -g 'daemon off" 17 seconds ago Up 16 seconds 80/tcp, 0.0.0.0:443->443/tcp nginx-test
How can I disable it?
The EXPOSE instruction is in the Dockerfile the image is built from.
You need to create your own customized image for that.
To get the job done:
First, locate the Dockerfile for the official nginx image (library).
Then edit the Dockerfile's EXPOSE instruction to 443 only.
Now build your own modified image from the customized Dockerfile.
To answer your edited question:
Docker uses iptables. While you could manually update the firewall rules to make the service unavailable at a certain port, you would not be able to unbind the Docker proxy, so port 80 would still be consumed on the Docker host by the docker proxy.
According to the nginx Docker image configuration, you can set this before the container starts by passing an environment variable, like:
docker run -itd -e NGINX_PORT=443 --name=nginx-test nginx
See:
using environment variables in nginx configuration
Then in your nginx config you can set:
listen ${NGINX_PORT};
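For reference, recent versions of the official nginx image (1.19+) run envsubst over any files in /etc/nginx/templates/*.template at startup and write the results to /etc/nginx/conf.d/. A minimal sketch; the template directory on the host is an assumption:
# ./templates/default.conf.template
server {
    listen ${NGINX_PORT};
    location / {
        root /usr/share/nginx/html;
    }
}
# run it, substituting NGINX_PORT at startup:
docker run -d -e NGINX_PORT=443 -p 443:443 \
    -v $(pwd)/templates:/etc/nginx/templates:ro nginx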
There is a workaround to free the port (but not to unexpose it). I tried avoiding publishing the port, but that didn't work; I got errors about the port being already in use anyway. Then I found that the trick is to publish the exposed port, but mapped to a different one.
Let me explain with an example.
This will still try to use port 80:
docker run -p 443:443 ...
But this will use 443, plus some other free port you pick for 80:
docker run -p 443:443 -p <some free port>:80 ...
You can do this in your commands, docker-compose files, or ansible playbooks to be able to start more than one instance on the same machine (i.e. nginx, which exposes port 80 by default).
I do this from docker-compose and ansible too.
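A concrete run command for the nginx case; the 8080 host port is just an arbitrary free port:
docker run -itd --name=nginx-test -p 443:443 -p 8080:80 nginx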

Assigning vhosts to Docker ports

I have a wildcard DNS set up so that all web requests to a custom domain (*.foo) map to the IP address of the Docker host. If I have multiple containers running Apache (or Nginx) instances, each container maps the Apache port (80) to some external inbound port.
What I would like to do is make a request to container-1.foo, which is already mapped to the correct IP address (of the Docker host) via my custom DNS server, but proxy the default port 80 request to the correct Docker external port such that the correct Apache instance from the specified container is able to respond based on the custom domain. Likewise, container-2.foo would proxy to a second container's apache, and so on.
Is there a pre-built solution for this? Is my best bet to run an Nginx proxy on the Docker host, or should I write a node.js proxy with the potential to manage Docker containers (start/stop/rebuild via the web), or...? What options do I have that would make using the Docker containers more like a natural event and not something with extraneous ports and container juggling?
This answer might be a bit late, but what you need is an automatic reverse proxy. I have used two solutions for that:
jwilder/nginx-proxy
Traefik
With time, my preference is to use Traefik. Mostly because it is well documented and maintained, and comes with more features (load balancing with different strategies and priorities, healthchecks, circuit breakers, automatic SSL certificates with ACME/Let's Encrypt, ...).
Using jwilder/nginx-proxy
When running Jason Wilder's nginx-proxy Docker image, you get an nginx server set up as a reverse proxy for your other containers, with no config to maintain.
Just run your other containers with the VIRTUAL_HOST environment variable and nginx-proxy will discover their ip:port and update the nginx config for you.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
# start a first container for http://tutum.test.local
docker run -d -e "VIRTUAL_HOST=tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -e "VIRTUAL_HOST=deis.test.local" deis/helloworld
Using Traefik
When running a Traefik container, you get a reverse proxy server that reconfigures its forwarding rules based on Docker labels found on your containers.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run --rm -it -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock traefik:1.7 --docker
# start a first container for http://tutum.test.local
docker run -d -l "traefik.frontend.rule=Host:tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -l "traefik.frontend.rule=Host:deis.test.local" deis/helloworld
Here are two possible answers: (1) setup ports directly with Docker and use Nginx/Apache to proxy the vhosts, or (2) use Dokku to manage ports and vhosts for you (which is how I learned to do Method 1).
Method 1a (directly assign ports with docker)
Step 1: Setup nginx.conf or Apache on the host, with the desired port number assignments. This web server, running on the host, will do the vhost proxying. There's nothing special about this with regard to Docker - it is normal vhost hosting. The special part comes next, in Step 2, to make Docker use the correct host port number.
Step 2: Force port number assignments in Docker with "-p" to set Docker's port mappings, and "-e" to set custom environment variables within Docker, as follows:
port=12345 # <-- the vhost port setting used in nginx/apache
IMAGE=myapps/container-1
id=$(docker run -d -p $port:$port -e PORT=$port $IMAGE)
# -p $port:$port establishes a mapping of 12345->12345 from outside docker to
# inside of docker.
# Then, the application must observe the PORT environment variable
# to launch itself on that port; This is set by -e PORT=$port.
# Additional goodies:
echo $id # <-- the running id of your container
echo $id > /app/files/CONTAINER # <-- remember Docker id for this instance
docker ps # <-- check that the app is running
docker logs $id # <-- look at the output of the running instance
docker kill $id # <-- to kill the app
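For Step 1, the host-side vhost that proxies container-1.foo to the port chosen above might look like this; a sketch, assuming nginx on the host and the example port 12345:
server {
    listen 80;
    server_name container-1.foo;
    location / {
        proxy_pass http://127.0.0.1:12345;   # the $port published above
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}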
Method 1b: Hard-coded application port
...if your application uses a hardcoded port, for example port 5000 (i.e. it cannot be configured via the PORT environment variable as in Method 1a), then it can be hardcoded through Docker like this:
publicPort=12345
id=$(docker run -d -p $publicPort:5000 $IMAGE)
# -p $publicPort:5000 will map port 12345 outside of Docker to port 5000 inside
# of Docker. Therefore, nginx/apache must be configured to vhost proxy to 12345,
# and the application within Docker must be listening on 5000.
Method 2 (let Dokku figure out the ports)
At the moment, a pretty good option for managing Docker vhosts is Dokku. An upcoming option may be to use Flynn, but as of right now Flynn is just getting started and not quite ready. Therefore we go with Dokku for now: After following the Dokku install instructions, for a single domain, enable vhosts by creating the "VHOST" file:
echo yourdomain.com > /home/git/VHOST
# in your case: echo foo > /home/git/VHOST
Now, when an app is pushed via SSH to Dokku (see Dokku docs for how to do this), Dokku will look at the VHOST file and for the particular app pushed (let's say you pushed "container-1"), it will generate the following file:
/home/git/container-1/nginx.conf
And it will have the following contents:
upstream container-1 { server 127.0.0.1:49162; }
server {
    listen 80;
    server_name container-1.yourdomain.com;
    location / {
        proxy_pass http://container-1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
When the server is rebooted, Dokku will ensure that Docker starts the application with the port mapped to its initially deployed port (49162 here), rather than being randomly assigned another port. To achieve this deterministic assignment, Dokku saves the initially assigned port into /home/git/container-1/PORT; on the next launch it sets the PORT environment variable to this value, and also maps Docker's port assignment to this port on both the host side and the app side. This is opposed to the first launch, when Dokku sets PORT=5000 and then figures out whatever random port Docker maps on the VPS side to 5000 on the app side. It's roundabout (and might even change in the future), but it works!
The way VHOST works, under the hood, is: upon doing a git push of the app via SSH, Dokku will execute hooks that live in /var/lib/dokku/plugins/nginx-vhosts. These hooks are also located in the Dokku source code here and are responsible for writing the nginx.conf files with the correct vhost settings. If you don't have this directory under /var/lib/dokku, then try running dokku plugins-install.
With Docker, you want the internal ports to remain normal (e.g. 80) and figure out how to wire up the randomly assigned external ports.
One way to handle them is with a reverse proxy like Hipache. Point your DNS at it, and then you can reconfigure the proxy as your containers come up and down. Take a look at http://txt.fliglio.com/2013/09/protyping-web-stuff-with-docker/ to see how this could work.
If you're looking for something more robust, you may want to take a look at "service discovery." (a look at service discovery with docker: http://txt.fliglio.com/2013/12/service-discovery-with-docker-docker-links-and-beyond/)
