Can't connect to airflow webserver - airflow

I can't connect to the webserver after initializing Airflow. These are the steps I followed:
pip3 install apache-airflow
mkdir ~/airflow
export AIRFLOW_HOME=~/airflow
airflow initdb
airflow webserver -p 8080
Can anyone tell me why it shows the error below?

The error Connection in use: ('0.0.0.0', 8080) means that port 8080 is already used by another service, so you cannot run your Airflow webserver on it; try another port:
airflow webserver -p 8088
Then, in your browser, try the new URL:
http://localhost:8088/
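If you would rather keep port 8080, you can first find out which process is holding it (this assumes lsof is available on your machine):
lsof -i :8080
Stop or reconfigure that process, then start the webserver on 8080 again.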

Related

How to run Airflow Web Console on different port?

Today I was trying to run the Airflow web console on a port other than 8080, such as 80 or 8090. Every time I set a different port in airflow.cfg, re-initialized Airflow, and ran airflow webserver -D,
the web console was still running on port 8080. Can anyone help, or has anyone encountered this issue?
You need to change the port in airflow.cfg. After you save the file, run airflow db init and start the webserver again with airflow webserver -D.
If you are using the Docker image, that will be different: you need to change your docker-compose.yaml file, as sketched below.
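For reference, the port is controlled by web_server_port in the [webserver] section of airflow.cfg:
[webserver]
web_server_port = 8090
For the Docker image, a rough sketch of the docker-compose.yaml change (the service name and surrounding layout are assumptions; your file may differ) is to remap the published host port:
services:
  airflow-webserver:
    ports:
      - "8090:8080"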

MPI in docker swarm not working when a port is exposed

I am running my code using MPI on a cluster. My code runs as a task in a Docker service running in swarm mode.
Steps I follow to run my code:
Create an overlay network
Run Docker in swarm mode
Start a Docker service (replicas = 4) using the command below:
docker service create --name mpiser --network mpinet --replicas 4 mpitest:latest
My test code is a simple Python script:
from mpi4py import MPI
import subprocess
import time

comm = MPI.COMM_WORLD       # communicator spanning all ranks
sizeComm = comm.Get_size()  # total number of MPI processes
rank = comm.Get_rank()      # this process's rank
while True:
    # print the rank and the container hostname every two seconds
    print("Rank:", rank, "Hostname:", subprocess.check_output(['hostname']))
    time.sleep(2)
I find the IP addresses of the tasks launched as part of the service
Exec into one of the containers
Create a "hosts" file with the IP addresses I found
Launch the test code using the command below:
mpirun --allow-run-as-root -n 33 --hostfile hosts --mca btl_tcp_if_exclude eth1,lo python3 /home/test.py
This works fine and I can see the prints from all the containers within the swarm.
However, if I expose one of the ports while creating the service with the command below,
docker service create --name mpiser -p 3000:3000 --network mpinet --replicas 4 mpitest:latest
the mpirun command fails with the error below:
------------------------------------------------------------
A process or daemon was unable to complete a TCP connection
to another process:
Local host: 8d3c60280396
Remote host: cc2da25814cc
This is usually caused by a firewall on the remote host. Please
check that any firewall (e.g., iptables) has been disabled and
try again.
------------------------------------------------------------
I tried using --mca btl_tcp_if_include to include only the interface that carries the IP address I added to the hosts file.
I tried using --mca btl_tcp_if_exclude to exclude the other interfaces that do not have the IP address I added to the hosts file.
Neither of these helped.
Any suggestions on why exposing the port causes a communication issue between the containers would be helpful.
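For what it's worth, a sketch of how the interface can be pinned explicitly (the container id and eth0 below are placeholders, and this assumes the ip tool exists in the image): list each task's interfaces, note which one carries the overlay-network address used in the hosts file, and pass only that interface to Open MPI:
docker exec <container_id> ip -o -4 addr show
mpirun --allow-run-as-root -n 33 --hostfile hosts --mca btl_tcp_if_include eth0 python3 /home/test.py
One possible explanation is that publishing a port also attaches the tasks to swarm's ingress network, adding an extra interface that Open MPI may try to use for its TCP connections.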

Configure Docker to use a proxy server

I have installed Docker on Windows. When I try to run hello-world to test Docker, I get the following error:
Unable to find image
My computer uses a proxy server for communication, and I need to configure that proxy in Docker. I know the proxy server address and port. Where do I need to update this setting? I tried https://docs.docker.com/network/proxy/#set-the-environment-variables-manually,
but it is not working.
Try setting the proxy. Right-click on the Docker icon in the system tray, go to Settings > Proxies, and add the setting below:
"HTTPS_PROXY=http://<username>:<password>@<host>:<port>"
If you are looking to set a proxy on Linux, see here
The answer by Alexandre Mélard to the question Cannot download Docker images behind a proxy works; here is the simplified version:
Find out the systemd or init.d script path of the docker service by running service docker status or systemctl status docker; for example, in Ubuntu 16.04 it is at /lib/systemd/system/docker.service.
Edit the script, for example with sudo vim /lib/systemd/system/docker.service, adding the following in the [Service] section:
Environment="HTTP_PROXY=http://<proxy_host>:<port>"
Environment="HTTPS_PROXY=http://<proxy_host>:<port>"
Environment="NO_PROXY=<no_proxy_host_or_ip>,<e.g.:172.10.10.10>"
Reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker or sudo service docker restart
Verify: docker info | grep -i proxy should show something like:
HTTP Proxy: http://10.10.10.10:3128
HTTPS Proxy: http://10.10.10.10:3128
This adds the proxy for docker pull, which is what the question is about. If a proxy is needed for running or building containers, either configure ~/.docker/config.json as the official docs explain (a sketch is below), or change the Dockerfile so the proxy is set inside the container.
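A minimal sketch of that client-side ~/.docker/config.json (hosts and ports are placeholders):
{
  "proxies": {
    "default": {
      "httpProxy": "http://<proxy_host>:<port>",
      "httpsProxy": "http://<proxy_host>:<port>",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
With this in place, docker build and docker run pass the corresponding proxy environment variables into the containers they create.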
I had the same problem on a Windows server and solved it by setting the environment variable HTTP_PROXY in PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
And then restarting docker:
Restart-Service docker
More information is in Microsoft's official proxy-configuration guide.
Note: with version 19.03.5, the error returned when pulling an image was connection refused.

Docker: Unable to run Docker commands

I have installed Docker Engine v1.12.3 on Ubuntu 14.04 LTS, and since making the following changes to enable the Remote API, I'm not able to pull or run any Docker images:
Added DOCKER_OPTS="-H tcp://127.0.0.1:2375" in /etc/default/docker.
/etc/init.d/docker start.
The following is the error received:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Note: I have added the logged-in user to the docker group.
If you configure the docker daemon to listen to a TCP socket (as you do), you should use the -H command line option with the docker command to point it to that socket instead of the default Unix socket.
@mustaccio is correct. The docker command defaults to using a Unix socket, normally at /var/run/docker.sock. You can either set up your options to listen on both:
DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock" and restart, or always use docker -H tcp://127.0.0.1:2375 whenever you interact with the host from the command line.
The only good scenario I've seen for removing the socket is pure user security: if your Docker host is TLS-enabled, you can ensure only authorized people access the host via signed certificates, not just anyone with access to the system.
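If you go the second route, a small convenience (my own suggestion, not part of the original answer) is to set DOCKER_HOST once per shell instead of repeating -H:
export DOCKER_HOST=tcp://127.0.0.1:2375
docker ps
The docker CLI reads DOCKER_HOST and uses it as the daemon address for every command in that session.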

docker nginx container not receiving request from outside, connection refused

I have a running nginx container: # docker run --name mynginx1 -P -d nginx;
And got its port info from docker ps: 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp
Then I could get a response from within the container (id: c30991a04b2f):
docker exec -i -t c3099 bash
curl http://localhost => which returns the default index.html page content; it works.
However, when I run curl http://localhost:32769 outside of the container, I get this:
curl: (7) failed to connect to localhost port 32769: Connection refused
I am running on a Mac with Docker version 1.9.0 and the latest nginx image.
Does anyone know what causes this? Any help? Thank you.
If you are on OS X, you are probably using a VirtualBox VM for your Docker environment.
Make sure you have forwarded your port 32769 to your actual host (the Mac), in order for that port to be visible from localhost.
This is valid for the old boot2docker, or the new docker machine.
VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port32769,udp,,32769,,32769"
(controlvm if the VM is running, modifyvm if the VM is stopped)
(replace "boot2docker-vm" by the name of your VM: see docker-machine ls)
I would recommend not using -P, but a static port mapping -p xxx:80 -p yyy:443.
That way, you can do that port forwarding once, using fixed values.
Of course, you can access the VM directly through docker-machine ip vmname
curl http://$(docker-machine ip vmname):32769
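For example, with a static mapping like this (8080 and 8443 are arbitrary host-side ports picked for illustration), the port numbers stay fixed, so the VirtualBox forwarding above only has to be set up once, and you can also hit the VM directly:
docker run --name mynginx1 -p 8080:80 -p 8443:443 -d nginx
curl http://$(docker-machine ip vmname):8080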
Solved. I misunderstood how Docker port mapping works.
Since I'm using a Mac, the host for the nginx container is a VM; 0.0.0.0:32769->80/tcp maps port 80 of the container to port 32769 of the VM.
solution:
docker-machine ip vm-name => 192.168.99.xx
curl http://192.168.99.xx:32769
Not exactly an answer to your question, but I spent some time trying to figure out a similar thing in the context of "why is my docker container not connecting to elasticsearch on localhost:9200", and this was the first S.O. question that popped up, so I hope it helps some other googling person.
If you are linking containers together (e.g. docker run --rm --name web2 --link db:db training/webapp env)
... then Docker adds environment variables:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
... and also updates your /etc/hosts
# /etc/hosts
#...
172.17.0.9 db
so you can technically connect with ping db
https://docs.docker.com/v1.8/userguide/dockerlinks/
so for Elasticsearch it is
# /etc/hosts
# ...
172.17.0.28 elasticsearch f9db83d0dfb5 ecs-awseb-qa-3Pobblecom-env-f7yq6jhmpm-10-elasticsearch-fcbfe5e2b685d0984a00
so wget elasticsearch:9200 will work
