I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom" IP address, so I had to update my local /etc/hosts file to access it. From my local machine I am able to access the backend API without a problem.
But the problem is that Docker somehow cannot resolve this "custom" IP, even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev (see the quick check below)
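To double-check that the --add-host mapping is visible inside a container, you can run a throwaway container with the same flag and print its hosts file; busybox is used here purely for illustration:
docker run --rm --add-host="my-server-address.com:123.45.123.45" busybox cat /etc/hosts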
Docker Cloud workflow
Add the extra_hosts directive to your Stackfile, like this, and then click Redeploy in Docker Cloud so that the changes take effect:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
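For context, the directive sits under a service definition; a sketch of a fuller Stackfile entry might look like this (the service name is just a placeholder taken from the image name, and docker-compose files accept the same extra_hosts form):
media-saturn:
  image: 'media-saturn:dev'
  ports:
    - '80:80'
  extra_hosts:
    - 'my-server-address.com:123.45.123.45'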
Optimization tip
Ignore as many folders as possible to speed up sending the build context to the Docker daemon:
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_components and tmp (see the example below)
in my case tmp contained about 1.3 GB of small files, so ignoring it sped up the build significantly
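For example, a minimal .dockerignore for a typical frontend project might look like this (the folder names are the usual suspects and may differ in your project):
node_modules
bower_components
tmp
.git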
Related
I'm trying to follow a tutorial on creating an Apache Airflow pipeline on a GCP VM instance (https://towardsdatascience.com/10-minutes-to-building-a-machine-learning-pipeline-with-apache-airflow-53cd09268977), but after building and running the Docker container I get a "502 Bad Gateway" error from Nginx 1.14 when I try to access the webserver using:
http://<VM external ip>/
I'm quite new to using GCP and can't figure out how to fix this.
Some online research suggested editing the NGINX configuration files to set:
keepalive_timeout 650;
keepalive_requests 10000;
But this hasn't changed anything.
The GCP instance is an n1-standard-8 with Ubuntu 18.04, and Cloud, HTTPS and HTTP access enabled.
The Nginx sites-enabled configuration is:
server {
    listen 80;
    location / {
        proxy_pass http://0.0.0.0:8080/;
    }
}
Root Cause:
The issue you are experiencing has nothing to do with keepalives; it is rather simpler: the Docker container exits and isn't running, so when nginx tries to proxy your request into the container, it fails, hence the error. The failure is due to an incompatibility between Airflow and current versions of SQLAlchemy.
Verification:
Run this command to see the logs of the failed container:
sudo docker logs `sudo docker ps -a -f "ancestor=greenr-airflow" --format '{{.ID}}'`
and you will see that the Python code inside the container fails to import a package with the following error:
No module named 'sqlalchemy.ext.declarative.clsregistry'
Solution:
While I followed the tutorial to the letter, I'd recommend against running commands with sudo; you may want to deviate from the tutorial a wee bit in order not to.
Before running the
sudo docker build -t greenr-airflow:latest .
command, edit the Dockerfile and add the following two lines:
&& pip install SQLAlchemy==1.3.23 \
&& pip install Flask-SQLAlchemy==2.4.4 \
somewhere up in the list of packages that are being installed. I've added them after
&& pip install -U pip setuptools wheel \
which is line 54 at the time of writing.
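For orientation, the relevant stretch of the RUN chain in the tutorial's Dockerfile ends up looking roughly like this after the edit (the lines above and below are elided here):
    ...
    && pip install -U pip setuptools wheel \
    && pip install SQLAlchemy==1.3.23 \
    && pip install Flask-SQLAlchemy==2.4.4 \
    ...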
If you would like to re-use the same instance, delete and rebuild the images after making changes to the file:
sudo docker rmi greenr-airflow
sudo docker build -t greenr-airflow:latest .
I'm working on Read the Docs documentation where I use Docker. To customize it, I'd like to share the css folder between the container and the host, so that I don't have to build a new image every time I want to see the changes. The goal is that I can just refresh the browser and see the changes.
I tried something like this, but it doesn't work:
docker run -v ~/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
What is wrong in this command?
The path of the folder I'd like to share is:
Documents/my-documentation/docs/source/_static/css
Thanks for your help!
I'm guessing that the ~ does not resolve correctly. The tilde character ("~") refers to the home directory of your user; usually something like /home/your_username.
In your case, it sounds like your document isn't in this directory anyway.
Try:
docker run -v Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
I have no Mac to test with, but I suspect the command should be as below (Documents is a subfolder inside your home directory, denoted by ~):
docker run -v ~/Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
In your OP you mount the host folder ~/docs/source/_static/css, which does not make sense if your files are in Documents/my-documentation/docs/source/_static/css, as that would correspond to ~/Documents/my-documentation/docs/source/_static/css.
Keep in mind that Docker is still running inside a VM on Mac, so you will need to give a host path that is valid on that VM.
What you can do to get a better view of the situation is to start an interactive container where you mount the root file system of the host VM into /mnt/vm-root. That way you can see what paths are available to mount and how they should be formatted when you pass them to the docker run command with the -v flag:
docker run --rm -it -w /mnt/vm-root -v /:/mnt/vm-root ubuntu:latest bash
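From inside that container you can then look up the full path of your css folder as the VM sees it, for example with a find like this (a sketch; adjust the pattern if your layout differs):
find /mnt/vm-root -type d -path '*_static/css' 2>/dev/null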
I want nginx in a Docker container to host a simple static hello world html website. I want to simply start it with "docker run imagename". In order to do that I added the run parameters to the Dockerfile. The reason I want to do that is that I would like to host the application on Cloud Foundry in a next step. Unfortunately I get the following error when doing it like this.
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number.
CMD ["nginx -d -p 5000:5000"]
You add your dockerfile
FROM nginx:alpine
its already starts nginx.
after you build from your dockerfile
you should use this on
docker run -d -p 5000:5000 <your_image>
Edit:
If you want to map the container's port 80 to port 5000 on the machine:
docker run -d -p 5000:80 <your_image>
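Putting it together, a minimal sketch of the corrected Dockerfile, assuming nginx keeps listening on its default port 80 inside the container:
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80
Build and run it (the image name hello-nginx is just a placeholder):
docker build -t hello-nginx .
docker run -d -p 5000:80 hello-nginx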
I'm running an Ubuntu 14.04 instance with Docker installed, on OpenStack. I'm trying to mount a volume into a Docker container. I'm doing this with:
docker run -t -i -v /mnt/data/dir:/mnt/test ubuntu
where /mnt/data/dir is an NFS-shared directory. Doing this gets me:
docker:
Error response from daemon: Container command '/bin/bash' could not be invoked..
However, using a local directory instead of a mounted directory works exactly as expected.
I understand that Docker doesn't natively support an NFS-mounted file system; however, the errors I googled are usually not of the form I've mentioned above.
Any clue on how to proceed?
Edit: I forgot to mention that it's not just limited to /bin/bash not being invoked. I tried running a Tomcat server and that gave me the exact same error.
I looked at the different posts concerning GitLab, Docker, and SSH issues without any of them helping. So, I ask my question here.
I have the following setting:
a Linux box with Ubuntu Server 14.04 and IP 192.168.1.104
DNS: git.mydomain.com = 192.168.1.104
a GitLab Docker container that I start, according to the official doc, this way:
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 --volumes-from gitlab_data gitlab_image
or
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 -e "GITLAB_SHELL_SSH_PORT=2222" --volumes-from gitlab_data gitlab_image
The Linux box runs an nginx which redirects (proxy_pass) git.mydomain.com to 192.168.1.104:8080.
I access git.mydomain.com without any issue; everything works.
I generated an SSH key that I added to my profile on GitLab, and I added the following lines to my ~/.ssh/config:
Host git.mydomain.com
    User git
    Port 2222
    IdentityFile /home/user/.ssh/id_rsa
If I try
ssh -p 2222 git@git.mydomain.com
the connection is closed. I assume it is because only a git-shell is permitted.
But, if I try
mkdir test
cd test
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin git@git.domain.com:user/test.git
git push -u origin master
it gets stuck with
Connection closed by 192.168.1.104
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I also tried with
git remote add origin git@git.domain.com:2222/user/
and the result was the same.
Note that the logs of the GitLab Docker container include:
[2015-03-06T11:04:43+00:00] INFO: group[git] created
[2015-03-06T11:04:43+00:00] INFO: user[git] created
[2015-03-06T11:04:44+00:00] INFO: group[gitlab-www] created
[2015-03-06T11:04:44+00:00] INFO: user[gitlab-www] created
Any idea how I can fix this issue?
Thanks in advance for your help.
I would guess that you have an authentication problem.
Here are a few things you can try:
Make sure you added your public key in GitLab.
Check the permissions of your id_rsa file.
Try temporarily disabling host key verification with:
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
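If you go that route, the options belong in the same ~/.ssh/config entry as before; shown here for illustration only, since disabling host key checking is advisable only while testing:
Host git.mydomain.com
    User git
    Port 2222
    IdentityFile /home/user/.ssh/id_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null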
I have the same setup as you (docker container in VM, DNS points to VM). I also configured .ssh/config like you.
But when I log in with ssh I get:
ssh -p 2222 git@gitlab
PTY allocation request failed on channel 0
Welcome to GitLab, tomzo!
Connection to gitlab closed.
Git remotes do not need port 2222 configured. This is OK (works for me):
$ git remote -v
origin git@gitlab:lab/workflow.git (fetch)
origin git@gitlab:lab/workflow.git (push)
And I can push and pull with git.
$ git push
Everything up-to-date
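As a side note, if you prefer not to rely on ~/.ssh/config at all, a non-standard SSH port can also be given directly in the remote URL using the ssh:// form; the host and repository path below just follow the question's example:
git remote add origin ssh://git@git.mydomain.com:2222/user/test.git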