503 Service Temporarily Unavailable with gitlab docker and nginx-proxy docker - nginx

Description:
I've set up the nginx-proxy container, which works really well with one of my two Docker containers: a small Go web server on dev.MY_IP_ADDRESS.com.
I've set it up for my GitLab Docker container as well, which runs on MY_IP_ADDRESS.com:10080, but it doesn't seem to work on gitlab.MY_IP_ADDRESS.com.
I've used the same configuration as with my web server, by adding an environment variable:
gitlab:
  # other configs here
  environment:
    - VIRTUAL_HOST=gitlab.MY_IP_ADDRESS.com
  # more configs here
The only difference is that I set up my Go server and nginx-proxy in the same docker-compose.yml, while the GitLab one uses a different docker-compose.yml file. Unsure if this has anything to do with it.
I've attempted to docker-compose up the files in different orders to see if that was the issue.
Error:
This is what I get when I go on gitlab.MY_IP_ADDRESS.com:
503 Service Temporarily Unavailable
nginx/1.11.8
Question:
Why isn't the reverse proxy for gitlab.MY_IP_ADDRESS.com working for GitLab? Is there a conflict somewhere? It works fine on MY_IP_ADDRESS.com:10080.
If any logs or more information are needed, let me know. Thanks.

I completely forgot about this question; I actually found a solution which worked for me:
The problem is that your docker-gen is not able to find your GitLab container and therefore does not generate the Nginx configuration for gitlab.MY_IP_ADDRESS.com.
To solve this you have three options:
1.) If you are using the solution with separate containers and launch the docker-gen container with the -only-exposed flag, this might prevent it from finding GitLab. This was the issue in my case, which is why I am mentioning it.
2.) In your case it will probably be because your GitLab container and your Nginx container do not share a common Docker network. Create one with docker network create nginx-proxy and add all your containers to it (see the sketch after this list).
3.) Another solution proposed in this issue is to add the line network_mode: bridge to your GitLab container. I did not test this myself.
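For option 2, a minimal sketch of what the GitLab side could look like, assuming you have created an external network with docker network create nginx-proxy and attached the nginx-proxy container to it as well (service and network names are assumptions):

version: '2'
services:
  gitlab:
    # other configs here
    environment:
      - VIRTUAL_HOST=gitlab.MY_IP_ADDRESS.com
    networks:
      - nginx-proxy

networks:
  nginx-proxy:
    external: true

The nginx-proxy/docker-gen container needs to be attached to the same nginx-proxy network in its own docker-compose.yml, otherwise it will still not see the GitLab container.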

Related

How can I tell if a docker container is on localhost or 192.168.99.100

Is it possible in docker-compose, or through some environment variable, to figure out the hostname that Docker is linking to containers? The WordPress home and siteurl are set to localhost:8000, which works fine on Docker for Mac, but when used on Docker Toolbox for Windows the site is hosted on 192.168.99.100:8000, which then redirects back to localhost and fails. Is it possible to determine whether the host is localhost or 192.168.99.100 in docker-compose?
I wound up putting together a simple bash script in the root folder that asked for input on which host the user wanted to use (localhost vs 192.168.99.100) and piped the answer to a .env file (https://docs.docker.com/compose/env-file/#syntax-rules). I then passed that environment variable to the relevant containers in docker-compose.yml. Maybe not the most elegant solution, but it worked in a pinch.
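A minimal sketch of that approach; the DOCKER_HOST_IP variable and the SITE_URL setting are made up for illustration, and docker-compose substitutes ${...} values from the .env file:

#!/usr/bin/env bash
# ask which host to use and write it to .env for docker-compose
read -p "Which host are you using (localhost / 192.168.99.100)? " answer
echo "DOCKER_HOST_IP=${answer}" > .env

docker-compose.yml then references the variable, for example:

wordpress:
  environment:
    - SITE_URL=http://${DOCKER_HOST_IP}:8000   # SITE_URL is a placeholder name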

Let's encrypt 502 bad gateway docker

I tried to set an nginx proxy with let's encrypt, all dockerized, by following this tutorial :
http://www.automationlogic.com/using-lets-encrypt-and-docker-for-automatic-ssl/
The problem is that my application exposes port 1337 instead of 80, and I can't change this for now.
Does anyone know how I could tell nginx to listen on the app container's port 1337?
After looking at that tutorial and the available source code, the nginx configuration files use a placeholder _APPLICATION_PORT_ which gets replaced with the nginx Docker container's environment variable $APP_PORT_80_TCP_PORT in its start.sh script. It appears that specific environment variable would need to be added to the docker-compose.yml file:
nginx:
  environment:
    - APP_PORT_80_TCP_PORT=1337
You would also need to make sure that the docker-compose.yml has the correct port for your application (if docker-compose is launching your application container) so Docker exposes the correct port, for example:
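A sketch; "app" stands for whatever your application service is called:

app:
  ports:
    - '1337:1337'
  # or, if only the nginx container needs to reach it:
  # expose:
  #   - '1337'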
Hope that helps

Running Jenkins in a Docker Container

I'm trying to get some hands-on experience with Jenkins and wanted to run it in a Docker container. I was following the tutorial here. I have Docker installed on my machine, and using Kitematic I launched the official Jenkins Docker image (tag: latest) using:
docker run -p 8080:8080 jenkins
However, once the container is set up and I go to 192.168.99.100:8080 (192.168.99.100 is my docker-machine IP), it shows the default nginx page. 192.168.99.100:8080/jenkins shows
HTTP ERROR 404
Problem accessing /jenkins. Reason:
Not Found
The weird part is that Kitematic shows a web preview of the running container with Jenkins up and running fine, so how do I access it via the browser?
EDIT: Just tried docker run -p 8082:8080 jenkins and it works, i.e. I can see the Jenkins landing page. Whaaaa..?
Check whether port 8080 is already taken by another application. Docker can't allocate the port because it's taken, which is why you can't reach Jenkins. Try looking here: https://www.cyberciti.biz/tips/linux-display-open-ports-owner.html
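For example, on the Docker host (or inside the docker-machine VM) you could check with one of the following; which tool is available depends on your system:

sudo ss -tlnp | grep ':8080'        # newer systems
sudo netstat -tulpn | grep ':8080'  # older systems
docker ps                           # shows which containers already publish 8080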

Docker Container to Host Routing

I need a better, up-to-date solution to the following problem:
Problem: I have to manually create an iptables rule in order to allow a route from a dynamically created Docker bridge to the host. Otherwise container A cannot connect to container B, because by default there is no route from a Docker network to the Docker host itself.
I have the following setup:
container-nginx (docker)
|
|-container-jira (docker) (https://jira.example.com)
|-container-confluence (docker) (https://confluence.example.com)
In order to have properly functioning Atlassian application links between Jira and Confluence:
Jira accesses Confluence over https://confluence.example.com
Confluence accesses Jira over https://jira.example.com
I use docker-compose for the whole setup and all containers are inside the same network. By default this will not work: I get "no route to host" in both containers for the hosts confluence.example.com and jira.example.com, because containers inside the Docker network have no route to the Docker host itself.
Currently, each time the setup is initialized I manually create an iptables rule from the dynamically created Docker bridge with id "br-wejfiweji" to the host.
This is cumbersome. Is there a "new way" or "better way" to do this in Docker 1.11.x?
docker-compose version 2 does create a network which allows all containers to see each other. See "Networking in Compose" (since Docker 1.10).
If your containers are created with the right hostnames, that is jira.example.com and confluence.example.com (see the docker-compose.yml hostname directive), nginx can proxy_pass directly to jira.example.com and confluence.example.com.
Those two hostnames will resolve to the right IP address within the network created by docker-compose for those 3 (nginx, jira and confluence) containers.
I suggested in the comments to use an alias so that jira sees confluence as nginx (nginx being aliased to confluence.example.com), in order for jira to always go through nginx when accessing confluence:
version: '2'
services:
  # HTTPS-ReverseProxy
  nginx:
    image: blacklabelops/nginx
    container_name: nginx
    networks:
      default:
        aliases:
          - 'crucible.example.com'
          - 'confluence.example.com'
          - 'crowd.example.com'
          - 'bitbucket.example.com'
          - 'jira.example.com'
    ports:
      - '443:443'
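The proxied services then just need the matching hostname directive in the same docker-compose.yml. A sketch, with image names assumed (use whatever images you already run):

  jira:
    image: blacklabelops/jira            # assumed image
    container_name: jira
    hostname: jira.example.com
  confluence:
    image: blacklabelops/confluence      # assumed image
    container_name: confluence
    hostname: confluence.example.com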

Restarting Containers When Using Docker and Nginx proxy_pass

I have an nginx Docker container and a webapp container successfully running and talking to each other.
The nginx container listens on port 80, and uses proxy_pass to direct traffic to the IP of the webapp container.
upstream app_humansio {
    server humansio:8080 max_fails=3 fail_timeout=30s;
}
"humansio" is set in the /etc/hosts file by docker because I've started nginx with --link humansio:humansio. The webapp container (humansio) is always exposing 8080.
The problem is, when I reload the webapp container, the link to the nginx container breaks and I need to restart that as well. Is there any way I can do this differently so I don't need to restart the nginx container when the webapp container reloads?
--
I've tried to connect them manually by using a common port (8001 on both), but since they actually reserve that port, the second container cannot use it as well.
Thanks!
I prefer to run the proxy (nginx or haproxy) directly on the host for this reason.
But an option is to "Link via an Ambassador Container" https://docs.docker.com/articles/ambassador_pattern_linking/
https://www.digitalocean.com/community/tutorials/how-to-use-the-ambassador-pattern-to-dynamically-configure-services-on-coreos
If you don't want to restart your proxy container whenever you have to restart one of the proxied ones (e.g. fig), you could take a look at the autoupdated proxy configuration approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
If you use a modern version of Docker, the links in the nginx container to your web service probably do get updated (you can check with docker exec -ti nginx bash, then cat /etc/hosts). The problem is that nginx doesn't read /etc/hosts every time - it caches the IP, and when the IP changes, nginx gets lost. docker kill -s HUP nginx, which makes nginx reload its config without a restart, helps too.
I have the same problem. I used to start my services with systemd unit files, and when you make one service (nginx) dependent on another (webapp) and then restart the webapp, systemd is smart enough to restart nginx as well. Now I'm trying my luck with docker-compose, and restarting the webapp container confuses nginx.
