Dynamic proxy_pass in nginx to another pod in Kubernetes - nginx

I'm trying to create an nginx proxy that forwards requests for /<service> to http://<service>. I first tried the following:
location ~ ^/(.+)$ {
    set $backend "http://$1:80";
    proxy_pass $backend;
}
But it fails saying something like (when calling /myservice):
[error] 7741#0: *1 no resolver defined to resolve http://myservice
Since myservice is not externally resolvable, I've installed go-dnsmasq as a sidecar in the same pod and I'm trying to use it for DNS resolution (like I've seen in this example), changing my nginx config to look like this:
location ~ ^/(.+)$ {
    resolver 127.0.0.1:53;
    set $backend "http://$1:80";
    proxy_pass $backend;
}
But now nginx fails with:
[error] 9#9: *734 myservice could not be resolved (2: Server failure), client: 127.0.0.1, server: nginx-proxy, request: "GET /myservice HTTP/1.1", host: "localhost:8080"
127.0.0.1 - xxx [30/May/2016:10:34:23 +0000] "GET /myservice HTTP/1.1" 502 173 "-" "curl/7.38.0" "-"
My Kubernetes pod looks like this:
spec:
  containers:
    - name: nginx
      image: "nginx:1.10.0"
      ports:
        - containerPort: 8080
          name: "external"
          protocol: "TCP"
    - name: dnsmasq
      image: "janeczku/go-dnsmasq:release-1.0.5"
      args:
        - --listen
        - "0.0.0.0:53"
Running netstat -ntlp in the dnsmasq container gives me:
Proto  Recv-Q  Send-Q  Local Address   Foreign Address   State    PID/Program name
tcp         0       0  0.0.0.0:8080    0.0.0.0:*         LISTEN   -
tcp         0       0  :::53           :::*              LISTEN   1/go-dnsmasq
And running nmap --min-parallelism 100 -sT -sU localhost in the nginx container:
Starting Nmap 6.47 ( http://nmap.org ) at 2016-05-30 10:33 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 1997 closed ports
PORT STATE SERVICE
53/tcp open domain
8080/tcp open http-proxy
53/udp open domain
So it seems that dnsmasq and nginx are indeed up and running? What could I be doing wrong?

After much research and trial and error I managed to solve this. First I changed the pod specification to:
spec:
  containers:
    - name: nginx
      image: "nginx:1.10.0"
      ports:
        - containerPort: 8080
          name: "external"
          protocol: "TCP"
    - name: dnsmasq
      image: "janeczku/go-dnsmasq:release-1.0.5"
      args:
        - --listen
        - "127.0.0.1:53"
        - --default-resolver
        - --append-search-domains
        - --hostsfile=/etc/hosts
        - --verbose
Then I also had to disable IPv6 for the resolver in nginx:
location ~ ^/(.+)$ {
    resolver 127.0.0.1:53 ipv6=off;
    set $backend "http://$1:80";
    proxy_pass $backend;
}
Then it works as expected!
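Putting the pieces together, the relevant part of the nginx config ends up roughly like this (the listen port is taken from the pod spec's containerPort above; treat this as a sketch):
server {
    listen 8080;

    location ~ ^/(.+)$ {
        # the go-dnsmasq sidecar listening on 127.0.0.1:53
        resolver 127.0.0.1:53 ipv6=off;
        set $backend "http://$1:80";
        proxy_pass $backend;
    }
}
With the sidecar's --append-search-domains flag in place, a request like curl localhost:8080/myservice should then be forwarded to the myservice service on port 80.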

I resolved this by running CoreDNS in Docker.
My nginx and CoreDNS are both deployed on the host.
Step 1: configure the Corefile
In the Corefile you may need to change the k8s master config; refer to https://coredns.io/plugins/kubernetes/
sudo mkdir /etc/coredns; sudo tee /etc/coredns/Corefile <<-'EOF'
.:53 {
    log
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        endpoint http://172.31.88.71:8080
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
EOF
Step 2: configure the Docker container and start it
tee coreos.sh <<-'EOF'
docker run --restart=always -idt --name coredns \
-v /etc/coredns/Corefile:/etc/coredns/Corefile \
-v /home/ec2-user/.kube/config:/etc/coredns/kubeconfig \
-p 53:53/udp \
coredns/coredns:1.6.9 \
-conf /etc/coredns/Corefile
EOF
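Before wiring nginx to it, you can check that the CoreDNS container answers for cluster names; a quick sanity check (the service name and namespace here are just placeholders):
dig @127.0.0.1 -p 53 myservice.default.svc.cluster.local +short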
Step 3: configure nginx and reload
resolver 127.0.0.1 valid=60s ipv6=off;
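With that resolver in place, a location block like the one in the question can resolve service names at request time; a minimal sketch (the namespace suffix and the port 80 are assumptions):
location ~ ^/(.+)$ {
    resolver 127.0.0.1 valid=60s ipv6=off;
    # resolve the captured path segment as a cluster service name
    set $backend "http://$1.default.svc.cluster.local:80";
    proxy_pass $backend;
}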

Related

Kubernetes Ingress running behind nginx reverse proxy

I have installed minikube on a server which I can access from the internet.
I have created a kubernetes service which is available:
>kubectl get service myservice
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
myservice   10.0.0.246   <nodes>       80:31988/TCP   14h
The IP address of minikube is:
>minikube ip
192.168.42.135
I would like the URL http://myservice.myhost.com (i.e. port 80) to map to the service in minikube.
I have nginx running on the host (totally unrelated to kubernetes). I can set up a virtual host, mapping the URL to 192.168.42.135:31988 (the node port) and it works fine.
I would like to use an ingress. I've added and enabled ingress. But I am unsure of:
a) what the yaml file should contain
b) how incoming traffic on port 80, from the browser, gets redirected to the ingress and minikube.
c) do I still need to use nginx as a reverse proxy?
d) if so, what address is the ingress-nginx running on (so that I can map traffic to it)?
Setup
First of all, you need an nginx ingress controller.
The nginx instance(s) will listen on host ports 80 and 443, and route every HTTP request to the services defined by the Ingress configuration, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    # by default the controller redirects (301) HTTP to HTTPS,
    # the following would make it disabled.
    # ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: myservice
              servicePort: 80
Use https://{host-ip}/ to visit myservice. The host should be the one where the nginx controller is running.
Outside
Normally you don't need another nginx outside the Kubernetes cluster.
Minikube is a little different, though: it runs Kubernetes in a virtual machine instead of directly on the host.
We need some port forwarding like host:80 => minikube:80, and running a reverse proxy (like nginx) on the host is an elegant way to do it.
It can also be done by setting up a virtual network port forward in VirtualBox.
As stated by @silverfox, you need an ingress controller. You can enable the ingress controller in minikube like this:
minikube addons enable ingress
Minikube runs on IP 192.168.42.135, according to minikube ip, and after enabling the ingress addon it listens on port 80 too. But that means a reverse proxy like nginx is required on the host to proxy calls on port 80 through to minikube.
After enabling ingress on minikube, I created an ingress file (myservice-ingress.yaml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myservice.myhost.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myservice
              servicePort: 80
Note that this is different from the answer given by @silverfox because it must contain the "host" field, which the incoming request's Host header should match.
Using this file, I created the ingress:
kubectl create -f myservice-ingress.yaml
Finally, I added a virtual host to nginx (running outside of minikube) to proxy traffic from outside into minikube:
server {
    listen 80;
    server_name myservice.myhost.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://192.168.42.135;
    }
}
The Host header must be passed through because the ingress uses it to match the service. If it is not passed through, minikube cannot match the request to the service.
Remember to restart nginx after adding the virtual host above.
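To verify the routing before DNS points at the host, you can hit the minikube IP directly with an explicit Host header; for example:
curl -H "Host: myservice.myhost.com" http://192.168.42.135/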
Use iptables to forward the host's ports to the minikube IP's ports:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo vim /etc/sysctl.conf
# change net.ipv4.ip_forward = 1
# enable iptables NAT:
sudo /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# forward the host's ports 30000-32767 to the minikube IP's ports 30000-32767
sudo iptables -t nat -I PREROUTING -p tcp -d <host ip> --dport 30000:32767 -j DNAT --to <minikube ip>:30000-32767

Unable to load balance using Docker, Consul and nginx

What I want to achieve is load balancing using this stack: Docker, Docker Compose, Registrator, Consul, Consul Template, NGINX and, finally, a tiny service that prints out "Hello World" in the browser. At this moment I have a docker-compose.yml file. It looks like this:
version: '2'
services:
  accent:
    build:
      context: ./accent
    image: accent
    container_name: accent
    restart: always
    ports:
      - 80
  consul:
    image: gliderlabs/consul-server:latest
    container_name: consul
    hostname: ${MYHOST}
    restart: always
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:53/udp
    command: -advertise ${MYHOST} -data-dir /tmp/consul -bootstrap -client 0.0.0.0
  registrator:
    image: gliderlabs/registrator:latest
    container_name: registrator
    hostname: ${MYHOST}
    network_mode: host
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: -ip ${MYHOST} consul://${MYHOST}:8500
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    volumes:
      - /etc/nginx
    ports:
      - 8181:80
  consul-template:
    container_name: consul-template
    build:
      context: ./consul-template
    network_mode: host
    restart: always
    volumes_from:
      - nginx
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: -consul=${MYHOST}:8500 -wait=5s -template="/etc/ctmpl/nginx.ctmpl:/etc/nginx/nginx.conf:docker kill -s HUP nginx"
The first service, accent, is the web service I need to load balance. When I run this command:
$ docker-compose up
I see that all services start to run and I see no error messages. It looks as if everything is just perfect. When I run
$ docker ps
I see this in the console:
... NAMES STATUS PORTS
consul-template Up 45 seconds
consul Up 56 seconds 0.0.0.0:8300->8300/tcp, 0.0.0.0:8400->8400/tcp, 8301-8302/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, 8600/tcp, 8600/udp, 0.0.0.0:8600->53/udp
nginx Up 41 seconds 0.0.0.0:8181->80/tcp
registrator Up 56 seconds
accent Up 56 seconds 0.0.0.0:32792->80/tcp
Please pay attention to the last row, especially the PORTS column. As you can see, this service publishes port 32792. To check that my web service is reachable I go to 127.0.0.1:32972 on my host machine (the machine where I run docker-compose up) and see this in the browser:
Hello World
This is exactly what I wanted to see, but it is not what I ultimately want. Have a look at the output of the docker ps command and you will see that my nginx service publishes port 8181. So my expectation is that when I go to 127.0.0.1:8181 I will see exactly the same "Hello World" page. However, I don't. In the browser I see a Bad Gateway error, and in the nginx logs I see this error message:
nginx | 2017/01/18 06:16:45 [error] 5#5: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:32792/index.php", host: "127.0.0.1:8181"
It is really interesting, because nginx does what I expect it to do: it proxies to "http://127.0.0.1:32792/index.php". But I'm not sure why it fails. By the way, this is what nginx.conf (created automatically by Consul Template) looks like:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;

    upstream app_servers {
        server 127.0.0.1:32792;
    }

    server {
        listen 80;
        root /code;
        index index.php index.html;

        location / {
            try_files $uri/ $uri/ /index.php;
        }

        location ~ \.php$ {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location ~ /\.ht {
            deny all;
        }
    }
}
I wouldn't change anything, since this nginx.conf looks good to me. Trying to understand why it does not work, I shelled into the nginx container and ran a couple of commands:
$ curl accent
Hello World
$ curl 127.0.0.1:32972
curl: (7) Failed to connect to 127.0.0.1 port 32972: Connection refused
$ curl accent:32972
curl: (7) Failed to connect to accent port 32972: Connection refused
Again, it is interesting: the nginx container sees my web service on port 80 and not on its published port 32972. Anyway, at this stage I do not know why it does not work or how to fix it. My guess is that it is somehow connected to the way the network is configured in docker-compose.yml. I tried various combinations of network_mode: host on the accent and nginx services, but to no avail: either accent stops working, or nginx, or both. So I need some help.
When you do port binding, Docker publishes a port from the container (80 in accent, for example) onto some port on your host (the random 32792 in this case). Containers on the same network as your accent container can reach its port 80 as accent (the same as accent:80), thanks to docker-compose service name resolution. From your host you can reach accent's port 80 via 127.0.0.1:32792. But when you request 127.0.0.1:32792 from inside your nginx container, you only reach the nginx container's own port 32792, not accent. accent:32792 is not a correct URL either (port 80 is open on accent, 32792 on the host). 127.0.0.1:32792 would work if you added the nginx container to the host network. I also noticed that you use the wrong port in your curl calls: accent's port 80 is published to the host as 32792, but you request 32972.
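In other words, if the nginx container stays on the compose network, the generated upstream could target the service name and container port directly; a minimal sketch (changing the consul-template output this way is an assumption, not part of the original setup):
upstream app_servers {
    # container port 80, resolved via compose service-name DNS
    server accent:80;
}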

nginx docker container: 502 bad gateway response

I have a service listening on port 8080. It is not a container.
Then, I've created a nginx container using official image:
docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d -p 443:443 -p 80:80 nginx
After that:
# netstat -tupln | grep 443
tcp6       0      0  :::443    :::*    LISTEN   3482/docker-proxy
# netstat -tupln | grep 80
tcp6       0      0  :::80     :::*    LISTEN   3489/docker-proxy
tcp6       0      0  :::8080   :::*    LISTEN   1009/java
Nginx configuration is:
upstream eighty {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name eighty.domain.com;

    location / {
        proxy_pass http://eighty;
    }
}
I've checked that I'm able to connect to this server with # curl http://127.0.0.1:8080
<html><head><meta http-equiv='refresh'
content='1;url=/login?from=%2F'/><script>window.location.replace('/login?from=%2F');</script></head><body
style='background-color:white; color:white;'>
...
It seems to be running well; however, when I try to access it from my browser, nginx returns a 502 Bad Gateway response.
I'm guessing it could be a visibility problem between a port opened by a non-containerized process and a container. Can a container establish a connection to a port opened by another, non-containerized process?
EDIT
Logs where upstream { server 127.0.0.1:8080; }:
2016/07/13 09:06:53 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:06:53 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
Logs where upstream { server 0.0.0.0:8080; }:
62.57.217.25 - - [13/Jul/2016:09:00:30 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-" 2016/07/13 09:00:30 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client:
62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com" 2016/07/13 09:00:32 [error] 5#5: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:00:32 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
Any ideas?
The Problem
Localhost is a bit tricky when it comes to containers. Within a docker container, localhost points to the container itself.
This means, with an upstream like this:
upstream foo {
    server 127.0.0.1:8080;
}
or
upstream foo {
    server 0.0.0.0:8080;
}
you are telling nginx to pass your request to the local host.
But in the context of a Docker container, localhost (and the corresponding IP addresses) points to the container itself:
by addressing 127.0.0.1 you will never reach your host machine if your container is not on the host network.
Solutions
Host Networking
You can choose to run nginx on the same network as your host:
docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d --net=host nginx
Note that you do not need to expose any ports in this case.
This works, though you lose the benefit of Docker networking. If you have multiple containers that should communicate through the Docker network, this approach can be a problem. If you just want to deploy nginx with Docker and do not need any advanced Docker network features, this approach is fine.
Access the host's IP address
Another approach is to reconfigure your nginx upstream directive to connect directly to your host machine by adding its IP address:
upstream foo {
    # insert your host's IP here
    server 192.168.99.100:8080;
}
The container will now go through the network stack and resolve your host correctly.
You can also use your DNS name if you have one. Make sure Docker knows about your DNS server.
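If Docker doesn't pick the DNS server up automatically, you can pass it explicitly when starting the container; a sketch (the 192.168.1.1 address is a placeholder for your DNS server):
docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d \
  --dns 192.168.1.1 -p 443:443 -p 80:80 nginx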
For me, this line helped: proxy_set_header Host $http_host;
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
        proxy_pass http://myserver;
    }
}
Just to complete the other answers: I'm using a Mac for development, and using host.docker.internal directly in the upstream worked for me, with no need to pass the host's IP address. Here is the config of the nginx proxy:
events { worker_connections 1024; }

http {
    upstream app1 {
        server host.docker.internal:81;
    }
    upstream app2 {
        server host.docker.internal:82;
    }

    server {
        listen 80;
        server_name app1.com;
        location / {
            proxy_pass http://app1;
        }
    }

    server {
        listen 80;
        server_name app2.com;
        location / {
            proxy_pass http://app2;
        }
    }
}
As you can see, I used different ports for the different apps behind the nginx proxy: port 81 for app1 and port 82 for app2, and both app1 and app2 have their own nginx containers:
For app1:
docker run --name nginx-app1 -d -p 81:80 nginx
For app2:
docker run --name nginx-app2 -d -p 82:80 nginx
Also, please refer to this link for more details:
docker doc for mac
What you can do is configure proxy_pass so that, from the container's perspective, the address points to your real host.
To get the host's address from the container's perspective, you can do the following on Windows with Docker 18.03 (or more recent):
Run a shell in the container from the host, where the image name is nginx (works on the Alpine Linux distribution):
docker run -it nginx /bin/ash
Then run inside container
/ # nslookup host.docker.internal
Name: host.docker.internal
Address 1: 192.168.65.2
192.168.65.2 is the host's IP, not the bridge IP as in spinus' accepted answer.
Here I am using host.docker.internal:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
Then you can change nginx config to:
proxy_pass http://192.168.65.2:{your_app_port};
and it should work fine.
Remember to provide the same port as your local application runs with.
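Alternatively, a sketch using the DNS name from the quote above instead of the hard-coded IP (the 8080 port is an assumption; use whatever port your local app listens on):
location / {
    proxy_pass http://host.docker.internal:8080;
}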
# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

location / {
    uwsgi_pass django;
    include /path/to/your/mysite/uwsgi_params; # the uwsgi_params file you installed
}
complete reference: https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
nginx.sh
ip=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1' | head -n 1)
docker run --name nginx --add-host="host:${ip}" -p 80:80 -d nginx
nginx.conf
location / {
...
proxy_pass http://host:8080/;
}
It works for me.
I had this issue and it turned out to be a problem with the Docker container not starting up due to a permissions issue.
In my case running
docker-compose ps
showed that the container had not started and had exited with status 1. It turns out the permissions had been lost in migrating to a new machine. Adjusting the permissions to a known staff user on the parent directory fixed the problem for me, and I was then able to start the docker service, whereas previously I was getting:
nginx_1_c18a7f6f7d6d | chown: /var/www/html: Operation not permitted
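For reference, the fix was along these lines, run on the host before bringing the stack back up (the path and user/group are hypothetical, for illustration only):
# hypothetical: hand the bind-mounted web root back to a user/group Docker can map
sudo chown -R staffuser:staff /path/to/project/html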

Dockerized Nginx upstream error serving separate Docker container with Flask/uWSGI app

I am experiencing the following error with my multi-container Docker setup after running docker-compose build && docker-compose up and attempting to hit my index page:
[error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.99.1, server: localhostz, request: "GET / HTTP/1.1", upstream: "uwsgi://172.17.0.39:8000", host: "192.168.99.100"
Here is my docker-compose.yml:
web:
  restart: always
  build: ./web-app
  expose:
    - "8000"
  command: /usr/local/bin/uwsgi --ini sample-uwsgi.ini
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  links:
    - web:web
nginx/Dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
ADD sample-nginx.conf /etc/nginx/conf.d/
nginx/sample-nginx.conf
upstream flask {
    server web:8000;
}

server {
    listen 80;
    server_name localhostz;
    charset utf-8;
    client_max_body_size 75M;

    location / {
        uwsgi_pass flask;
        include uwsgi_params;
    }
}
web-app/Dockerfile
FROM ansible/ubuntu14.04-ansible:stable
WORKDIR /root
ADD application.py application.py
ADD requirements.txt requirements.txt
ADD sample-uwsgi.ini sample-uwsgi.ini
ADD ansible /srv/ansible
WORKDIR /srv/ansible
RUN ansible-playbook container-bootstrap.yml -c local
web-app/sample-uwsgi.ini
[uwsgi]
module = application
callable = app
master = true
processes = 5
socket = web:8000
chown-socket = www-data:www-data
vacuum = true
enable-threads=True
die-on-term = true
Please do not post suggestions regarding a single-container setup. I am doing this as an exercise in being able to scale Docker app containers served behind a single nginx container.
The secret sauce was changing the socket line in sample-uwsgi.ini to:
socket = 0.0.0.0:8000
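So the relevant part of sample-uwsgi.ini becomes (only the socket line differs from the file shown above):
[uwsgi]
module = application
callable = app
# bind on all interfaces inside the web container so the linked nginx container can reach port 8000
socket = 0.0.0.0:8000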

curl Failed to connect to localhost port 80

My hosts file maps 127.0.0.1 to localhost.
$ curl -I 'localhost'
curl: (7) Failed to connect to localhost port 80: Connection refused
And then
$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.2.4
Date: Wed, 09 Apr 2014 04:20:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Oct 2012 21:48:34 GMT
Connection: keep-alive
Accept-Ranges: bytes
In my hosts file I have
127.0.0.1 localhost
It appears that the curl command fails to recognize entries in /etc/hosts. Can someone explain why?
Update: I've yet to try this, but I've discovered you can configure nginx to respond to both IPv4 and IPv6.
Since you have a ::1 localhost line in your hosts file, it would seem that curl is attempting to use IPv6 to contact your local web server.
Since the web server is not listening on IPv6, the connection fails.
You could try to use the --ipv4 option to curl, which should force an IPv4 connection when both are available.
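For example, forcing IPv4 explicitly:
curl --ipv4 -I localhost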
If anyone else comes across this and the accepted answer doesn't work (it didn't for me), check to see if you need to specify a port other than 80. In my case, I was running a rails server at localhost:3000 and was just using curl http://localhost, which was hitting port 80.
Changing the command to curl http://localhost:3000 is what worked in my case.
In my case, the file ~/.curlrc had a wrong proxy configured.
I also had a problem with a refused connection on port 80. I didn't use localhost.
curl --data-binary "@/textfile.txt" "http://www.myserver.com/123.php"
Problem was that I had umlauts äåö in my textfile.txt.
I've encountered the same error before on my load balancer server. I am working with two servers and a load balancer; on my two servers I have nginx running, but on the load balancer I have HAProxy. I discovered that nginx had accidentally been installed on the LB. Solution:
sudo rm /etc/nginx/nginx.conf
sudo rm -rf /etc/nginx/
sudo apt purge nginx
sudo service haproxy restart
Then run the command again: curl -I localhost
In my case the problem was resolved when I added resolver, events, root and upstream to nginx.conf (maybe it will be useful for somebody). My nginx.conf (after I fixed the error):
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream back-stream {
        server back:8080;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name test.com www.test.com;

        location / {
            root /usr/share/nginx/html;
            resolver 127.0.0.11;
            proxy_pass http://back-stream;
        }
    }
}
My docker-compose file:
version: '3.9'
services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - network
  back:
    image: "mycustomimage"
    container_name: back
    restart: unless-stopped
    ports:
      - '81:8080'
    networks:
      - network
networks:
  network:
    driver: bridge
Sometimes you need to make sure the web server is running. In my case, I had forgotten to start the nginx web server after I had stopped it.
sudo service nginx status
To start it if it's off:
sudo service nginx start
Note: Replace nginx with apache2 if that is what you are using.
Then check again:
curl localhost
