Nginx as Reverse Proxy for Docker VHosts - wordpress

I'm currently trying to build my own web server/service and wanted to set up things like this:
Wordpress for the main "blog"
Gitlab for my git repositories
Owncloud for my data storage
I've been using Docker to get a nice little GitLab running, which works perfectly fine, mapped to port :81 on my web server with my domain.
What annoys me a bit is that Docker containers are always bound to a specific port number and are thus not really easy to remember, so I'd love to do something like this:
git.mydomain.com for gitlab
mydomain.com (no subdomain) for my blog
owncloud.mydomain.com for owncloud
As far as I understand, I need a reverse proxy for this, and I decided to use nginx. So I set things up like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name mydomain.com;

        location / {
            proxy_pass http://localhost:84;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    server {
        listen 80;
        server_name git.mydomain.com;

        location / {
            proxy_pass http://localhost:81;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
This way, I have git.mydomain.com up and running flawlessly, but my WordPress just shows me a blank page. My DNS is set up like this:
Host   Type    MX   Destination
*      A            IP
#      A            IP
www    CNAME        #
Am I just too stupid, or what's going on here?

I know your question is more specifically about your Nginx proxy configuration, but I thought it would be useful to give you this link, which details how to set up an Nginx Docker container that automagically deploys configurations for reverse-proxying other Docker containers. In other words, you run the reverse proxy and then your other containers, and the Nginx container will route traffic to the others based on hostname.
Basically, you pull the proxy container and run it with a few parameters set in the docker run command, and then you bring up the other containers you want proxied. Once you've got Docker installed and have pulled the nginx-proxy image, here are the specific commands I use to start the proxy:
docker run -d --name="nginx-proxy" --restart="always" -p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
And now the proxy is running. You can verify by pointing a browser at your address, which should return an Nginx 502 or 503 error; you get the error because nothing is listening yet. Starting up the other containers is super easy, like this:
docker run -d --name="example.com" --restart="always" \
-e "VIRTUAL_HOST=example.com" w3b1x/mywebcontainer
That -e "VIRTUAL_HOST=example.com" is all it takes to get your Nginx proxy routing traffic to the container you're starting.
I've been using this particular method since I started with Docker and it's really handy for exactly this kind of situation. The article I linked gives you step-by-step instructions and all the information you'll need. If you need more information (specifically about implementing SSL in this setup), you can check out the git repository for this software.
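For the SSL part, a rough sketch of how the proxy would be started (the certificate path convention follows the nginx-proxy README as I recall it, so treat it as an assumption to verify):
docker run -d --name="nginx-proxy" --restart="always" \
-p 80:80 -p 443:443 \
-v /path/to/certs:/etc/nginx/certs:ro \
-v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
The proxy then picks up certificates named after the virtual host, e.g. example.com.crt and example.com.key, for containers started with VIRTUAL_HOST=example.com.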

Your nginx config looks sane; however, you are hitting localhost:xx, which is wrong. It should be either gateway_ip:xx or, better, target_private_ip:80.
An easy way to deal with this is to start your containers with --link and "inject" the IP via a shell script: keep the "original" nginx config with a placeholder instead of the IP, then sed -i it with the value from the environment, as sketched below.
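A minimal sketch of that injection, assuming a container named gitlab and a made-up GITLAB_IP_PLACEHOLDER token in the template:
# nginx.conf contains: proxy_pass http://GITLAB_IP_PLACEHOLDER:80;
GITLAB_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' gitlab)
sed -i "s/GITLAB_IP_PLACEHOLDER/$GITLAB_IP/" /etc/nginx/nginx.conf
nginx -s reload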

Related

Daphne + Supervisor inside Docker Container can't access my application

I'm trying to scale my Django app, which uses the Daphne server inside a Docker container with Supervisor, because Daphne has no workers. I read on the internet that it should be done that way, but I didn't find any explanation of the concept, and the documentation is very obscure.
I managed to run it all inside the container, and the logs are okay. At first I ran my app without supervisord in multiple containers, and it worked fine; that is, I hosted multiple instances of the same app in multiple containers for redundancy. Then I read that I could run multiple processes of my app using Supervisor inside one container. So I managed to run the app with supervisord and Daphne inside the container, and I get logs saying the app is running, but I can't access it from my browser as I could when I had only one Daphne process per container without supervisord.
UPDATE:
I can even curl my application inside the container with curl localhost:8000, but I can't curl it by the container's IP address, neither inside nor outside the container. That means it's not visible outside the container despite the container's port being exposed in the docker-compose file.
I'm getting
502 Bad Gateway
nginx/1.18.0
My supervisord config file looks like this:
[supervisord]
nodaemon=true
[supervisorctl]
[fcgi-program:asgi]
User=root
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/app
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command= /usr/local/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --endpoint fd:fileno=0 --access-log - --proxy-headers WBT.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/appuser/supervisor_log.log
redirect_stderr=true
I can't see why NGINX throws a 502 error.
This configuration worked until I introduced supervisor.
My Nginx is also inside its own docker container.
upstream django_daphne {
    hash $remote_addr consistent;
    server django_daphne_1:8000;
    server django_daphne_2:8000;
    server django_daphne_3:8000;
}

server {
    server_name xxx.yyy.zzz.khmm;
    listen 80;

    client_max_body_size 64M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        proxy_pass http://django_daphne;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /api/ {
        proxy_pass http://api_app:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Okay! I found out what the problem is.
Instead of
socket=tcp://localhost:8000
it has to be
socket=tcp://0.0.0.0:8000
so that it can be accessed outside of the container.
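To sanity-check the fix, something like this from the Docker host should now get a response (the container name django_daphne_1 is taken from the nginx upstream above; adjust to your compose service):
DAPHNE_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' django_daphne_1)
curl http://$DAPHNE_IP:8000/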

502 error bad gateway with nginx on a server behind corporate proxy

I'm trying to install a custom service on one of our corporate servers (the kind of server that is not connected to the internet; all traffic has to pass through a corporate proxy).
The proxy has been set up with the classic export http_proxy=blablabla in the .bashrc file, among other things.
Now the interesting part. I'm trying to configure nginx to redirect all traffic from the server_name to the local URL localhost:3000.
Here is the basic configuration I use. Nothing too tricky.
server {
    listen 443 ssl;

    ssl_certificate /crt.crt;
    ssl_certificate_key /key.key;

    server_name x.y.z;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
When I try to access the server_name from my browser, I get a 502 error (an nginx error, so the request does hit my server).
When I try to access the local URL from the server with curl --noproxy '*' https://localhost:3000, it works. (I have to pass the --noproxy '*' flag because of the export http_proxy=blablabla setup in the .bashrc file; if I don't, the localhost request is sent to our distant proxy, causing the request to fail.)
My guess is that this has to be related to the corporate proxy configuration, but I might be mistaken.
Do you have any insights you could share with me about this?
Thanks a lot!
PS: the issue is not related to any kind of SSL configuration; that part is working great.
PS2: I'm not a sysadmin, and all these issues are confusing.
PS3: the server I'm working on is RHEL 7.9.
It has nothing to do with the corporate proxy; I found my solution here:
https://stackoverflow.com/a/24830777/4991067
Thanks anyway.
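For anyone landing here with the same symptoms on RHEL: a common cause of a 502 when nginx proxies to a local port (and, I believe, what the linked answer covers, though verify for yourself) is SELinux denying nginx outbound network connections:
# check whether nginx may make outbound network connections
getsebool httpd_can_network_connect
# allow it persistently
sudo setsebool -P httpd_can_network_connect 1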

Not able to change NGINX port

The problem is that I can't access the server remotely unless I use port 80, and I want to use a different port.
Here's the NGINX configuration I'm using. This will work on port 80. However, if I change
listen 80;
to
listen 6000;
it does not work when accessed from outside the local machine.
In other words, curl 127.0.0.1:6000 on the machine works, but trying to visit 184.169.100.100:6000 externally does not (pretending that's my public IP address). It gives me a "site can't be reached" error in Chrome.
I've checked the security settings to make sure port 6000 is open. It's an AWS EC2 instance.
Optional note to put things in context: overall, what I'm trying to do is set up two different servers on one machine, each accessible from a different port and each running its own set of Python workers. As a first step, I just want to make sure I can change the port by which a server is accessed; I'm not even able to do that yet and still access it externally.
ubuntu@ip-172-31-9-113:/etc/nginx/conf.d$ cat flask.conf
upstream gunicorn_server {
    server localhost:8080 fail_timeout=0;
}

server {
    listen 80;
    server_name 184.169.100.100;

    root /home/ubuntu/www;
    client_max_body_size 4G;
    keepalive_timeout 5;
    proxy_read_timeout 900;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # pass to the upstream gunicorn server defined above
        proxy_pass http://gunicorn_server;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Any help is appreciated.
This happens because of the firewall.
On Ubuntu, if ufw is enabled:
sudo ufw allow 6000
On CentOS, disable SELinux (not recommended; better to look up the SELinux configuration for allowing a port).
Is port 6000 open?
Check your inbound port rules.
Then try sudo ufw allow 6000 and check whether you can access it on 6000 as well.
(This question is old, but I'm still answering because if someone is facing the same issue, this answer may help them.)
Try this command:
firewall-cmd --zone=public --add-port=6000/tcp --permanent
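Note that --permanent only writes the rule to the saved configuration; to apply it to the running firewall as well, reload it:
firewall-cmd --reload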
Answering my own question:
I tried using port 6001 instead of 6000 and then it worked; I could access it from outside. I couldn't say why at the time, but Chrome refuses to connect to port 6000 at all (it is on the browser's list of unsafe ports, being the traditional X11 port), which would explain the "site can't be reached" error.

Configuring a Nginx in front of my front and back end on Kubernetes

I have been having problems trying to deploy my web app on Kubernetes.
I wanted to mimic my old deployment, with nginx working as a reverse proxy in front of my back-end and front-end services.
I have 3 pieces in my system: nginx, front end, and back end. I built 3 deployments and 3 services, and exposed only my nginx service using nodePort: 30050.
Without further delay, this is my nginx.conf:
upstream my-server {
    server myserver:3000;
}

upstream my-front {
    server myfront:4200;
}

server {
    listen 80;
    server_name my-server.com;

    location /api/v1 {
        proxy_pass http://my-server;
    }

    location / {
        proxy_pass http://my-front;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
I installed curl and nslookup inside one of the pods and tried manual requests against cluster-internal endpoints... tears came to my eyes, everything was working... I am almost a developer worthy of the cloud.
Everything works smoothly... everything but nginx DNS resolution.
If I do kubectl exec -it my-nginx-pod -- /bin/bash and curl one of the other two services, e.g. curl myfront:4200, it works properly.
If I nslookup one of them, it works as well.
After this, I tried replacing the service names in nginx.conf with the pod IPs. After restarting the nginx service, everything worked.
Why doesn't nginx resolve the upstream names properly?
I am going nuts over this.
Nginx caches the resolved IPs. To force Nginx to resolve DNS, you can introduce a variable:
location /api/v1 {
    set $url "http://my-server";
    proxy_pass $url;
}
More details can be found in this related answer.
As it is likely caching in Nginx that you are describing, that would also explain why restarting (or reloading) Nginx fixes the problem, at least for a while, until the DNS entry changes again.
I don't think it is related to Kubernetes. I had the same problem a while ago when Nginx cached the DNS entries of AWS ELBs, which frequently change IPs.
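One caveat with the variable trick above: once proxy_pass takes a variable, Nginx resolves the name at request time and needs a resolver directive for that lookup. Inside a cluster, that is the kube-dns/CoreDNS Service IP; the address below is a common kubeadm default and purely an assumption, so check kubectl get svc -n kube-system on your cluster:
location /api/v1 {
    resolver 10.96.0.10 valid=10s;   # cluster DNS Service IP; verify for your cluster
    set $url "http://myserver:3000"; # the Service name directly, so the upstream block is bypassed
    proxy_pass $url;
}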

Installed gitlab, but only nginx welcome page shows

I installed GitLab using its installation guide. Everything was OK, but when I open localhost:80 in the browser, all I see is the message Welcome to nginx!. I can't find any log file with errors in it.
I am running Ubuntu in VirtualBox. My /etc/nginx/sites-enabled/gitlab config file reads:
# GITLAB
# Maintainer: @randx
# App Version: 3.0

upstream gitlab {
    server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen 192.168.1.1:80;         # e.g., listen 192.168.1.1:80;
    server_name aridev-VirtualBox; # e.g., server_name source.example.com;
    root /home/gitlab/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from the defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
The nginx documentation says:
Server names are defined using the server_name directive and determine which server block is used for a given request.
That means in your case that you have to enter aridev-VirtualBox in your browser instead of localhost.
To get this working, add aridev-VirtualBox to your local hosts file and point it to the IP of your VirtualBox PC.
That would look something like this:
192.168.1.1 aridev-VirtualBox
I removed /etc/nginx/sites-enabled/default to get rid of that problem.
Try following orkoden's advice of removing the default site from /etc/nginx/sites-enabled/, but also comment out your listen line, since the default implied line there should be sufficient.
Also, whenever you change these configurations, shut down both the gitlab and nginx services and start them again in the order of gitlab first, followed by nginx.
Your configuration file (/etc/nginx/sites-enabled/gitlab) is right, but I think the symlink to your gitlab file may be wrong. For example:
ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/gitlab
Please check afterwards that the content of default matches the content of your /etc/nginx/sites-enabled/gitlab.
I changed this line:
proxy_pass http://gitlab;
to this:
proxy_pass http://localhost:3000;
3000 is the port of my unicorn server.
Moreover, I did a chown root:nginx on the conf file, and it works now.
Old topic, but this may happen when there is a previously installed nginx.
$ gitlab-ctl reconfigure
or a restart will not complain, but the previous nginx instance may actually be running instead of the one bundled with gitlab.
This just happened to me.
Shut down and disable the old nginx instance and run this again:
$ gitlab-ctl reconfigure
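A sketch of what that looks like on a systemd-based host, assuming the stray instance is the distro's nginx package:
sudo systemctl stop nginx
sudo systemctl disable nginx
sudo gitlab-ctl reconfigure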
