Setting up a reverse proxy for Organizr (nginx) on Docker

I have done a lot of searching and trial and error trying to set up a reverse proxy for Organizr, but it still doesn't work. I need help from you experts.
My setup (all on Docker):
nginx: a Docker instance set up to serve my multiple domains running WordPress.
Organizr: a separate Docker instance for my media. Its config files are in a separate folder, and I can access it locally without any issues. I intend to put it behind one of my domains as a subdomain, e.g. organizr.domain.com.
With the config below I always get upstream timed out (110: Connection timed out). I have followed many tutorials and may have mixed up the config by now. I would appreciate it if someone could give clear instructions to set this up.
The conf file in nginx's config folder:
# Organizr Subdomain
upstream organizr_backend {
    server 192.168.X.X:9999;
}

server {
    listen 443 ssl http2;
    server_name organizr.domain.co.uk;

    ssl_certificate     /etc/letsencrypt/live/domain.co.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.co.uk/privkey.pem;
    ssl_prefer_server_ciphers on;

    error_log  /var/log/nginx/organizr.domain.co.uk.error.log error;
    access_log /var/log/nginx/organizr.domain.co.uk.access.log main;

    location / {
        proxy_pass http://organizr_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I have searched Google for many combinations, but I am unable to get the desired result.
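The upstream timed out (110) error means nginx gave up waiting for 192.168.X.X:9999, which often happens when the proxy container cannot reach the host's LAN IP at all (for example a bridge network with no route to it, or a firewall dropping the traffic). A minimal sketch of an alternative, assuming both containers can be attached to a shared user-defined Docker network (the network name, container names, and internal port below are assumptions), is to proxy to the Organizr container directly by name:

# create a shared network and attach both containers (names are hypothetical)
docker network create proxy-net
docker network connect proxy-net nginx
docker network connect proxy-net organizr

upstream organizr_backend {
    # Docker's embedded DNS resolves the container name on the shared network;
    # use Organizr's internal container port here (80 assumed), not the host-published 9999
    server organizr:80;
}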

Related

prestashop under docker with reverse proxy URL subfolder problem

I need some help with my configuration.
I followed the example already listed here => Deploy existing Prestashop to server using Docker
in order to build a PrestaShop site using Docker. The problem is that on my server I have a reverse proxy configured like this:
server {
    listen 80;
    listen 443 ssl http2;
    server_name example1.test;

    # Path for SSL config/key/certificate
    ssl_certificate     /etc/ssl/certs/nginx/example1.test/example1.crt;
    ssl_certificate_key /etc/ssl/certs/nginx/example1.test/example1.key;
    include /etc/nginx/includes/ssl.conf;

    location /shop {
        include /etc/nginx/includes/proxy.conf;
        proxy_pass http://x.x.x.x:9001;
    }

    access_log off;
    error_log /var/log/nginx/error.log error;
}
Options "proxy.conf" that i'm including are :
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
proxy_http_version 1.1;
proxy_intercept_errors on;
To put it simply: when I access "example1.test/shop", PrestaShop redirects me to the root, because it doesn't know about the /shop path, so I get a 404 error because the reverse proxy doesn't know about / either.
example1.test/shop in the browser => redirects to example1.test/, which is not defined in the reverse proxy.
I have tried everything I could find on the internet to configure PrestaShop so that it recognizes /shop and keeps its redirects under /shop/..., but nothing works.
I think configuring PrestaShop for this is the wrong approach, as it is installed at the root (/) of the Docker container. I probably need to change something in my reverse proxy instead, like rewriting the responses from the PrestaShop container to /shop, or something like that.
Any ideas, please?
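One approach that often helps with this kind of subfolder setup, sketched here under the assumption that PrestaShop stays at / inside its container, is to strip the /shop prefix on the way in and rewrite the Location headers on the way back out:

location /shop/ {
    include /etc/nginx/includes/proxy.conf;
    # the trailing slashes make nginx replace /shop/ with / before passing the request upstream
    proxy_pass http://x.x.x.x:9001/;
    # rewrite redirects issued by the backend so they stay under /shop/
    proxy_redirect http://x.x.x.x:9001/ /shop/;
    proxy_redirect / /shop/;
}

Note that proxy_redirect only rewrites Location headers; absolute links generated inside PrestaShop's own HTML are not touched, so the shop's base URL may still need to be configured with /shop for assets and links to resolve.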

Nginx Bad Gateway 502 when accessing istio-envoy deployed on kubernetes

My web application is running on one server and two worker nodes.
My nginx config file is:
server {
    listen ip-address:80;
    server_name subdomain.domain.com;
    server_name www.subdomain.domain.com;
    server_name ipv4.subdomain.domain.com;

    location / {
        proxy_pass http://ip-address:32038/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        fastcgi_read_timeout 3000;
    }
}

server {
    listen ip-address:443 ssl http2;
    server_name subdomain.domain.com;
    server_name www.subdomain.domain.com;
    server_name ipv4.subdomain.domain.com;

    ssl_certificate /opt/psa/var/certificates/scf83NyxP;
    ssl_certificate_key /opt/psa/var/certificates/scf83NyxP;
    ssl_client_certificate /opt/psa/var/certificates/scfrr8L8y;

    proxy_read_timeout 60;

    location / {
        proxy_pass https://ip-address:30588/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
My website on http://subdomain.mydomain.com runs fine, but when I use https://subdomain.mydomain.com it displays a Bad Gateway error page served by nginx.
When I run the following commands over SSH, everything works fine.
For HTTP:
curl -v -HHost:subdomain.mydomain.com http://ip-address:32038
curl -v subdomain.mydomain.com
For HTTPS:
curl -v -HHost:subdomain.mydomain.com https://subdomain.mydomain.com:30588
From the server node over SSH:
curl -v -HHost:subdomain.mydomain.com --resolve subdomain.mydomain.com:30588:ip-address --cacert /opt/psa/var/certificates/scf83NyxP https://subdomain.mydomain.com:30588
Any help will be really appreciated.
Thanks
Without knowing anything about the backend service, I would guess that it is perhaps not equipped for HTTPS. You may simply need to change this line...
proxy_pass https://ip-address:30588/;
to...
proxy_pass http://ip-address:30588/;
If the backend service does in fact need to be called over HTTPS (unusual), then we would need to see how that service is configured, as the nginx error suggests that it is not correctly processing the SSL connection.
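A quick way to check what the backend on that port actually speaks, sketched here assuming you can run curl from the nginx host (the address and port are taken from the config above):

# if the plain-HTTP request gets an answer and the HTTPS one fails during the TLS handshake,
# the backend is not terminating TLS on that port and nginx should proxy_pass with http://
curl -v http://ip-address:30588/
curl -vk https://ip-address:30588/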
A 502 Bad Gateway in nginx commonly occurs when nginx runs as a reverse proxy and is unable to connect to backend services. This can be due to service crashes, network errors, configuration issues, and more. How do we pinpoint the issue? We need to look at what is returning an invalid response to nginx.
Assuming nginx errored because of configuration issues:
I have run into a 502 Bad Gateway simply because I had inconsistencies with whitespace in my config file.
This is probably just a result of copy/pasting your config into the question, but there are spacing inconsistencies that could trigger a parsing failure for the file.
In my case, the 502 Bad Gateway error was solved by deleting a space that I had accidentally added in front of a line in the config file.
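A way to catch that kind of parsing problem before it takes the proxy down, assuming a standard nginx install, is to validate the configuration and only reload if the test passes:

sudo nginx -t && sudo nginx -s reload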

Nginx Sub domain setup

I'm trying to set up Nginx so I can have subdomains like
www.MySite.com - Main website (Works correctly)
jenkins.MySite.com - sub domain for Jenkins
gitlab.MySite.com - sub domain for Gitlab
I've tried following various tutorials and I seem to have included everything required to make this work, but still to no avail.
I've followed this: https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-with-ssl-as-a-reverse-proxy-for-jenkins
and various other sources online.
[Nginx Server Block]
I've edited my nginx.conf file, I've created a new nginx/sites-available conf file for Jenkins and symlinked it to sites-enabled.
This is my default Jenkins JENKINS_ARGS:
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --httpListenAddress=127.0.0.1 --httpPort=$HTTP_PORT -ajp13Port=$AJP_PORT"
This is an example of my Jenkins server block in nginx:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name jenkins.MySite.com;

    #ssl_certificate /etc/nginx/cert.crt;
    #ssl_certificate_key /etc/nginx/cert.key;
    #ssl on;
    #ssl_session_cache builtin:1000 shared:SSL:10m;
    #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    #ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/jenkins/access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://127.0.0.1:8080;
        proxy_read_timeout 90;
        proxy_redirect http://127.0.0.1:8080 https://jenkins.MySite.com;
    }
}
I've also created an A record in DigitalOcean's Networking section, and also a CNAME.
Much help would be appreciated.
Thanks
All three of these setups need separate nginx config files and supervisor files, as you did for the main site. Make soft links of those files and put them in the respective /etc/nginx/sites-available and sites-enabled directories, and also soft link the supervisor files into /etc/supervisor/conf.d.
To check whether the nginx file is properly configured, you need to test it:
sudo nginx -t
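As a concrete sketch of that enable-and-test flow (the file names below are assumptions; use whatever you named your server block files):

# enable the jenkins and gitlab server blocks, then test and reload
sudo ln -s /etc/nginx/sites-available/jenkins.MySite.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/gitlab.MySite.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx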

WSO2 APIM: Subdomains for different contexts

We have WSO2 API Manager 1.10.0 deployed and working, but we are trying to figure out whether it is possible to have multiple subdomains for it.
For example:
store.domain.com
publisher.domain.com
carbon.domain.com
Is this possible at all? We've seen this https://docs.wso2.com/display/Carbon442/Adding+a+Custom+Proxy+Path, but that is for different applications; we want to do this only with the API Manager.
In front of the API Manager we are using nginx as a reverse proxy. Below you can find a snippet from nginx to help in understanding the problem.
server {
    listen 80;
    server_name store.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    ssl on;
    ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH"; #:AES128+EDH";
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security "max-age=63072000";

    server_name store.domain.com;
    ssl_certificate /etc/nginx/ssl/domain.com/self-ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/domain.com/self-ssl.key;

    access_log /var/log/nginx/store.log;
    underscores_in_headers on;

    location / {
        proxy_pass http://wso2server:9443/store/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
When accessing with HTTP (for the store context) everything works fine, but as soon as we switch over to HTTPS it fails with the following error in nginx: upstream prematurely closed connection while reading response header from upstream. However, we see nothing in the API Manager logs.
Thanks in advance!
Best Regards
You can solve your issue with one of the following methods.
Add proxy_redirect configs to nginx, so that nginx rewrites all the URLs to the proper URL. Please refer to the following config segment:
proxy_redirect http://wso2server/ http://store.domain.com/;
You can also achieve the same by adding reverse proxy configuration in the API Manager store. To do this, open "repository/deployment/server/jaggeryapps/store/site/conf/site.json" and look at the following config section:
"reverseProxy" : {
"enabled" : false, // values true , false , "auto" - will look for X-Forwarded-* headers
"host" : "sample.proxydomain.com", // If reverse proxy do not have a domain name use IP
"context":"",
//"regContext":"" // Use only if different path is used for registry
},
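For the store subdomain in the question, a filled-in version might look like this (the hostname comes from the question and is an assumption about your environment):

"reverseProxy" : {
    "enabled" : true,
    "host" : "store.domain.com",
    "context" : ""
},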

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with the following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we own the domain name, and this is done by validating that a file is accessible from the server name (basically this consists of Nginx being able to serve a static file over port 80).
So here is my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. But the SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host;                           # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr;                    # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django/Gunicorn server also has its configuration file in the same folder, django.conf:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under three conditions:
the secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
the registry service is started
the django service is started
The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django use different server names, so nginx is able to serve both of them.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start the docker registry service
I start Nginx with only registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make nginx start while ignoring a failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if a Pod is not started yet, nginx will find the Service when looking it up, as long as the Service exists, because the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pods you want, in the order you want.
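As a minimal sketch of that ordering, assuming the registry and django workloads already exist as replication controllers (the names and ports are taken from the question and are assumptions about the cluster):

# create stable Services first; nginx can resolve these names even while the Pods behind them are still starting
kubectl expose rc registry --port=5000 --name=registry
kubectl expose rc django --port=5000 --name=django
kubectl get services    # each Service gets a ClusterIP as soon as it is created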
