Configure nginx as a webserver with Puma for an application without a domain

I have a cloud server and my application is deployed on it. I currently do not have a domain, but I would like to use nginx as a webserver for my application, so I configured the following in nginx for this application.
Since I do not have a domain, example.com is just used as a placeholder/proxy_pass URL, but it doesn't seem to be working.
/etc/nginx/sites-enabled/example.com
upstream puma_example {
server unix:///home/deploy/sites/example.com/shared/tmp/sockets/example.sock;
}
server {
listen 80 ;
server_name example.com;
gzip on;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
gzip_min_length 1100;
gzip_buffers 64 8k;
gzip_comp_level 3;
gzip_proxied any;
gzip_types text/css text/xml application/x-javascript application/atom+xml text/mathml text/plain text/vnd.sun.j2me.app-descriptor text/vnd.wap.wml text/x-component;
root /home/deploy/sites/example.com/current/public;
access_log /home/deploy/sites/example.com/current/log/nginx.access.log;
error_log /home/deploy/sites/example.com/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma_example;
location @puma_example {
proxy_set_header X-Forwarded-Proto http;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma_example;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 25M;
keepalive_timeout 10;
}
When I run my Rails application I can see that it is running, but I can't seem to reach it via nginx. When I go to http://example.com, it doesn't load my application.
What am I missing? How do I get my application to be served locally via nginx with no domain name on my cloud server?
Any help is appreciated.

You can use your IP address; here is a simplified example.
upstream 127.0.0.1 {
server unix:///var/run/yourapp.sock;
}
server {
listen 80;
server_name 127.0.0.1;
root /var/www/yourapp/current/public;
location / {
proxy_pass http://127.0.0.1/;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
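Applied to the original Puma setup, a minimal sketch could look like the following. It is an untested adaptation, not the exact accepted configuration: the public IP 203.0.113.10 is a placeholder for your server's address, the socket path is taken from the question, and try_files is moved into a location block so it can hand off to the named @puma_example location.
upstream puma_example {
    server unix:///home/deploy/sites/example.com/shared/tmp/sockets/example.sock;
}
server {
    listen 80 default_server;
    # no domain yet, so answer on the server's IP (placeholder) and as the default vhost
    server_name 203.0.113.10;
    root /home/deploy/sites/example.com/current/public;
    location / {
        # serve static files if present, otherwise pass the request to Puma
        try_files $uri/index.html $uri @puma_example;
    }
    location @puma_example {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma_example;
    }
}
Hitting http://<server-ip>/ should then reach Puma through the Unix socket.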

Related

Custom Nginx error page doesn't work in production

I am trying to customize the Nginx error page.
1/ I created a test HTML page (named 400.html).
2/ I created a custom-error-page.conf file in the /etc/nginx/snippets/ directory:
error_page 400 /400.html;
location = /400.html {
root /var/www/apps/site_name/error_pages;
internal;
}
3/ I included custom-error-page.conf in the Nginx configuration (sites-available):
upstream site_name {
ip_hash;
server 0.0.0.0:8000;
}
server {
if ($host = site_name.domain_name.fr) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name site_name.domain_name.fr;
server_tokens off;
include /etc/nginx/domain_name_conf.d/acme_challenge.conf;
set $error_pages_root /var/www/apps/site_name/error_pages;
client_max_body_size 512M;
include /etc/nginx/domain_name_conf.d/503.conf;
location ~ /.+ {
return 301 https://$server_name$request_uri;
}
}
server {
include /etc/nginx/snippets/custom-error-page.conf;
include /etc/nginx/domain_name_conf.d/ssl_generic.conf;
server_name site_name.domain_name.fr;
root /var/www/apps/sources/site_name;
set $error_pages_root /var/www/apps/site_name/error_pages;
ssl_client_certificate /etc/nginx/private/ca.pem;
ssl_verify_client optional;
if ($ssl_client_verify != SUCCESS) {
return 400;
}
ssl_certificate /etc/letsencrypt/live/site_name.domain_name.fr/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/site_name.domain_name.fr/privkey.pem;
access_log /var/log/nginx/site_name.domain_name.fr/access.log;
error_log /var/log/nginx/site_name.domain_name.fr/error.log;
client_max_body_size 512M;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml image/png;
include /etc/nginx/domain_name_conf.d/acme_challenge.conf;
include /etc/nginx/domain_name_conf.d/503.conf;
location ~ ^/(static|media)/ {
return 301 https://site_name.fr$request_uri;
}
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl $scheme;
proxy_redirect off;
proxy_set_header X-Scheme $scheme;
if (!-f $request_filename){
proxy_pass http://site_name;
break;
}
}
}
4/ nginx -t returns OK.
5/ I restarted the Nginx service: systemctl restart nginx.service
The 400 page is still the default Nginx 400 page.
I found the solution to my problem.
I edited custom-error-page.conf:
error_page 400 @bad_request;
location @bad_request {
root /var/www/apps/site_name/error_pages;
rewrite ^(.*)$ /400.html break;
}
My include of custom-error-page.conf was placed in the wrong place.
I have now included it in all the server blocks, after the directives that return the 400 error.
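For illustration, a trimmed sketch of that placement, assuming the same paths as above: the snippet is included inside the TLS server block, after the client-certificate check that issues the return 400.
server {
    server_name site_name.domain_name.fr;
    include /etc/nginx/domain_name_conf.d/ssl_generic.conf;
    ssl_client_certificate /etc/nginx/private/ca.pem;
    ssl_verify_client optional;
    if ($ssl_client_verify != SUCCESS) {
        return 400;
    }
    # custom 400 page, included after the directives that can trigger it
    include /etc/nginx/snippets/custom-error-page.conf;
    location / {
        proxy_pass http://site_name;
    }
}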

nginx: [emerg] host not found in upstream "source.blog.demo.com" in /etc/nginx/conf.d/blog.demo.com.conf:14

This is my nginx configuration:
upstream itw_upstream {
server 127.0.0.1:2019 fail_timeout=3s;
}
proxy_cache_path /var/cache/nginx/mpword levels=2:2 keys_zone=itw_cache:10m inactive=300d max_size=1g;
proxy_temp_path /var/cache/nginx/tmp;
#
# the reverse proxy server as www
#
server {
listen 80;
server_name blog.demo.com;
root /opt/mpword/none;
access_log /opt/mpword/log/www_access.log;
error_log /opt/mpword/log/www_error.log;
client_max_body_size 2m;
gzip on;
gzip_min_length 1024;
gzip_buffers 4 8k;
gzip_types text/css application/x-javascript application/json;
sendfile on;
location = /favicon.ico {
proxy_pass http://source.blog.demo.com; # this is line 14
}
location = /robots.txt {
proxy_pass http://source.blog.demo.com;
}
location ~ /static/ {
rewrite ^(.*) http://static.blog.demo.com$1 permanent;
}
location ~ /files/ {
rewrite ^(.*) http://static.blog.demo.com$1 permanent;
}
location / {
proxy_pass http://itw_upstream;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
#
# the source server that serves static files and uploaded files
#
server {
listen 80;
server_name source.blog.demo.com;
root /opt/mpword/../src/main/resources;
access_log /opt/mpword/log/source_access.log;
error_log /opt/mpword/log/source_error.log;
client_max_body_size 1m;
gzip on;
gzip_min_length 1024;
gzip_buffers 4 8k;
gzip_types text/css application/x-javascript application/json;
sendfile on;
location ~ /static/ {
}
location ~ /files/ {
proxy_pass http://itw_upstream;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache itw_cache;
proxy_cache_key $uri;
proxy_cache_valid 1d;
expires 1d;
}
}
#
# the simulated CDN server
#
server {
listen 80;
server_name static.blog.demo.com;
access_log /opt/mpword/log/static_access.log;
error_log /opt/mpword/log/static_error.log;
client_max_body_size 1m;
gzip on;
gzip_min_length 1024;
gzip_buffers 4 8k;
gzip_types text/css application/x-javascript application/json;
sendfile on;
location ~ /static/ {
add_header "Access-Control-Allow-Origin" "http://blog.demo.com";
add_header "Access-Control-Allow-Methods" "GET, POST";
proxy_pass http://source.blog.demo.com;
proxy_read_timeout 3s;
}
location ~ /files/ {
add_header "Access-Control-Allow-Origin" "http://blog.demo.com";
add_header "Access-Control-Allow-Methods" "GET, POST";
proxy_pass http://source.blog.demo.com;
proxy_read_timeout 3s;
}
}
But when I run nginx -t I get this error:
nginx: [emerg] host not found in upstream "source.blog.demo.com" in /etc/nginx/conf.d/blog.demo.com.conf:14
nginx: configuration file /etc/nginx/nginx.conf test failed
I can't find the issue. How do I solve it?
This is your config:
server {
server_name blog.demo.com;
location = /favicon.ico {
proxy_pass http://source.blog.demo.com; # here you ask for an upstream server
}
}
server {
server_name source.blog.demo.com; # here you spin up the upstream server
}
I didn't test it, but I think the problem is that nginx looks up the hosts given in proxy_pass when it loads the configuration, and source.blog.demo.com cannot be found at the point where the proxy_pass directive in the blog.demo.com server block is loaded.
So 1.) reorder the server blocks so that the second server (source.blog.demo.com) comes first and the third (static.blog.demo.com) second, with blog.demo.com defined last.
Or 2.) use this workaround, which keeps nginx from resolving the proxy_pass host at startup:
server {
resolver 8.8.8.8 valid=30s; # or any other reachable DNS, e.g. 127.0.0.11 for docker
location = /favicon.ico {
set $upstream_source source.blog.demo.com;
proxy_pass http://$upstream_source/favicon.ico;
}
}
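Note that nginx -t stops at the first [emerg], so once line 14 is fixed it will likely report the same error for the next literal hostname (robots.txt, and the two locations in the static.blog.demo.com server). The same variable workaround would then be needed there as well; a sketch for one of them, under the same resolver assumption:
location = /robots.txt {
    set $upstream_source source.blog.demo.com;
    proxy_pass http://$upstream_source/robots.txt;
}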

NGINX shows "bad gateway" when upstream server restart and not back to normal

Every time I restart the upstream server, NGINX shows "bad gateway", which is OK, but when the upstream server comes back up nginx does not recover automatically and I need to restart it (nginx) manually.
Is there an option to make nginx check every few seconds whether the upstream is back to normal?
upstream core {
server core:3001;
}
server {
server_name core.mydomain.com corestg.mydomain.com www.core.mydomain.com;
#listen 80;
#listen [::]:80;
gzip on;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
gzip_proxied any;
#gzip_vary on;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
listen 443 ssl http2;
listen [::]:443 ssl http2;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
server_tokens off;
ssl_certificate /etc/ssl/domain.crt;
ssl_certificate_key /etc/ssl/domain.rsa;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
location / {
proxy_ssl_session_reuse off;
proxy_pass http://core;
proxy_buffers 8 24k;
proxy_buffer_size 2k;
proxy_http_version 1.1;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_ignore_headers Set-Cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-NginX-Proxy true;
# proxy_set_header Host $http_host;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
}
It seems that NGINX does not do this auto-recovery by default.
Changing the config part from:
upstream core {
server core:3001;
}
to:
upstream core {
server core:3001 max_fails=1 fail_timeout=1s;
server core:3001 max_fails=1 fail_timeout=1s;
}
did the trick. The duplication is not a mistake: nginx tries to resolve the first entry and, on failure, tries the second one (round-robin).
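Since core:3001 looks like a Docker service name, another approach often used in Docker setups (a sketch, not part of the original fix) is to make nginx re-resolve the name at request time by putting it in a variable and pointing resolver at Docker's embedded DNS:
server {
    # Docker's embedded DNS; the name is re-resolved when the container comes back
    resolver 127.0.0.11 valid=10s;
    location / {
        set $core_backend http://core:3001;
        proxy_pass $core_backend;
    }
}
This replaces the upstream core block entirely; if an upstream with the same name is kept, nginx matches the group name instead of re-resolving.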
My setup to test NGINX:
Docker-Container simulating the backend exposing port 9002.
afd9551abc54 nginx "/docker-entrypoint.…" About a minute ago Up 11 seconds 0.0.0.0:9002->80/tcp laughing_pike
NGINX configuration
# Defined upstream block.
upstream backend {
server 127.0.0.1:9002;
}
#Main Server block
server {
listen 80;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
}
}
Stopping the container will result in 502 Bad Gateway. Starting the container without restarting / reloading NGINX sends the data to the upstream server. So basically that should just work!

PWA (Angular) with Nginx reverse proxy inside a Docker

I have an Nginx server configured to serve requests using a proxy pass to a Docker container. It works fine, but the PWA is not working.
Inside the nginx Docker container I have an nginx.conf like this to serve my Angular app:
user nginx;
worker_processes 4;
events {
worker_connections 1024;
}
http {
server {
listen 0.0.0.0:8080;
listen [::]:8080;
gzip on;
gzip_comp_level 6;
gzip_vary on;
gzip_min_length 1000;
gzip_proxied any;
gzip_types text/plain text/css application/json application/x-javascript text/xml text/javascript;
gzip_buffers 16 8k;
client_max_body_size 256M;
add_header X-Content-Type-Options nosniff;
default_type application/octet-stream;
include /etc/nginx/mime.types;
location / {
index index.html;
root /usr/share/nginx/html/;
try_files $uri$args $uri$args/ /index.html;
}
}
}
I have a main server that proxies browser requests to the above Docker container running my Angular application.
It has a server block like this:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /etc/nginx/ssl/certs/my-site.com.ca-bundle.crt;
ssl_certificate_key /etc/nginx/ssl/private/my-site.com.key;
server_name my-site.com;
root /usr/share/nginx/html;
gzip on;
gzip_comp_level 6;
gzip_vary on;
gzip_min_length 1000;
gzip_proxied any;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_buffers 16 8k;
client_max_body_size 256M;
#serves my api
location ~ ^/(api|server-assets)/ {
proxy_pass http://localhost:9090;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Accept-Encoding "gzip";
gzip on;
gzip_proxied auth;
# kill cache
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
}
# Serves my angular ****** This might be an issue as I am proxy passing to a non https item
location / {
gzip on;
gzip_proxied any;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Accept-Encoding "gzip";
proxy_pass http://localhost:8080;
proxy_buffer_size 128k;
proxy_buffers 32 32k;
proxy_busy_buffers_size 128k;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
I have tried everything. The service worker is running in my browser but refuses to cache the static files to browser storage so the PWA can work offline.
My best guess is that my main nginx is SSL-enabled and my proxy pass is not. How can I forward my SSL to the Docker proxy pass for the UI container (proxy_pass http://localhost:8080;)?
Any help is appreciated.
You need to forward SockJS as well. Try adding this to your server block:
location ^~ /sockjs-node/ {
proxy_pass http://localhost:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
}
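A couple of additional notes, stated as assumptions about the setup rather than confirmed fixes: /sockjs-node/ only exists when the app is served by the Angular dev server (ng serve); a production build served by the container's nginx has no SockJS endpoint. For a production Angular PWA it can also help to make sure the service worker script itself is never cached by the proxy, for example:
# hypothetical addition for an Angular PWA build: always revalidate the service worker
location = /ngsw-worker.js {
    proxy_pass http://localhost:8080;
    add_header Cache-Control "no-cache";
}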

Nginx - How to run multiple instances of Odoo with different subdomain names

I'd like to run two instances of Odoo v10 on different URLs.
The 1st instance will include multiple databases for our testing purposes, running at mydomain.com.
The second instance will hold demo databases for our clients, to demonstrate Odoo to them at clients.mydomain.com.
Both instances should be running on the same server.
I did a lot of research to figure out how to achieve this, but I didn't find any guide that helps me do it with an Nginx reverse proxy.
Here's my Nginx configuration file:
upstream backend-odoo {
server 127.0.0.1:8069;
}
upstream backend-odoo-im {
server 127.0.0.1:8072;
}
server {
listen 80;
add_header Strict-Transport-Security max-age=2592000;
rewrite ^/.*$ https://example.com$request_uri? permanent;
}
server {
listen 443 default;
# ssl settings
ssl on;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
keepalive_timeout 60;
#increase the upload file size limit
client_max_body_size 300M;
# proxy header and settings
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
# odoo log files
access_log /var/log/nginx/odoo-access.log;
error_log /var/log/nginx/odoo-error.log;
# increase proxy buffer size
proxy_buffers 16 64k;
proxy_buffer_size 128k;
# force timeouts if the backend dies
proxy_next_upstream error timeout invalid_header http_500
http_502 http_503;
# enable data compression
gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/plain application/x-javascript text/xml text/css;
gzip_vary on;
location / {
proxy_pass http://backend-odoo;
}
location ~* /web/static/ {
# cache static data
proxy_cache_valid 200 60m;
proxy_buffering on;
expires 864000;
proxy_pass http://backend-odoo;
}
location /longpolling {
proxy_pass http://backend-odoo-im;
}
}
PS: I tried to set dbfilter = ^%d$ in the Odoo configuration file, but I get nothing.
Try dbfilter = %h$; that works better for me.
You have to rename your databases so that they match the URLs:
yourdomain.com gets yourdomain_com as the DB name.
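For illustration, the relevant odoo.conf lines under that approach might look like this (a sketch; the database names are examples):
[options]
; the answer's filter: match databases against the requested host name
dbfilter = %h$
; a request for clients.mydomain.com then matches a database named clients_mydomain_com,
; because the dots in the substituted host act as regex wildcards
A second nginx server block with server_name clients.mydomain.com can then proxy to the same backend-odoo upstream, and the filter decides which databases each subdomain sees.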
