I have an application that consists of a back end and a front end. Because of restrictions imposed by the hosting provider, I need to serve the back end from a different server than the front end.
My back end handles authentication and serves content to the front end. It also sends emails to users via nodemailer. Because I am not allowed to open outgoing TCP sockets on the server where the front end is hosted, that feature failed, which is what made me relocate the back end.
The back end now runs on a different server: a LoopBack instance listening on a certain port, with nginx proxying requests to it.
After some setup I had the configuration working. It first failed because of a duplicated CORS header: LoopBack added Access-Control-Allow-Origin: *, which I also had in my nginx config. That resulted in Firefox throwing an error like "CORS header does not match Origin (*, *)", which made me think the two headers were stacking and thus negating the wildcard *.
So I removed the add_header directive from my nginx configuration. It worked fine when I tested it, but when I came back, Firefox threw the error Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://{{my_nice_api}}/lang. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 404. This baffled me because I hadn't changed the setup at all.
I have fiddled with it further but am not able to find the error. I have add_header Access-Control-Allow-Origin *; set (for testing purposes, obviously), but I keep getting the error that no such header is present. This post had me thinking that I needed to add another header, Access-Control-Allow-Credentials: true, but to no avail. Can anybody give me a pointer as to what I am missing?
My nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    #ssl_certificate /etc/nginx/certs/cert.pem;
    #ssl_certificate_key /etc/nginx/certs/key.pem;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
My site.conf (mounted into sites-available):
# Virtual Host configuration for {{my_nice_api}}
#
server {
    listen 80;
    listen [::]:80;
    server_name {{my_nice_api}};

    return 301 https://{{my_nice_api}}$uri;

    #location / {
    #    rewrite ^ https://{{my_nice_api}}$request_uri permanent;
    #}
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/nginx/certs/cert.pem;
    ssl_certificate_key /etc/nginx/certs/key.pem;

    #server_name {{my_nice_api}};

    location / {
        proxy_pass http://localhost:3001/api/;
        #proxy_pass_request_headers on;
        #proxy_http_version 1.1;
        #proxy_cache_bypass $http_upgrade;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-Forwarded-Proto $scheme;
        #proxy_set_header X-Forwarded-Host $host;
        #proxy_set_header X-Forwarded-Port $server_port;
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Credentials true;
        #add_header X-Frame-Options SAMEORIGIN;
    }
}
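For what it's worth, a plausible reading of the symptoms, with a sketch of one way out (illustrative, not a confirmed fix): nginx's plain add_header only attaches headers to success and redirect responses (200, 201, 204, 206, 301, 302, 303, 304, 307, 308), so the 404 in the Firefox error arrives without any CORS header unless the always parameter (nginx >= 1.7.5) is used. And when both LoopBack and nginx emit Access-Control-Allow-Origin, the browser sees the stacked value (*, *) and rejects it. Letting nginx be the only layer that sets the header could look like this:

location / {
    proxy_pass http://localhost:3001/api/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    # Hide LoopBack's copy so only one header reaches the browser.
    proxy_hide_header Access-Control-Allow-Origin;

    # 'always' attaches the header to 4xx/5xx responses as well,
    # which covers the 404 from the error message above.
    add_header Access-Control-Allow-Origin * always;
}

Note also that browsers reject Access-Control-Allow-Credentials: true in combination with a wildcard origin, so once credentials are involved the allowed origin has to be echoed explicitly rather than set to *.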
Related
I made a webpage using R (Shiny) and deployed it on Shiny Server, and I tried to use NGINX to achieve something like multi-threading. I found in some posts that NGINX can also help achieve concurrency, but I don't know how to do it. Could you please help me with that?
In case I misunderstand the definition of concurrency: my desired result is that when different users access the webpage and use some function at the same time, they don't have to wait in a queue; the server handles those requests at the same time.
Below is the configuration:
user www-data;
worker_processes 4;
worker_rlimit_nofile 20960;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    use epoll;
    worker_connections 1024;
    accept_mutex on;
    accept_mutex_delay 500ms;
    multi_accept on;
}

http {
    underscores_in_headers on;
    aio threads;

    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    upstream shiny-server {
        ip_hash;
        server 127.0.0.1:3838;
    }

    map $http_app_version $app1_url {
        "1.0" http://35.78.39.174:3838;
    }

    server {
        aio threads;
        listen 80;
        listen [::]:80;
        server_name 35.78.39.174:3838;
        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            if ($http_user_agent !~* "MicroMessenger") {
                set $app1_url http://35.78.39.174:3838;
            }
            aio threads;
            proxy_pass http://localhost:3838;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real_IP $remote_addr;
            proxy_set_header User-Agent $http_user_agent;
            proxy_set_header Accept-Encoding '';
            proxy_buffering off;
        }

        location ^~ /mathjax/ {
            alias /usr/share/mathjax2/;
        }
    }

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.*;
    server_names_hash_bucket_size 128;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
}
I have also edited the shiny-server configuration:
# Instruct Shiny Server to run applications as the user "shiny"
run_as shiny;
sanitize_errors false;
preserve_logs true;

# Define a server that listens on port 3838
server {
    listen 3838;

    # Define a location at the base URL
    location / {
        # Host the directory of Shiny Apps stored in this directory
        site_dir /home/rstudio/;

        # Log all Shiny output to files in this directory
        log_dir /var/log/shiny-server/port_3838;

        # When a user visits the base URL rather than a particular application,
        # an index of the applications available in this directory will be shown.
        directory_index on;

        app_init_timeout 1800;
        app_idle_timeout 1800;
    }
}
Really appreciate your help. Thanks a lot. Could you please explain how to set up the configuration to achieve that?
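For context, Shiny Server (open source) runs each application in a single R process, so nginx settings such as worker_processes or aio threads cannot by themselves make one app serve users concurrently. One common pattern, sketched here under the assumption that several shiny-server instances are started on different ports (3838-3840 are illustrative), is to load-balance across them while keeping ip_hash so each user's websocket stays pinned to one instance:

upstream shiny_cluster {
    ip_hash;                  # keep each user's websocket on one instance
    server 127.0.0.1:3838;
    server 127.0.0.1:3839;    # additional shiny-server instances
    server 127.0.0.1:3840;
}

server {
    listen 80;

    location / {
        proxy_pass http://shiny_cluster;
        # Shiny communicates over websockets, so the connection
        # upgrade headers must be forwarded.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}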
I'm trying to deploy Tomcat and Nginx on a single AWS EC2 instance. I have 3 instances, and on each one I want to deploy both Nginx and Tomcat. Below is my configuration file:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/application.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;

    root /var/lib/tomcat9/webapps/ROOT;
    index deploy.html;

    location /admin {
        try_files $uri $uri/ /deploy.html;
    }

    location /admin/admin-portal {
        alias /opt/tomcat/webapps/ROOT/;
        rewrite /admin-portal/(.*) /$1 break;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080/;
    }

    location ~ \.css {
        add_header Content-Type text/css;
    }

    location ~ \.js {
        add_header Content-Type application/x-javascript;
    }
}
My goal is that when I hit http://IP/ or http://IP/admin, it should serve deploy.html, and when I hit http://IP/admin/admin-portal, it should open the Tomcat server.
NOTE: Both conditions work, except that when I hit http://IP/admin/admin-portal, only the HTML page opens; the CSS/PNG/JS files all return 404 Not Found.
/opt/tomcat/webapps/ROOT/ is the path that holds all of Tomcat's static files (CSS/JS/PNG etc.).
Can anyone help me with this?
Try hitting the complete URL of your EC2 instance:
<instanceip>:8080/admin/admin-portal/
Also, you can add a trailing "/" to the location:
location /admin/admin-portal/
and then try hitting the URL with
<instance-ip>:8080/admin/admin-portal
so you no longer need to add the "/" at the end.
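Another angle worth checking, offered as a sketch rather than a confirmed fix: in nginx, regex locations such as location ~ \.css take precedence over ordinary prefix locations, so a request like /admin/admin-portal/style.css is captured by the \.css block, which only adds a header and therefore serves from the inherited root /var/lib/tomcat9/webapps/ROOT, where the file doesn't exist, hence the 404. If the page's asset URLs live under /admin/admin-portal/, marking the prefix with ^~ keeps the regex locations from intercepting them:

# ^~ makes nginx skip regex matching for URIs under this prefix,
# so asset requests get proxied to Tomcat just like the HTML page.
location ^~ /admin/admin-portal/ {
    rewrite /admin-portal/(.*) /$1 break;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://localhost:8080/;
}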
So I was updating my nginx configuration to pass some security checks and get an A+ grade from Qualys SSL Labs.
I do get the A+, but there is one warning I don't understand. This is what I get (screenshot of the warning omitted).
I do have an HTTP-to-HTTPS redirect, though, and it seems to be working fine. Does anyone know what could be the cause of this warning?
I found this question: HTTP forwarding PLAINTEXT warning, but it talks about Apache.
I also found https://github.com/ssllabs/ssllabs-scan/issues/154, where it was mentioned that it could just be a bug, but that isn't clear (the issue is old).
My nginx configuration:
http {
    upstream odoo-upstream {
        server odoo:8069 weight=1 fail_timeout=0;
    }

    upstream odoo-im-upstream {
        server odoo:8072 weight=1 fail_timeout=0;
    }

    # Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Enable SSL session caching for improved performance
    # http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
    ssl_session_cache shared:ssl_session_cache:5m;
    ssl_session_timeout 24h; # time for which sessions can be re-used

    # Because the proper rotation of the session ticket encryption key is
    # not yet implemented in Nginx, we should turn this off for now.
    ssl_session_tickets off;

    # Default size is 16k; reducing it can slightly improve performance.
    ssl_buffer_size 8k;

    # Gzip Settings
    gzip on;

    # http redirects to https
    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }

    charset utf-8;

    server {
        # server port and name
        listen 443 ssl http2;

        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Frame-Options sameorigin;
        add_header X-Content-Type-Options nosniff;
        add_header X-Xss-Protection "1; mode=block";

        # Specifies the maximum accepted body size of a client request,
        # as indicated by the request header Content-Length.
        client_max_body_size 200m;

        # add ssl specific settings
        keepalive_timeout 60;
        ssl_certificate /etc/ssl/nginx/domain.bundle.crt;
        ssl_certificate_key /etc/ssl/nginx/domain.key;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.4.4 8.8.8.8;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        # increase proxy buffer to handle some Odoo web requests
        proxy_buffers 16 64k;
        proxy_buffer_size 128k;

        # general proxy settings
        # force timeouts if the backend dies
        proxy_connect_timeout 600s;
        proxy_send_timeout 600s;
        proxy_read_timeout 600s;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

        # set headers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;

        # Let the Odoo web service know that we're using HTTPS; otherwise
        # it will generate URLs using http:// and not https://
        proxy_set_header X-Forwarded-Proto $scheme;

        # by default, do not forward anything
        proxy_redirect off;
        proxy_buffering off;

        location / {
            proxy_pass http://odoo-upstream;
        }

        location /longpolling {
            proxy_pass http://odoo-im-upstream;
        }

        # cache some static data in memory for 60 mins;
        # under heavy load this should relieve stress on the Odoo web interface a bit
        location /web/static/ {
            proxy_cache_valid 200 60m;
            proxy_buffering on;
            expires 864000;
            proxy_pass http://odoo-upstream;
        }

        include /etc/nginx/custom_error_page.conf;
    }

    include /etc/nginx/conf.d/*.conf;
}

events {
    worker_connections 1024;
}
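One incidental observation about the hardening itself, not necessarily the cause of the warning: of the security headers above, only Strict-Transport-Security carries the always parameter. Without it, nginx omits add_header headers from 4xx/5xx responses, so if the other headers should survive error pages too, they would each need it as well (a sketch, not a confirmed requirement of the scan):

add_header X-Frame-Options sameorigin always;
add_header X-Content-Type-Options nosniff always;
add_header X-Xss-Protection "1; mode=block" always;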
I have an upstream block in an Nginx config file. This block lists multiple backend servers across which requests are load-balanced:
...
upstream backend {
    server backend1.com;
    server backend2.com;
    server backend3.com;
}
...
Each of the above 3 backend servers is running a Node application.
If I stop the application process on backend1, Nginx recognises this via its passive health checks, and traffic is directed only to backend2 and backend3, as expected.
However, if I power down the server on which backend1 is hosted, Nginx does not recognise that it is offline and keeps trying to send requests to it, which results in a 504 error.
Can someone shed some light on why this (scenario 2 above) may happen and if there is some further configuration needed that I am missing?
Update:
I'm beginning to wonder if the behaviour I'm seeing is because the above upstream block is located within an http {} Nginx context. If backend1 was indeed powered down, this would be a connection error, so (maybe off the mark here, just thinking aloud) should this be a TCP health check instead?
Update 2:
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    upstream backends {
        server xx.xx.xx.37:3000 fail_timeout=2s;
        server xx.xx.xx.52:3000 fail_timeout=2s;
        server xx.xx.xx.69:3000 fail_timeout=2s;
    }

    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_certificate …
    ssl_certificate_key …
    ssl_ciphers …;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
default
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
    #server_name ...;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    # SSL configuration
    ...

    # Add index.php to the list if you are using PHP
    index index.html index.htm;
    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
        #try_files $uri $uri/ =404;
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://backends;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    # Requests for socket.io are passed on to Node on port 3000
    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://backends;
    }
}
The reason you get a 504 is that when the backend1 host is powered down, connection attempts to it simply go unanswered: nginx has to wait for its connect timeout to expire before giving up, hence the 504 Gateway Timeout.
It's a different case when you only stop the application process. The host is still up but the port is no longer listening, so nginx gets an immediate connection refused, which is detected almost instantly and the instance is marked as unavailable.
You can also tune fail_timeout, which controls how long a failed server is marked unavailable (the default is 10 seconds):
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#fail_timeout
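Building on that, here is a sketch of a snappier failover (the 2s values are illustrative, not recommendations). The long stall comes from nginx's connect timeout, which defaults to 60 seconds, so lowering proxy_connect_timeout and letting proxy_next_upstream retry the remaining servers bounds how long a powered-down host can delay a request:

upstream backends {
    # after max_fails failed attempts within the fail_timeout window,
    # the server is skipped for fail_timeout seconds
    server xx.xx.xx.37:3000 max_fails=1 fail_timeout=2s;
    server xx.xx.xx.52:3000 max_fails=1 fail_timeout=2s;
    server xx.xx.xx.69:3000 max_fails=1 fail_timeout=2s;
}

location /api {
    proxy_pass http://backends;
    # don't wait the default 60s for a dead host's TCP handshake
    proxy_connect_timeout 2s;
    # on a connection error or timeout, try the next upstream server
    proxy_next_upstream error timeout;
}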
I set up nginx and Unicorn on Ubuntu 14.04 to serve my Rails app, but when I access my domain, Chrome responds with 'connection refused'.
I don't know why. How can I resolve this problem?
Here are my nginx.conf and Unicorn files.
【nginx.conf】
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    #include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
【myapp-unicorn】(/etc/nginx/site-enabled/myapp-unicorn)
upstream myapp.com {
    # my rails app
    server unix:/var/www/rails/myapp/tmp/sockets/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name myapp.com;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 35M;
        proxy_pass http://myapp.com;
    }
}
【unicorn.rb】(/var/www/rails/myapp/config/unicorn.rb)
worker_processes 2
listen File.expand_path('tmp/sockets/unicorn.sock', ENV['RAILS_ROOT'])
stderr_path File.expand_path('log/unicorn.log', ENV['RAILS_ROOT'])
stdout_path File.expand_path('log/unicorn.log', ENV['RAILS_ROOT'])
preload_app true

before_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!

  old_pid = "#{server.config[:pid]}.oldbin"
  unless old_pid == server.pid
    begin
      Process.kill :QUIT, File.read(old_pid).to_i
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end
Thank you for your patience with my poor English.
Update:
This is unicorn.log (/var/www/rails/myapp/log/unicorn.log):
I, [2015-09-05T18:17:20.590239 #10832] INFO -- : Refreshing Gem list
I, [2015-09-05T18:17:22.099133 #10832] INFO -- : unlinking existing socket=/var/www/rails/myapp/tmp/sockets/unicorn.sock
I, [2015-09-05T18:17:22.099389 #10832] INFO -- : listening on addr=/var/www/rails/myapp/tmp/sockets/unicorn.sock fd=11
I, [2015-09-05T18:17:22.115503 #10832] INFO -- : master process ready
I, [2015-09-05T18:17:22.118878 #10836] INFO -- : worker=0 ready
I, [2015-09-05T18:17:22.127008 #10839] INFO -- : worker=1 ready
This is the nginx log (/var/log/nginx/error.log):
The nginx log is empty...
If your error is
connect() failed (111: Connection refused) while connecting to upstream
try making Unicorn listen on
listen '/tmp/sockets/unicorn.sock'
instead of
listen File.expand_path('tmp/sockets/unicorn.sock', ENV['RAILS_ROOT'])
because it sometimes happens that nginx can't read the socket file due to permissions; it is safer to keep the socket file inside a tmp directory. Also point your nginx upstream to
server unix:/tmp/sockets/unicorn.sock fail_timeout=0;
instead of
server unix:/var/www/rails/myapp/tmp/sockets/unicorn.sock fail_timeout=0;
Please reply if you still get the error.
Happy deploying ;)
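Pulling the two suggestions together, the nginx side would look roughly like this (a sketch: the upstream name myapp_unicorn is arbitrary, and /tmp/sockets must exist and be readable by the account nginx's workers run as):

upstream myapp_unicorn {
    # socket moved under /tmp per the suggestion above
    server unix:/tmp/sockets/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name myapp.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://myapp_unicorn;
    }
}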