I have an internal server that runs my application on port 9001. I want people to access this application through nginx, which runs on an Ubuntu machine in a DMZ network.
I built nginx from source with the sticky and SSL modules. It runs fine, but the proxy pass does not work.
The DNS name for the server's external IP is bd.com.tr, and I want people to see the page http://bd.com.tr/public/control.xhtml when they enter bd.com.tr. But even though nginx redirects the root request to my desired path, the application does not show up.
My nginx.conf file is:
worker_processes 4;
error_log logs/error.log;
worker_rlimit_nofile 20480;
pid logs/nginx.pid;

events {
    worker_connections 1900;
}

http {
    include mime.types;
    default_type application/octet-stream;
    server_tokens off;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    keepalive_timeout 75;
    rewrite_log on;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Ssl on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 150;

    server {
        listen 80;
        client_max_body_size 300M;

        location = / {
            rewrite ^ http://bd.com.tr/public/control.xhtml redirect;
        }

        location /public {
            proxy_pass http://BACKEND_IP:9001;
        }
    }
}
What might I be missing?
It was a silly problem, and I found it. The conf file is correct, so you can use it if you want. The problem was that port 9001 on BACKEND_IP was not forwarded, so nginx was not able to reach the inner service. After forwarding the port, it worked fine. I found the problem in error.log, so if you encounter such a problem, please check the error logs first :)
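Since the fix was found by reading error.log, here is a quick sketch of filtering the log for upstream connection failures. The sample line is fabricated for illustration, but it follows the shape of nginx's real "connect() failed" messages:

```shell
# Illustrative sample of an upstream-failure line as nginx writes it to error.log
# (113 is "No route to host"; the exact wording varies by failure)
line='2016/01/01 10:00:00 [error] 123#0: *1 connect() failed (113: No route to host) while connecting to upstream'

# Count lines that report a failed upstream connection
echo "$line" | grep -c 'connect() failed'
```

In a real setup you would run the grep against logs/error.log instead of a sample variable.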
I'm working on Dockerizing a webpack application which supports hot-module replacement. Since I added an nginx front-end, I'm having trouble getting the hot-module-replacement to connect. Nginx serves the page, but the js bundle can't connect to the webpack-dev-server running in another Docker container.
The two things I think the problem could stem from are a domain-resolution problem (between the Docker containers and nginx) or the request missing the right upgrade / host headers.
The source code for this project is here.
I have two docker containers in this project:
app-webpack - A webpack-dev-server which serves the website
app-nginx - The reverse-proxy
My nginx config files are in docker/nginx.
Ideally, the user goes to localhost, which nginx picks up and proxies to app-webapp:3000. The webpack-dev-server then sends the hot-module-replacement code via the sockjs-node socket endpoint and the page updates locally.
I've confirmed that the app-webpack container can serve an HMR-capable page.
Thanks in advance for the help and let me know if there's additional info I can provide!
It took a little bit of tinkering, but it turns out I was setting the upgrade headers incorrectly. For anyone referencing this in the future, check out the linked repo for the full source code.
Here's my /etc/nginx/nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    # The server settings
    include /etc/nginx/conf.d/*.conf;
}
And the /etc/nginx/conf.d/default.conf:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
}

server {
    server_name localhost;
    listen 80;

    location /sockjs-node/ {
        proxy_pass https://app-webapp:3000/sockjs-node/;
    }

    location / {
        proxy_pass http://app-webapp:3000;
    }
}
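The Upgrade/Connection map in a config like the one above is the piece that makes WebSocket proxying work. It can be read as a small function: any non-empty Upgrade request header becomes Connection: upgrade toward the upstream, and an absent header becomes close. A sketch of that logic in shell (this is an illustration, not something nginx runs):

```shell
# Sketch of what nginx's `map $http_upgrade $proxy_connection` computes
proxy_connection_for() {
  case "$1" in
    '') echo close ;;   # no Upgrade header: do not ask the upstream to switch protocols
    *)  echo upgrade ;; # client sent Upgrade (e.g. websocket): forward the intent
  esac
}

proxy_connection_for websocket  # prints: upgrade
proxy_connection_for ''         # prints: close
```

This is why a plain `proxy_set_header Connection "upgrade";` can break ordinary HTTP requests: the map version only upgrades when the client actually asked for it.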
I made a website in MVC Core and tried to publish it to the web on a CentOS 7 VPS. It runs well; when I curl it, it responds. Then I installed nginx, and it showed the default page when I tried it from my computer. Then I changed nginx.conf to the one below, and all I get is 502 Bad Gateway. In the nginx log I see only that a GET request was received. Any ideas what I should check?
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    # include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;

        location / {
            proxy_pass http://localhost:5000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
I tried Apache and had the same problem. Then I found the solution: you have to set httpd_can_network_connect.
http://sysadminsjourney.com/content/2010/02/01/apache-modproxy-error-13permission-denied-error-rhel/
I didn't find the error message in the audit log that the author was talking about, but I tried his solution and it worked.
I have used CentOS for 4 days now, and it's the second time I have had to set a boolean to solve a problem. These solutions are quite hidden on the web, and most articles dealing with the area don't mention them, so I lost a lot of time. So I share the author's opinion about SELinux. I will probably try another Linux distribution.
What is also interesting is that I followed the official Microsoft tutorial "Set up a hosting environment for ASP.NET Core on Linux with Apache, and deploy to it". The operating system that they use is CentOS too, and it doesn't mention this bit either.
I have a Spring Boot application running on embedded Tomcat on a Vagrant CentOS box. It runs on port 8080, so I can access the application in a web browser.
I need to set up an Nginx proxy server that listens on port 80 and forwards requests to my application.
I'm getting this error in the Nginx log:
[crit] 2370#0: *14 connect() to 10.0.15.21:8080 failed (13: Permission denied) while connecting to upstream, client: 10.0.15.1, server: , request: "GET / HTTP/1.1", upstream: "http://10.0.15.21:8080/", host: "10.0.15.21"
All the setup examples look pretty much the same, and the only answer I could find that might help was this one. However, it doesn't change anything.
Here is my server config, located in /etc/nginx/conf.d/reverseproxy.conf:
server {
    listen 80;

    location / {
        proxy_pass http://10.0.15.21:8080;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And here is my nginx.conf file:
user root;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
}
I don't know if this is related, but under journalctl -u nginx I can see this log:
systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
CentOS has SELinux enabled by default, and it blocks nginx from making outbound network connections. You need to allow them by running
setsebool httpd_can_network_connect on
There is more information about this online if you want to learn more. To make the change persistent across reboots, run
setsebool -P httpd_can_network_connect on
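For reference, the denials that this boolean fixes show up in the audit log as AVC messages. The line below is a fabricated but representative sample of what SELinux writes to /var/log/audit/audit.log when it blocks nginx's upstream connect; the grep shows what to look for:

```shell
# Fabricated sample of an SELinux AVC denial (real field values will differ)
avc='type=AVC msg=audit(1510000000.123:456): avc: denied { name_connect } for pid=2370 comm="nginx" dest=8080 scontext=system_u:system_r:httpd_t:s0 tclass=tcp_socket'

# The { name_connect } permission on a tcp_socket is the giveaway
echo "$avc" | grep -o 'denied { name_connect }'
```

On a real box you would grep /var/log/audit/audit.log (or use ausearch) rather than a sample variable.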
I have Kibana listening on localhost:5601, and if I SSH tunnel to this port I can access Kibana in my browser just fine.
I installed nginx to act as a reverse proxy, but having completed the setup, all I get is 502 Bad Gateway. The more detailed error in the nginx error log is:
*1 upstream prematurely closed connection while reading response header from upstream,
client: 1.2.3.4,
server: elk.mydomain.com,
request: "GET /app/kibana HTTP/1.1",
upstream: "http://localhost:5601/app/kibana"
My nginx config is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.fedora.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    index index.html index.htm;
}
My kibana.conf file within /etc/nginx/conf.d/ is:
server {
    listen 80 default_server;
    server_name elk.mydomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
    }
}
This is a brand-new Amazon Linux EC2 instance with the latest versions of Kibana and nginx installed.
Has anyone encountered this problem before? I feel like it's a simple nginx config problem, but I just can't see it.
It turns out that the backslashes before the dollar signs in proxy_set_header Upgrade \$http_upgrade; were a result of a copy-paste from another configuration-management tool.
I removed the unnecessary backslashes to make proxy_set_header Upgrade $http_upgrade; and reclaimed my sanity.
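A quick way to catch this class of copy-paste damage is to grep the config for escaped dollar signs, which are never valid in a deployed nginx file. A sketch (the file path and contents are illustrative):

```shell
# Write a sample config line containing the tell-tale \$ sequence
printf 'proxy_set_header Upgrade \\$http_upgrade;\n' > /tmp/kibana.conf

# Flag any line where a backslash precedes a dollar sign
grep -n '\\\$' /tmp/kibana.conf
```

A non-empty result means a templating tool's escaping leaked into the final config.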
I have a Java Spring application running on port 8080. The app should return an 'x-auth-token' header, and it runs behind an nginx reverse proxy.
The application correctly produces the header when I request it directly (bypassing nginx):
http://169.54.76.123:8080
It responds with the header in the set of response headers.
But when I make the request through the nginx reverse proxy, the header does not appear:
https://169.54.76.123
nginx handles SSL termination.
My nginx conf file:
upstream loadbalancer {
    server 169.54.76.123:8080;
}

server {
    listen 169.54.76.123:80;
    server_name api.ecom.com;
    return 301 https://api.ecom.com$request_uri;
}

server {
    listen 169.54.76.123:443 ssl;
    keepalive_timeout 70;
    server_name api.ecom.com;

    ssl_certificate /etc/ssl/api_chained.cert;
    ssl_certificate_key /etc/ssl/api.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 SSLv3 SSLv2;
    ssl_ciphers ALL:!ADH:RC4+RSA:HIGH:!aNULL:!MD5;
    charset utf-8;

    location / {
        proxy_pass http://loadbalancer/$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The question is: why does nginx not pass the 'x-auth-token' into the response, and how can I include it?
I tried to get the value into a variable, but it seems that nginx does not have it:
I used $sent_http_x_auth_token and $upstream_http_x_auth_token, but these variables do not seem to contain any values.
I tried adding the header myself using these variables:
add_header x-auth-token $sent_http_x_auth_token; with no success
I also tried:
add_header x-auth-token $upstream_http_x_auth_token; with no success either.
I also tried:
proxy_pass_header x-auth-token;
with no success.
What is the problem? How can I debug it? Which part prevents or blocks the 'x-auth-token' header: the upstream, the proxy, or something else?
Thanks for any help.
Normally you shouldn't have to do anything, because nginx does not remove custom headers from the response.
You could use the log_format directive to track the execution of the request, for instance with something like
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent '
                'X-Auth-Token: "$sent_http_x_auth_token"';
access_log logs/access.log main;
with the access_log line in your location context (log_format itself belongs in the http context).
When you check, do you get a 200 response code that confirms the request succeeded?
The sent_ prefix refers to response headers; to log a custom header as it arrives on the request, use the http_ prefix instead.
That is, prefix the header name with http_, write it all lowercase, and convert dashes (-) to underscores (_). For example, 'Custom-Header' becomes $http_custom_header.
So the correct way to log this is:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_x_auth_token"';
access_log logs/access.log main;
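The header-to-variable naming rule can be sketched as a one-liner; the header name here is the one from the question, and the tr mapping mirrors nginx's lowercase-and-underscore convention:

```shell
# Derive the nginx log variable name from an HTTP header name:
# lowercase it, map '-' to '_', then prefix with http_
hdr='X-Auth-Token'
printf 'http_%s\n' "$(printf '%s' "$hdr" | tr 'A-Z-' 'a-z_')"
# prints: http_x_auth_token
```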
Also, please be aware that if your header contains underscores, you will need to explicitly allow them by adding this to your nginx config:
underscores_in_headers on;
If you want to test your custom header, you can use curl:
curl -v -H "Custom-Header: any value that you want" http://www.yourdomain.com/path/to/endpoint
-H adds the header, and -v shows you both the request and response headers that are sent.