I have a Java Spring application running on port 8080. The app should return an 'x-auth-token' header, and it runs behind an nginx reverse proxy.
The application produces the header correctly. When I request it directly (bypassing nginx):
http://169.54.76.123:8080
it responds with the header among the response headers, but when I make the request through the nginx reverse proxy, the header does not appear:
https://169.54.76.123
nginx handles the SSL termination.
My nginx conf file:
upstream loadbalancer {
    server 169.54.76.123:8080;
}

server {
    listen 169.54.76.123:80;
    server_name api.ecom.com;
    return 301 https://api.ecom.com$request_uri;
}

server {
    listen 169.54.76.123:443 ssl;
    keepalive_timeout 70;
    server_name api.ecom.com;

    ssl_certificate /etc/ssl/api_chained.cert;
    ssl_certificate_key /etc/ssl/api.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 SSLv3 SSLv2;
    ssl_ciphers ALL:!ADH:RC4+RSA:HIGH:!aNULL:!MD5;

    charset utf-8;

    location / {
        proxy_pass http://loadbalancer/$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The question is: why does nginx not pass the 'x-auth-token' header through to the response?
How can I include it in the response?
I tried to capture the value in a variable, but it seems that nginx does not have it:
I used $sent_http_x_auth_token and $upstream_http_x_auth_token, but neither variable appears to contain any value.
I tried adding the header myself using these variables:
add_header x-auth-token $sent_http_x_auth_token;
with no success. I also tried:
add_header x-auth-token $upstream_http_x_auth_token;
with no success either. I also tried:
proxy_pass_header x-auth-token;
with no success.
What is the problem? How can I debug it?
Which part drops or blocks the 'x-auth-token' header — the upstream, the proxy, or something else?
Thanks for any help
Normally you shouldn't have to do anything, because nginx does not remove custom headers from the response.
You could use the log_format directive to track the execution of the request, for instance with something like
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent '
                'X-Auth-Token: "$sent_http_x_auth_token"';
access_log logs/access.log main;
with the access_log directive in your location context (log_format itself must sit in the http context).
When you check, do you get a 200 response code that confirms the request succeeded?
Note the distinction between the prefixes: $sent_http_* refers to headers on the response sent to the client, while $http_* refers to headers on the incoming request.
The correct way to log a custom header on the incoming request is to prefix it with http_, write the header name in lowercase, and convert dashes (-) to underscores (_). For example, 'Custom-Header' becomes $http_custom_header.
So the correct way to log this is:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_x_auth_token"';
access_log logs/access.log main;
Also, please be aware that if your header name contains underscores, you will need to explicitly allow them by adding this to your nginx config:
underscores_in_headers on;
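As a sketch of where that directive sits (the server layout here is illustrative, not taken from the question's config):

```nginx
http {
    # Without this, nginx silently drops request headers whose names
    # contain underscores (e.g. "x_auth_token") before processing them.
    underscores_in_headers on;

    server {
        listen 80;
        # ... the rest of the server block ...
    }
}
```

The directive is valid in the http and server contexts.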
If you want to test your custom header, you can use curl:
curl -v -H "Custom-Header: any value that you want" http://www.yourdomain.com/path/to/endpoint
The -H flag adds the header, and -v shows you both the request and the response headers.
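Separately, one detail worth double-checking in the question's config is the proxy_pass line. $request_uri already begins with a slash, so http://loadbalancer/$request_uri produces a doubled //, and using a variable in proxy_pass changes how nginx constructs the upstream URI. A simpler sketch that forwards the original URI untouched (and rules out URI mangling as a factor):

```nginx
location / {
    # With no URI part after the upstream name, nginx passes the
    # client's original request URI through to the upstream unchanged.
    proxy_pass http://loadbalancer;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}
```

This is not necessarily the cause of the missing header, but it removes one variable from the debugging.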
I'm trying to set up end-to-end HTTP/2 connections on an Amazon Elastic Beanstalk application. I'm using Node.js and Fastify with HTTP/2 support (it works great on my local machine). By default, the nginx reverse proxy that EB creates on the EC2 instance where the code gets deployed uses HTTP/1.1, so I need to change that.
I have read here how to do it (see the reverse proxy configuration section). The problem is that if you look at the nginx.conf file:
#Elastic Beanstalk Nginx Configuration File
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 66982;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;
        access_log /var/log/nginx/access.log main;

        client_header_timeout 60;
        client_body_timeout 60;
        keepalive_timeout 60;
        gzip off;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/*.conf;
    }
}
In the last line include conf.d/elasticbeanstalk/*.conf;, a file 00_application.conf gets included. That file contains the following:
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;

    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
And there you can see the proxy_http_version parameter I need to change to 2.0.
Any idea how I can achieve that? I can add configuration files to conf.d or replace the entire nginx.conf file, but I don't really know how to change that value from there.
Create a file with the extension .config inside the .ebextensions folder, for example 01-mynginx.config.
Inside the config file, use the files key to create files on the instance and the container_commands key to run system commands after the application and web server have been set up:
files:
  "/etc/nginx/conf.d/01-mynginx.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      keepalive_timeout 120s;
      proxy_connect_timeout 120s;
      proxy_send_timeout 120s;
      proxy_read_timeout 120s;
      fastcgi_send_timeout 120s;
      fastcgi_read_timeout 120s;

container_commands:
  nginx_reload:
    command: "sudo service nginx reload"
Elastic Beanstalk then automatically creates the 01-mynginx.conf file inside the /etc/nginx/conf.d folder, and it is included in the main Elastic Beanstalk nginx configuration.
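One caveat on the original goal: nginx's proxy module only speaks HTTP/1.0 and 1.1 to upstreams, so "proxy_http_version 2.0" is not a valid value; HTTP/2 in nginx is enabled on the client-facing side via the listen directive. A sketch of what is actually configurable (this server block is illustrative, not the EB-generated one):

```nginx
server {
    # HTTP/2 applies only to the client-facing connection.
    listen 443 ssl http2;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;  # 1.0 and 1.1 are the only accepted values
    }
}
```

End-to-end HTTP/2 through an nginx proxy_pass hop is therefore not achievable with this module alone.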
I'm working on Dockerizing a webpack application which supports hot-module replacement. Since I added an nginx front end, I'm having trouble getting the hot-module replacement to connect. nginx serves the page, but the JS bundle can't connect to the webpack-dev-server running in another Docker container.
The two things I think the problem could stem from are a domain-resolution problem (between the Docker containers and nginx) or the request missing the right Upgrade/Host headers.
The source code for this project is here.
I have two docker containers in this project:
app-webpack - A webpack-dev-server which serves the website
app-nginx - The reverse-proxy
My nginx config files are in docker/nginx.
Ideally, the user goes to localhost, which nginx picks up and proxies to app-webapp:3000. The webpack-dev-server then sends the hot-module-replacement code via the sockjs-node socket address, and the page updates locally.
I've confirmed that the app-webpack container can serve an HMR-capable page.
Thanks in advance for the help and let me know if there's additional info I can provide!
It took a little bit of tinkering but it turns out I was upgrading the headers incorrectly. For anyone referencing this in the future, check out the linked repo for the full source code.
Here's my /etc/nginx/nginx.conf:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # The server settings
    include /etc/nginx/conf.d/*.conf;
}
And the /etc/nginx/conf.d/default.conf:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
}

server {
    server_name localhost;
    listen 80;

    location /sockjs-node/ {
        proxy_pass https://app-webapp:3000/sockjs-node/;
    }

    location / {
        proxy_pass http://app-webapp:3000;
    }
}
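Distilled, the part that makes the HMR socket work is the Upgrade/Connection pair driven by the map; a minimal sketch, with the same container name assumed from the config above:

```nginx
# Map the client's Upgrade header to an appropriate Connection value:
# "upgrade" during a WebSocket handshake, "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;

    location /sockjs-node/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://app-webapp:3000;
    }
}
```

Without the HTTP/1.1 upstream version and both headers, the sockjs handshake cannot complete through the proxy.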
I have Kibana listening on localhost:5601, and if I SSH-tunnel to this port I can access Kibana in my browser just fine.
I have installed nginx to act as a reverse proxy, but having completed the setup all I get is 502 Bad Gateway. The more detailed error in the nginx error log is:
*1 upstream prematurely closed connection while reading response header from upstream,
client: 1.2.3.4,
server: elk.mydomain.com,
request: "GET /app/kibana HTTP/1.1",
upstream: "http://localhost:5601/app/kibana"
My nginx config is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.fedora.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    index index.html index.htm;
}
My kibana.conf file within /etc/nginx/conf.d/ is:
server {
    listen 80 default_server;
    server_name elk.mydomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
    }
}
This is a brand-new Amazon Linux EC2 instance with the latest versions of Kibana and nginx installed.
Has anyone encountered this problem before? I feel like it's a simple nginx config problem, but I just can't see it.
It turns out that the backslashes before the dollar signs in proxy_set_header Upgrade \$http_upgrade; were left over from a copy-paste from another configuration-management tool.
I removed the unnecessary backslashes to make proxy_set_header Upgrade $http_upgrade; and reclaimed my sanity.
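For reference, the corrected location block, with the stray backslashes removed, reads:

```nginx
location / {
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```

With the literal \$ strings, nginx was sending the text "\$http_upgrade" instead of the variable's value, which is what broke the upstream connection.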
I have an inner server that runs my application on port 9001. I want people to access this application through nginx, which runs on an Ubuntu machine in a DMZ network.
I have built nginx from source with the sticky and SSL modules. It runs fine but does not do the proxy pass.
The DNS name for the outer IP of the server is bd.com.tr, and I want people to see the page http://bd.com.tr/public/control.xhtml when they enter bd.com.tr. But even though nginx redirects the root request to my desired path, the application does not show up.
My nginx.conf file is:
worker_processes 4;
error_log logs/error.log;
worker_rlimit_nofile 20480;
pid logs/nginx.pid;

events {
    worker_connections 1900;
}

http {
    include mime.types;
    default_type application/octet-stream;
    server_tokens off;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    keepalive_timeout 75;
    rewrite_log on;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Ssl on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 150;

    server {
        listen 80;
        client_max_body_size 300M;

        location = / {
            rewrite ^ http://bd.com.tr/public/control.xhtml redirect;
        }

        location /public {
            proxy_pass http://BACKEND_IP:9001;
        }
    }
}
What might I be missing?
It was a silly problem, and I found it. The conf file is correct, so you can use it if you want. The problem was that port 9001 of the BACKEND_IP was not forwarded, so nginx was not able to reach the inner service. After forwarding the port, it worked fine. I found the problem in error.log, so if you encounter such a problem, please check the error logs first :)
I have 3 computers on the same network (LAN), and I want to configure one computer as an nginx web server, another as a Varnish cache server, and one as a client. I successfully installed nginx on one (let's say A, 192.168.0.15) and Varnish on B (192.168.0.20). I configured A as a web server, and I can browse index.html from the other computers, but I couldn't connect it with B.
I messed around with nginx.conf, /sites-available/server.com and Varnish's default.vcl.
Could you give me the basic configurations which suit my environment?
In case you want to take a look, my nginx.conf:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    upstream dynamic_node {
        server 1.1.1.1:80; # 1.1.1.1 is the IP of the Dynamic Node
    }

    server {
        listen 81;
        server_name myserver.myhome.com;

        location / {
            #root /var/www/server.com/public_html;
            #index index.html index.htm;

            # pass the request on to Varnish
            proxy_pass http://192.168.0.20;

            # Pass a bunch of headers to the downstream server.
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }
    }
}
/sites-available/server.com :
server {
    listen 80;
    server_name myserver.myhome.com;
    access_log /var/www/server.com/access.log;
    error_log /var/www/server.com/error.log;
}
And default.vcl like this :
backend web1 {
    .host = "192.168.0.15";
    .port = "8080";
}

sub vcl_recv {
    if (req.http.host == "192.168.0.15") {
        #set req.http.host = "myserver.myhome.com";
        set req.backend = web1;
    }
}
Lastly /etc/default/varnish :
DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
Thanks in advance :)
Right now, your Varnish instance is listening on port 6081. This needs to be specified in the proxy_pass for nginx, e.g.
proxy_pass http://192.168.0.20:6081;
I am assuming that the IP addresses you mentioned are correct and that the network connection between the computers is not restricted.
Update
Please bear in mind that you can use nginx in front of Varnish or the other way around; both nginx and Varnish can serve as proxies to back-end services.
Your current implementation uses nginx as the proxy. This means you can rely on proxy_pass, or use the upstream module in nginx (in case you wish to load-balance across multiple Varnish instances behind a single nginx in front). Essentially, whichever component is the proxy, the IP address and port number it specifies for its backend (nginx in your case) must match the IP address and port number the backend service actually listens on (Varnish in your case). The backend in Varnish, in turn, must match the IP address and port number of whichever application server/service you are using (Tomcat/Netty/Django/RoR etc.).
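Concretely, each hop's target must line up with the next hop's listen address. Note that in the posted files the VCL backend points at 192.168.0.15:8080, while the /sites-available/server.com vhost listens on 80. A consistent sketch of the chain (ports assumed from the posted files, so adjust to taste):

```nginx
# Front nginx on 192.168.0.15: forwards to Varnish's -a port (6081).
server {
    listen 81;
    location / {
        proxy_pass http://192.168.0.20:6081;
        proxy_set_header Host $host;
    }
}

# Backend vhost that Varnish's VCL targets (.host "192.168.0.15", .port "8080").
server {
    listen 8080;
    root /var/www/server.com/public_html;
    index index.html index.htm;
}
```

With this layout, the request flows nginx:81 -> varnish:6081 -> nginx:8080, and every target matches a listener.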