I have 3 computers on the same network (LAN). I want to configure one as an Nginx web server, one as a Varnish cache server, and one as a client. I successfully installed Nginx on one machine (let's call it A, 192.168.0.15) and Varnish on another (B, 192.168.0.20). I configured A as a web server and I can browse its index.html from the other computers, but I couldn't connect it with B.
I have been messing around with "nginx.conf", "/sites-available/server.com" and Varnish's "default.vcl".
Could you give me basic configurations that suit my environment?
If you want to take a look, here are my files.
My nginx.conf:
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    upstream dynamic_node {
        server 1.1.1.1:80; # 1.1.1.1 is the IP of the Dynamic Node
    }

    server {
        listen      81;
        server_name myserver.myhome.com;

        location / {
            #root  /var/www/server.com/public_html;
            #index index.html index.htm;

            # pass the request on to Varnish
            proxy_pass http://192.168.0.20;

            # Pass a bunch of headers to the downstream server.
            proxy_set_header Host            $host;
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect   off;
        }
    }
}
/sites-available/server.com:
server {
    listen      80;
    server_name myserver.myhome.com;

    access_log /var/www/server.com/access.log;
    error_log  /var/www/server.com/error.log;
}
And default.vcl looks like this:
backend web1 {
    .host = "192.168.0.15";
    .port = "8080";
}

sub vcl_recv {
    if (req.http.host == "192.168.0.15") {
        #set req.http.host = "myserver.myhome.com";
        set req.backend = web1;
    }
}
Lastly, /etc/default/varnish:
DAEMON_OPTS="-a :6081 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
Thanks in advance :)
Right now, your Varnish instance is listening on port 6081. This needs to be specified in the proxy_pass directive in nginx, e.g.
proxy_pass http://192.168.0.20:6081;
I am assuming that the IP addresses you mentioned are correct and that the network connection between the computers is not restricted.
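Putting that together, the location block from your posted nginx.conf would end up looking roughly like this (a sketch based on your own config, assuming Varnish keeps listening on 6081):
location / {
    # forward requests to Varnish on its actual listen port (-a :6081)
    proxy_pass http://192.168.0.20:6081;
    proxy_set_header Host            $host;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect   off;
}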
Update
Please bear in mind that you can use nginx in front of Varnish or the other way around; both nginx and Varnish can serve as proxies to back-end services.
Your current implementation uses nginx as the proxy. This means you can rely on proxy_pass, or use the upstream module in nginx (in case you wish to load balance across multiple Varnish instances behind a single nginx in front). Essentially, whichever piece is the proxy, the IP address and port of the backend specified in the proxy (nginx in your case) must match the IP address and port the backend service (Varnish in your case) is actually listening on. The backend defined in Varnish, in turn, must match the IP address and port of whichever application server/service you are using (Tomcat/Netty/Django/RoR etc.).
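To make that concrete for your setup, the pieces might line up like this (a sketch only, assuming machine A is meant to both receive client traffic on port 81 and serve the static files on port 8080, which is the port your VCL backend already points at):
# On A (192.168.0.15), nginx: front entry point plus static backend
server {
    listen 81;                               # clients hit this
    location / {
        proxy_pass http://192.168.0.20:6081; # forward to Varnish's listen port
    }
}
server {
    listen 8080;                             # Varnish fetches content from here
    root  /var/www/server.com/public_html;
    index index.html index.htm;
}

# On B (192.168.0.20), default.vcl: backend must point at the port nginx serves content on
backend web1 {
    .host = "192.168.0.15";
    .port = "8080";
}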
Related
I am trying to install the GeoIP module for nginx through my Dockerfile by adding the following to it:
RUN apk add --no-cache libmaxminddb nginx-mod-http-geoip
RUN cd /var/lib; \
    mkdir -p nginx; \
    wget -q -O- https://dl.miyuru.lk/geoip/maxmind/country/maxmind.dat.gz | gunzip -c > nginx/maxmind-country.dat; \
    wget -q -O- https://dl.miyuru.lk/geoip/maxmind/city/maxmind.dat.gz | gunzip -c > nginx/maxmind-city.dat; \
    chown -R nginx. nginx
COPY nginx.conf /etc/nginx/nginx.conf
The nginx.conf is the following:
load_module "modules/ngx_http_geoip_module.so";
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

# See the link below for creating NGINX Plus and NGINX configuration files
# https://docs.nginx.com/nginx/admin-guide/basic-functionality/managing-configuration-files/
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format kv 'site="$server_name" server="$host" dest_port="$server_port" dest_ip="$server_addr" '
                  'src="$remote_addr" src_ip="$realip_remote_addr" user="$remote_user" '
                  'time_local="$time_local" protocol="$server_protocol" status="$status" '
                  'bytes_out="$bytes_sent" bytes_in="$upstream_bytes_received" '
                  'http_referer="$http_referer" http_user_agent="$http_user_agent" '
                  'nginx_version="$nginx_version" http_x_forwarded_for="$http_x_forwarded_for" '
                  'http_x_header="$http_x_header" uri_query="$query_string" uri_path="$uri" '
                  'http_method="$request_method" response_time="$upstream_response_time" '
                  'cookie="$http_cookie" request_time="$request_time" category="$sent_http_content_type" https="$https" '
                  'geoip_country_name="$geoip_country_name"';

    access_log /var/log/nginx/access.log kv;

    sendfile on;
    keepalive_timeout 65;

    geoip_country /var/lib/nginx/maxmind-country.dat;
    geoip_city    /var/lib/nginx/maxmind-city.dat;

    include /etc/nginx/conf.d/*.conf;

    # The identifier "backend" is internal to nginx, and used to name this specific upstream
    upstream backend {
        # dashboard is the internal DNS name used by the backend Service inside Kubernetes
        server localhost:5005;
    }

    server {
        listen 80;

        root  /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api/ {
            resolver 127.0.0.11; # nginx will not crash if host is not found
            # The following statement will proxy traffic to the upstream
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
However, when I inspect the logs I am getting:
geoip_country_name = "-"
Any idea what is going wrong here? Could it be that I am running this locally?
The "-" is what the logfile uses when the value is empty. GeoIP uses the $remote_addr to calculate the source of the request.
172.17.0.1 is not a public IP address; it is an internal address of one of your proxy servers. Check the $http_x_forwarded_for header value for the real remote address (assuming your reverse proxy servers are configured correctly).
The GeoIP module provides the geoip_proxy directive to ignore $remote_addr and use $http_x_forwarded_for instead.
For example (added to your other geoip_ directives):
geoip_proxy 172.17.0.1;
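In context with your existing geoip_ directives, that could look roughly like this (a sketch; 172.17.0.1 is assumed here to be the Docker bridge address your container sees as $remote_addr):
geoip_country /var/lib/nginx/maxmind-country.dat;
geoip_city    /var/lib/nginx/maxmind-city.dat;
geoip_proxy   172.17.0.1;     # treat the Docker bridge as a trusted proxy
geoip_proxy_recursive on;     # look through X-Forwarded-For for the real client address
Note that when you test from the Docker host itself, X-Forwarded-For will typically still contain a private address, which has no country in the MaxMind database and so still logs as "-".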
We were experiencing a similar problem.
It essentially came back to the points made by @RichardSmith; however, in our case the following configuration resolved the problem:
geoip_proxy 0.0.0.0/0;
We are running a Node.js HTTP server on port 8090 on Amazon EC2 instance 1.
We are running NGINX on port 80 on Amazon EC2 instance 2.
In NGINX we have configured an upstream for our Node.js server.
Now we are unable to get socket.io/socket.io.js from my machine using the IP of the EC2 machine where NGINX is running.
Yes, I have configured the inbound/outbound policy for port 8090.
NGINX IP: 51.122.71.253 (sample IP)
EC2 instance 1 IP: 5x.18x.8x.24x
The Problem:
I am unable to get http://51.122.71.253/socket.io/socket.io.js from my local machine via NGINX.
But I can access the file directly from EC2 instance 1: http://5x.18x.8x.24x:8090/socket.io/socket.io.js
The same setup with the same nginx.conf is working on our local LAN.
Is there any special trick with EC2 for port 80?
Configuration File
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream node_server {
        ip_hash;
        server 5x.18x.8x.24x:8090;
    }

    server {
        listen 80;

        location / {
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host            $http_host;
            proxy_set_header X-NginX-Proxy   true;
            proxy_redirect off;
            proxy_read_timeout 3000;

            proxy_http_version 1.1;
            proxy_set_header Upgrade    $http_upgrade;
            proxy_set_header Connection "upgrade";

            proxy_pass http://node_server;

            client_body_in_file_only clean;
            client_body_buffer_size  32K;
            client_max_body_size     400M;
            sendfile on;
            send_timeout 300s;
        }
    }
}
What we are trying to achieve:
On our client-side network all ports except 80 are blocked, so we are using NGINX as a proxy to redirect the socket connection from port 80 to our port 8090.
(Client Machine) -> (NGINX PROXY) -> (EC2 INSTANCE NODEJS SERVER RUNNING @ 8090)
I have solved the problem. You have to change the port from 80 to any other port in /etc/nginx/conf.d/default.conf.
I changed the port to 81:
server {
    listen 81;
    server_name localhost;
Then restart NGINX and it will work.
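For the restart step, something along these lines should do (a sketch, assuming a systemd-based install; use the service/init script otherwise):
sudo nginx -t                    # check the edited config first
sudo systemctl restart nginx     # or: sudo service nginx restart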
I have a Java Spring application running on port 8080. The app should return an 'x-auth-token' header, and it runs behind an nginx reverse proxy.
The application correctly produces the header when I make a request directly to it (bypassing nginx):
http://169.54.76.123:8080
It responds with the header included in the set of response headers.
But when I make the request through the nginx reverse proxy, the header does not appear:
https://169.54.76.123
nginx handles SSL termination.
My nginx conf file:
upstream loadbalancer {
    server 169.54.76.123:8080;
}

server {
    listen 169.54.76.123:80;
    server_name api.ecom.com;
    return 301 https://api.ecom.com$request_uri;
}

server {
    listen 169.54.76.123:443 ssl;
    keepalive_timeout 70;
    server_name api.ecom.com;

    ssl_certificate     /etc/ssl/api_chained.cert;
    ssl_certificate_key /etc/ssl/api.key;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2 SSLv3 SSLv2;
    ssl_ciphers         ALL:!ADH:RC4+RSA:HIGH:!aNULL:!MD5;

    charset utf-8;

    location / {
        proxy_pass http://loadbalancer/$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The question is: why does nginx not pass the 'x-auth-token' header to the response?
How do I include it in the response?
I tried to get the value in a variable, but it seems that nginx does not have it:
I used $sent_http_x_auth_token and $upstream_http_x_auth_token, but these variables do not contain any values (I think).
I tried adding the header myself using these variables:
add_header x-auth-token $sent_http_x_auth_token; with no success
also tried:
add_header x-auth-token $upstream_http_x_auth_token; with no success either.
also, I tried:
proxy_pass_header x-auth-token;
with no success
What is the problem? How can I debug it?
Which part prevents or blocks the 'x-auth-token' header? The upstream, the proxy, or something else?
Thanks for any help
Normally you should not have to do anything, because nginx does not remove custom headers from the response.
You could use the log_format directive to track the execution of the request, for instance with something like:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent '
'X-Auth-Token: "$sent_http_x_auth_token"';
access_log logs/access.log main;
in your location context (access_log can be set per location, while the log_format definition itself belongs in the http context).
When you check, do you get a 200 response code that confirms the request succeeded?
The sent_ prefix should not be there.
The correct way to log custom headers is to prefix them with http_, write the custom header all lowercase, and convert dashes (-) to underscores (_). For example, 'Custom-Header' would become $http_custom_header.
So the correct way to log this is:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_x_auth_token"';
access_log logs/access.log main;
Also, please be aware that if your header contains underscores, you will need to explicitly allow this by adding the following to your nginx config:
underscores_in_headers on;
If you want to test your custom header, you can use curl:
curl -v -H "Custom-Header: any value that you want" http://www.yourdomain.com/path/to/endpoint
The -H flag adds the header and -v shows you both the request and the response headers.
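Since the question is about a response header, it may also help to dump only the response headers coming back through the proxy and compare them with a direct request to the app. A sketch using the hosts from the question (/some/endpoint is a placeholder path; -k skips certificate checks because the request goes to the bare IP):
curl -skD - -o /dev/null https://169.54.76.123/some/endpoint        # via nginx
curl -sD - -o /dev/null http://169.54.76.123:8080/some/endpoint     # bypassing nginx
If x-auth-token shows up in the second output but not the first, the proxy (or an intermediate) is dropping it; if it is missing from both, the application never sent it for that request.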
There are 3 ingredients to this issue:
Docker container: I have a Docker container that is deployed on an EC2 instance. More specifically, I have the rocker/shiny image, which I have run using:
sudo docker run -d -v /home/ubuntu/projects/shiny_example:/srv/shiny-server -p 3838:3838 rocker/shiny
Shiny server: The standard Shiny Server configuration file is untouched; it serves everything in the /srv/shiny-server folder on port 3838, and the contents of my local ~/projects/shiny_example are mapped to the container's /srv/shiny-server/.
In my local ~/projects/shiny_example, I have cloned a random Shiny app:
git clone https://github.com/rstudio/shiny_example
nginx: I have set up nginx as a reverse proxy and here are the contents of the /etc/nginx/nginx.conf in its entirety.
The issue is that with this setup, when I try to retrieve http://<ip-address>/shiny/shiny_example, I get a 404. The main clue I have as to what might be wrong is that when I do a:
wget http://localhost:3838/shiny_example
from the command line on my EC2 instance, I get:
--2016-06-13 11:05:08-- http://localhost:3838/shiny_example
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:3838... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: /shiny_example/ [following]
--2016-06-13 11:05:08-- http://localhost:3838/shiny_example/
Reusing existing connection to localhost:3838.
HTTP request sent, awaiting response... 200 OK
Length: 3136 (3.1K) [text/html]
Saving to: ‘shiny_example.3’
100%[==============================================================================================================================>] 3,136 --.-K/s in 0.04s
2016-06-13 11:05:09 (79.6 KB/s) - ‘shiny_example.3’ saved [3136/3136]
where the 301 Moved Permanently redirect is the part I want to highlight.
I think that my nginx configuration does not account for the fact that when requesting a Docker mapped port, there is a 301 redirect. I think that the solution involves proxy_next_upstream, but I would appreciate some help in trying to set this up in my context.
I also think that this question can be shorn of the Docker context, but it would be nice to understand how to prevent a 301 redirect when requesting a resource from Shiny server that is in a Docker container, and whether this behavior is expected.
I can't be sure without more output, but suspect your error is in your proxy_redirect line:
location /shiny/ {
    rewrite ^/shiny/(.*)$ /$1 break;
    proxy_pass http://localhost:3838;
    proxy_redirect http://localhost:3838/ $scheme://$host/shiny_example;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 20d;
}
Try changing that to:
location /shiny/ {
    rewrite ^/shiny/(.*)$ /$1 break;
    proxy_pass http://localhost:3838;
    proxy_redirect http://localhost:3838/ $scheme://$host/shiny/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 20d;
}
The reason is that when the 301 response comes back from "http://localhost:3838" to add the trailing slash, its Location header gets rewritten to "http://localhost/shiny_example", which doesn't exist in your nginx config, and it may also drop a slash from the path. This means the 301 from "http://localhost:3838/shiny_example" to "http://localhost:3838/shiny_example/" would get rewritten to "http://localhost/shiny_exampleshiny_example/", at which point you get a 404.
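To verify the redirect target after changing proxy_redirect, you could check what Location header the proxy now returns, e.g.:
curl -sI http://<ip-address>/shiny/shiny_example | grep -i '^location'
With the corrected mapping you would expect it to point back under /shiny/ on your own host rather than at localhost.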
There was nothing wrong with anything. Basically, one of the lines in /etc/nginx/nginx.conf was include /etc/nginx/sites-enabled/*, which was pulling in the default file for enabled sites, which has the following lines:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
which was overriding my listen directive for port 80 and my location / block. Commenting out the include directive that pulls in the default site config in /etc/nginx/nginx.conf resolved all issues for me.
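Concretely, that boils down to one commented-out line in /etc/nginx/nginx.conf:
# include /etc/nginx/sites-enabled/*;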
Not sure if this is still relevant but I have a minimal example here: https://github.com/mRcSchwering/butterbirne
A service shinyserver (which is based on rocker/shiny) is started with a service webserver (based on nginx:latest):
version: '2'
services:
  shinyserver:
    build: shinyserver/
  webserver:
    build: webserver/
    ports:
      - 80:80
I configured nginx so that it proxies directly to the Shiny Server root. In my case I added the app (called myapp here) as the root of shinyserver (so no /myapp is needed). This is the whole nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # apparently this is needed for shiny server
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    # proxy shinyserver
    server {
        listen 80;

        location / {
            proxy_pass http://shinyserver:3838;
            proxy_redirect http://shinyserver:3838/ $scheme://$host/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 20d;
            proxy_buffering off;
        }
    }
}
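With that in place, bringing the pair up is just the usual compose workflow (assuming both images build cleanly):
docker-compose up --build -d
# then browse to http://localhost/ - nginx proxies straight to the Shiny app root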
I have an inner server that runs my application on port 9001. I want people to access this application through nginx, which runs on an Ubuntu machine in a DMZ network.
I have built nginx from source with the sticky and SSL modules. It runs fine but does not do the proxy pass.
The DNS name for the outer IP of the server is bd.com.tr, and I want people to see the page http://bd.com.tr/public/control.xhtml when they enter bd.com.tr. But even though nginx redirects the root request to my desired path, the application does not show up.
My nginx.conf file is:
worker_processes 4;

error_log logs/error.log;
worker_rlimit_nofile 20480;
pid logs/nginx.pid;

events {
    worker_connections 1900;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    server_tokens off;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    keepalive_timeout 75;
    rewrite_log on;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Ssl on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 150;

    server {
        listen 80;
        client_max_body_size 300M;

        location = / {
            rewrite ^ http://bd.com.tr/public/control.xhtml redirect;
        }

        location /public {
            proxy_pass http://BACKEND_IP:9001;
        }
    }
}
What might I be missing?
It was a silly problem and I found it. The conf file is correct, so you can use it if you want. The problem was that port 9001 of the BACKEND_IP was not forwarded, and thus nginx was not able to reach the inner service. After forwarding the port, it worked fine. I found the problem in error.log, so if you encounter such a problem, please check the error logs first :)
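As a quick way to rule this kind of thing out next time: from the nginx box, check whether the backend is reachable at all and watch the error log while making a request (BACKEND_IP is the same placeholder as in the conf above; the error_log path is relative to your build prefix, per "error_log logs/error.log"):
curl -I http://BACKEND_IP:9001/public/control.xhtml   # should return an HTTP status, not a timeout
tail -f logs/error.log                                # look for "connect() failed" or similar upstream errors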