Nginx proxy - too many redirects

I don't know what's wrong. When I try to access www.befound.com.ar, I get this error in the browser: ERR_TOO_MANY_REDIRECTS
This is my nginx.conf:
#user nobody;
worker_processes 1;

error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #tcp_nopush on;

    keepalive_timeout 65;

    server {
        listen 80;
        server_name www.befound.com.ar;

        location / {
            proxy_pass http://www.befound.com.ar:8090/befound;
        }
    }
}

I am assuming the service running on port 8090 does not have its own nginx configuration, so I think you need to change the proxy_pass host to localhost, i.e. 127.0.0.1:8090.
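Something like this, as a sketch (it assumes the backend listens locally on port 8090 and still serves the app under /befound):

server {
    listen 80;
    server_name www.befound.com.ar;

    location / {
        # Proxy straight to the local backend instead of the public
        # hostname, so the request does not re-enter this server block.
        proxy_pass http://127.0.0.1:8090/befound;
    }
}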

proxy_set_header Host 'HOST_WHICH_YOU_WANT_TO_PROXY';
will help in 95% of cases :)

Related

nginx backup directive in upstream not working

This is my nginx config file, which is embedded in a Kubernetes ConfigMap.
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    upstream backend {
        server flask1-service.mohammad-elastic.svc.cluster.local:8080 weight=5;
        server flask2-service.mohammad-elastic.svc.cluster.local:8081 backup;
    }

    upstream test_backend {
        server flask2-service.mohammad-elastic.svc.cluster.local:8081;
    }

    server {
        server_name mirroring;
        listen 80;

        access_log /var/log/nginx/proxy.log;
        error_log /var/log/nginx/proxy.error.log info;

        location / {
            mirror /mirror;
            proxy_pass http://backend;
            proxy_next_upstream http_404 non_idempotent;
        }

        location = /mirror {
            proxy_pass http://test_backend$request_uri;
        }
    }
}
When flask1, the primary app, is down, the flask2 app does not take over as backup.
I mean, sometimes the request gets a result, but most of the time it gets a 502 Bad Gateway.
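One thing worth checking, as an assumption about the cause: proxy_next_upstream http_404 non_idempotent; replaces the directive's default value of error timeout, so a connection error to the dead flask1 no longer triggers a retry against the backup server. A sketch that keeps those conditions:

location / {
    mirror /mirror;
    proxy_pass http://backend;
    # Keep the default error/timeout conditions so a dead primary
    # still fails over to the backup server.
    proxy_next_upstream error timeout http_404 non_idempotent;
}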

NGINX resolving a non configured domain, why?

I have one server running on: http://localhost:8080
I'm configuring a sample NGINX server.
I copied the following configuration from the internet:
# user nobody;
worker_processes 1;

error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;
    # gzip on;

    server {
        listen 80;
        server_name mydomain01.com www.mydomain01.com;

        location / {
            proxy_pass http://localhost:8080;
            include "../proxy_params.conf";
        }
    }
}
On the hosts file I have just the following entries:
127.0.0.1 mydomain01.com
127.0.0.1 www.mydomain01.com;
127.0.0.1 mydomain02.com
127.0.0.1 www.mydomain02.com;
When I go to: http://mydomain01.com I get the same content as on: http://localhost:8080
My question is:
Why, when I go to http://mydomain02.com, do I also get the same content as on http://localhost:8080?
I think I should not get that content because this last domain is not in the NGINX configuration.
Do I have an error on the configuration above?
Thanks!
nginx always contains a default server which will handle requests for host names that do not match any server_name directive. If you do not define a default_server explicitly, nginx uses the first server block with a matching listen directive. See this document for details.
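If you want unmatched host names refused instead, a catch-all default server is one way to do it; a sketch (444 is nginx's non-standard "close the connection" code):

server {
    listen 80 default_server;
    server_name _;
    # Drop requests whose Host header matches no other server block.
    return 444;
}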

Nginx https reverse proxy another 502

Hi, I am trying to set up nginx as a reverse proxy to an application that I am running on a Tomcat server. When I access my application over http it works fine, but when I access it over https I get a 502 error.
Here is my nginx config file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log notice;

    gzip on;
    gzip_disable "msie6";

    rewrite_log on;

    server {
        ssl on;
        listen 80;
        listen 443 ssl;
        server_name myapp.local;
        ssl_certificate max.local.crt;
        ssl_certificate_key server.key;
        #ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        #ssl_ciphers RC3:HIGH:!aNULL:!MD5;
        #ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        keepalive_timeout 60;

        error_log /var/log/nginx/hybris.log;
        rewrite_log on;

        set $my_port 9001;
        set $my_protocol "http";

        if ($scheme = https) {
            set $myport 9002;
            set $my_protocol "https";
        }

        location / {
            if ($http_user_agent ~ "Chrome") {
                # just a proof of concept
                return 301 http://$host/AE/en;
            }
            if ($http_user_agent ~ "Firefox") {
                # just a proof of concept
                return 301 http://google.com/;
            }
        }

        location /AE/en {
            proxy_pass $scheme://10.0.2.2:$my_port;
            proxy_set_header Host $host;
        }

        location ~(?:/..)?/_ui/(.*) {
            proxy_pass http://10.0.2.2:9001/_ui/$1;
            proxy_set_header Host $host;
        }
    }
}
When using https you are changing the port and also the scheme for connecting to the Tomcat server, which does not really make sense. You would typically only use https to a backend server when the link to it is untrusted (for example, another datacenter), not within a local network. It should work fine if you remove the $my_port and $my_protocol definitions and change your /AE/en location block to
location /AE/en {
    proxy_pass http://10.0.2.2:9001;
    proxy_set_header Host $host;
}
I think you need to create two server sections: one listening on port 80, and the other listening on port 443 for https.
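A sketch of that split, reusing the certificate names from the question (note there is no ssl on; here, since listen 443 ssl already enables it):

server {
    listen 80;
    server_name myapp.local;

    location / {
        proxy_pass http://10.0.2.2:9001;
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name myapp.local;
    ssl_certificate max.local.crt;
    ssl_certificate_key server.key;

    location / {
        proxy_pass http://10.0.2.2:9001;
        proxy_set_header Host $host;
    }
}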

How to enable nginx status page?

I need the IP/nginx_status page for my check_nginx_status Nagios plugin. I followed some instructions:
nginx -V | grep --color -o http_stub_status #some HttpStubStatusModule verification
In nginx.conf, I added:
http {
    ...
    server {
        location /nginx_status {
            stub_status on;
            access_log off;
            allow MY_IP;
            deny all;
        }
    }
    ...
}
After an nginx reload, the page should be available.
But I get "The page you were looking for doesn't exist."
I have nginx v1.6.0.
nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    server {
        listen 80;
        location /nginx_status {
            stub_status on;
            access_log off;
            #allow 107.170.106.199;
            #deny all;
        }
    }

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

nginx does not resolve upstream

I have two AP servers, and I want to set up NGINX as a proxy server and load balancer.
Here is my nginx.conf file:
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    large_client_header_buffers 8 1024k;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 650;
    send_timeout 2000;
    proxy_connect_timeout 2000;
    proxy_send_timeout 2000;
    proxy_read_timeout 2000;

    gzip on;

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        '' close;
    }

    upstream backend {
        server apserver1:8443;
        server apserver2:8443;
    }

    server {
        listen 8445 default ssl;
        server_name localhost;

        client_max_body_size 500M;
        client_body_buffer_size 128k;
        underscores_in_headers on;

        ssl on;
        ssl_certificate ./crt/server.crt;
        ssl_certificate_key ./crt/server.key;

        location / {
            proxy_pass https://backend;
            break;
        }
    }
}
apserver1 and apserver2 are my AP servers, and in fact they are IP addresses.
When I visit nginx via https://my.nginx.server:8445, I get the AP container's default page; in my case, it is the Jetty server's default page. That means NGINX works.
If everything goes correctly, a user accessing https://my.nginx.server:8445/myapp will get the login page, and once the user has logged in, my app redirects them to https://my.nginx.server:8445/myapp/defaultResource.
When I visit https://my.nginx.server:8445/myapp as a not-logged-in user, I get the login page correctly.
When I visit https://my.nginx.server:8445/myapp/defaultResource directly as a logged-in user, I get the correct page.
But when I visit https://my.nginx.server:8445/myapp as a logged-in user (when the URL should be redirected to https://my.nginx.server:8445/myapp/defaultResource), nginx translates the URL to https://backend/myapp/defaultResource, and Chrome gives me the following error:
The server at backend can't be found, because the DNS lookup failed....(omitted)
nginx seems not to resolve the upstream backend. What's wrong with my configuration?
AND if I use http instead of https, everything goes well.
Any help is appreciated.
Try adding the "resolver" directive to your configuration:
http://nginx.org/r/resolver
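For example (a sketch; the name server address is just a placeholder for one nginx can reach):

http {
    # DNS server nginx uses to resolve host names at run time;
    # cached answers are re-resolved every 30 seconds.
    resolver 8.8.8.8 valid=30s;
}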
