Nginx can't resolve server names when loading them from a file - dictionary

I'm new to nginx and trying to achieve the following behaviour: I want to pass a header to nginx and route the request to another server based on that header. I want to load my map from a file, so I'm using the following configuration:
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;

    map $http_dest $new_server {
        include port_to_address.map;
        default google.com;
    }

    server {
        listen 200;
        server_name localhost;

        access_log logs/host.access.log main;

        location / {
            proxy_pass http://$new_server;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
with port_to_address.map in the following format:
value_1 yahoo.com;
value_2 netflix.com;
With this configuration nginx starts normally, but when I send it a header value that exists in the file, it returns the following error:
2021/04/18 21:52:49 [error] 17352#20504: *3 no resolver defined to resolve yahoo.com, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:200"
When I use the IP address instead of the server name, it works perfectly. I also have another nginx that round-robins requests between nodes read from a file (with upstream), and there I don't get any error even though I'm using server names rather than IPs.
I read about using resolver, but it didn't work and I'd prefer to avoid it. Is there another way to make this work without the resolver or the servers' IPs? (I don't mind changing the map file structure if needed.)
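One way to avoid the resolver entirely, assuming every target host is known when nginx starts: map header values to named upstream blocks instead of raw hostnames. Server names inside an upstream block are resolved once at startup by the system resolver, so no resolver directive is needed at request time. A sketch (the upstream names are made up):

```nginx
# Sketch: resolve hostnames at startup via upstream blocks instead of
# at request time. Upstream names below are illustrative.
map $http_dest $new_server {
    default  fallback_upstream;
    value_1  yahoo_upstream;
    value_2  netflix_upstream;
}

upstream yahoo_upstream    { server yahoo.com:80; }
upstream netflix_upstream  { server netflix.com:80; }
upstream fallback_upstream { server google.com:80; }
```

When proxy_pass http://$new_server; matches the name of a defined upstream group, nginx uses that group directly and performs no runtime DNS lookup; the included map file would then list upstream names rather than hostnames.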

Related

NGINX Reverse Proxy Fails with 404 despite being able to curl endpoint

I'm trying to reverse proxy to another endpoint on /alertmanager, but it fails to connect. Weirdly enough, I'm able to connect to the endpoint directly from inside the pod running nginx.
A quick overview of my application architecture:
nginx ingress on cluster -> nginx load balancer -> <many services on different endpoints>
This is a minimized nginx configuration that replicates the issue:
worker_processes 5;  ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;

events {}

http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path /tmp/proxy_temp_path;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stderr main;

    sendfile on;
    tcp_nopush on;
    resolver kube-dns.kube-system.svc.cluster.local;

    server {
        listen 8080;
        proxy_set_header X-Scope-OrgID 0;

        location = / {
            return 200 'OK';
            auth_basic off;
        }

        location /alertmanager {
            proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080$request_uri;
        }
    }
}
I'm able to curl the mimir endpoint directly, but going through the proxy, /alertmanager returns a 404. / works fine, and if I put the proxy_pass inside the / location, it does work.
Example of what I'm seeing:
/ $ curl localhost:8080/
OK
/ $ curl localhost:8080/alertmanager
the Alertmanager is not configured
Curling http://mimir-distributed-alertmanager.mimir.svc.cluster.local does in fact return the HTML of the page I'm expecting.
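One thing to note about proxy_pass semantics: with $request_uri appended, the full original URI, including the /alertmanager prefix, is forwarded to the upstream, so the "not configured" response is coming from the backend, not from nginx itself. If the backend actually serves its UI at the root rather than under /alertmanager, a prefix-stripping variant could be tried. A sketch, assuming that path layout:

```nginx
location /alertmanager/ {
    # Trailing slash on both sides: nginx replaces the matched prefix,
    # so /alertmanager/foo is forwarded as /foo to the upstream.
    proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080/;
}
```

Whether the prefix should be kept or stripped depends on which path the Alertmanager is configured to serve; comparing the working direct curl's path against what nginx forwards should settle it.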

'connection reset by peer' for large response body by nginx mirror module

I want to copy requests to another backend with ngx_http_mirror_module.
This is my nginx.conf.
Nginx version is 1.19.10
worker_processes 1;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8888;
        server_name localhost;

        location / {
            proxy_pass http://127.0.0.1:8080;
            mirror /mirror;
        }

        location /mirror {
            internal;
            proxy_pass http://127.0.0.1:18080$request_uri;
        }
    }
}
My Spring applications listen on 8080 and 18080.
The problem is that when the backend server handling the mirrored request returns a large response body, it throws a ClientAbortException caused by "connection reset by peer".
Nothing is recorded in the nginx error log, and the nginx access log records status 200 for the mirrored request.
The problem tends to occur when the response is 4 KB or larger. Increasing proxy_buffer_size may help, but with a sufficiently large response (8 KB or more?) the problem occurs even when the response is smaller than proxy_buffer_size. I also tried changing subrequest_output_buffer_size, but nothing changed.
How can I stop the error?
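Worth knowing here: nginx discards the body of a mirror subrequest, and when the mirrored response does not fit in the subrequest buffer, the connection to the mirror backend can be reset, which matches the ClientAbortException. Since the response is thrown away anyway, one pragmatic workaround, assuming you control the mirror backend, is to have it detect mirrored traffic and reply with an empty body (e.g. 204). A sketch using a made-up marker header:

```nginx
location /mirror {
    internal;
    proxy_pass http://127.0.0.1:18080$request_uri;
    # Hypothetical marker header: the Spring app on 18080 can check it
    # and return 204 with no body for mirrored requests, avoiding the
    # large response body that triggers the reset.
    proxy_set_header X-Mirror "1";
}
```

This does not make nginx consume large mirror responses; it sidesteps the issue by ensuring the mirror backend never sends one.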

nginx proxypass TCP 389

We have one OpenLDAP server with port 389 currently active. Using nginx, we want to proxy this TCP port 389 to a TCP-based ingress. Can anyone please share the nginx.conf details for this?
So far I'm left with the incomplete configuration below:
upstream rtmp_servers {
    server acme.example.com:389;
}

server {
    listen 389;
    server_name localhost:389;
    proxy_pass rtmp_servers;
    proxy_protocol on;
}
I'm getting the error below; any recommendation is appreciated:
2021/03/02 09:45:39 [emerg] 1#1: "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/nginx-auth-tunnel.conf:9
nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/nginx-auth-tunnel.conf:9
Your configuration should be in a stream block, and you don't need server_name localhost:389;.
You are including the configuration from the /etc/nginx/conf.d folder, which is included inside the http block of the main nginx.conf file. The stream block must be at the same level as the http block. Check /etc/nginx/nginx.conf for the include, and you may have to add a separate one for the stream section.
This is a sample nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf; # This include is your problem
}

stream {
    upstream rtmp_servers {
        server acme.example.com:389;
    }

    server {
        listen 389;
        proxy_pass rtmp_servers;
        proxy_protocol on;
    }
}
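If you would rather keep the stream server in its own file than inline in nginx.conf, a separate include at the stream level is needed, since the existing conf.d include only pulls files into the http block. A sketch (the stream.d folder name is an assumption):

```nginx
# In the main /etc/nginx/nginx.conf, alongside (not inside) the http block:
stream {
    # Hypothetical dedicated folder for stream-level configs, so the
    # http-level include of /etc/nginx/conf.d/*.conf no longer picks
    # up the TCP proxy file.
    include /etc/nginx/conf.d/stream.d/*.conf;
}
```

The nginx-auth-tunnel.conf file (without its own stream wrapper) would then move into that folder, and nginx -t would confirm the directives are now in a valid context.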

Testing load balancing in NGINX

I set up load balancing on NGINX using round robin for Apache Tomcat servers, with two servers in my proxy.conf file:
upstream appcluster1 {
    server IP_ADDRESS:8000;
    server IP_ADDRESS:8001;
}

server {
    location / {
        proxy_pass http://appcluster1;
    }
}
This is deployed in the cloud, and I am able to hit the endpoint successfully this way. However, I want to test whether nginx actually alternates between the two servers. How would I go about this?
I tried this method, but I do not see anything in the logs showing which server a request hit. Is there any other way to test whether nginx would go to the second server?
EDIT: I have another file called nginx.conf that looks like this:
load_module modules/ngx_http_js_module.so;
user nginx;
worker_processes auto;

events {
    worker_connections 2048;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    js_include auth.js;
    proxy_buffering off;

    log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                           'upstream_response_time $upstream_response_time'
                           ' request_time $request_time';

    # log_format main '$remote_addr - $remote_user [$time_local] $status '
    #                 '"$request" $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';
    # access_log logs/access.log main;
    # sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65s;
    proxy_connect_timeout 120s;
    keepalive_requests 50;

    include /etc/nginx/conf.d/*.conf;
}
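The upstreamlog format above already captures $upstream_addr, which records exactly which backend served each request, but no access_log directive appears to use it. A sketch of the missing piece, to be added inside the http block (the log path is an assumption):

```nginx
# Activate the already-defined upstreamlog format so every request
# logs the backend ($upstream_addr) that handled it.
access_log /var/log/nginx/upstream.log upstreamlog;
```

Then send a handful of requests (e.g. curl in a loop) and the "to: $upstream_addr" field should alternate between IP_ADDRESS:8000 and IP_ADDRESS:8001 under round robin.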

Nginx error -Starting nginx: nginx: emerg unknown "status" variable

What I am trying to do with nginx is to call a backend for authentication; if the response is successful I will redirect to website 1 (for example, google.com), and if authentication fails I will redirect to website 2 (facebook.com, for example).
Below is my nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '[===>$status] $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/default.conf;
}
The default.conf file is as below:
server {
    listen 80 default_server;
    server_name _;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        # root /usr/share/nginx/html;
        # index index.html index.htm;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://backend_ip_Address;

        set $my_var 0;
        if ($status = "200"){
            set $my_var 1;
        }
        #if($status = 4xx) {
        #    set $my_var 2;
        #}
        if ($my_var = 1){
            proxy_pass http://www.google.com;
        }
        if ($my_var = 2) {
            proxy_pass http://www.facebook.com;
        }
    }

    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
The issue I am facing is that when I try to execute sudo service nginx restart with this configuration, I get the error below:
Starting nginx: nginx: [emerg] unknown "status" variable
The same $status is also present in the nginx.conf log configuration, where it logs the response code properly (301, 200, etc.), but the same variable does not work in the default.conf file. Any help on what I am doing wrong? I tried replacing status with body_bytes_sent, and that works.
Searching Google (https://www.google.co.in/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=nginx++unknown+%22status%22+variable), the only related information is https://www.drupal.org/node/2738983, but it was not much help in resolving this.
The $status variable is only defined at a very late phase, after the request has been processed and the response is ready to be sent back. You cannot use it for conditional routing; it is normally only useful for logging.
Here you can read about nginx directive execution order and phases:
https://openresty.org/download/agentzh-nginx-tutorials-en.html
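For the routing the question actually wants, the usual tool is the auth_request module (available when nginx is built with --with-http_auth_request_module) rather than $status: the authentication call is made as a subrequest, and its 2xx-vs-401/403 outcome decides the route before the main request is proxied. A sketch, keeping the question's placeholder backend address:

```nginx
location / {
    # Subrequest to /auth decides access before anything is proxied.
    auth_request /auth;
    # Auth subrequest returned 401/403 -> route the client elsewhere.
    error_page 401 403 = @denied;
    proxy_pass http://www.google.com;
}

location = /auth {
    internal;
    proxy_pass http://backend_ip_Address;
    # auth_request ignores the request body; don't forward it.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}

location @denied {
    return 302 http://www.facebook.com;
}
```

The trade-off is an extra round trip to the auth backend per request, but it avoids if-based routing entirely, which nginx handles poorly inside location blocks.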
