Unable to detect proxy on new server - nginx

I use the following code to detect whether users are behind a proxy or VPN:
function checkUser()
{
    $proxy = null;
    $check = null;
    $proxy = ($_SERVER['HTTP_ACCEPT_ENCODING'] != 'gzip, deflate') ? true : false;
    if (empty($_SERVER['HTTP_CONNECTION']) || strtolower($_SERVER['HTTP_CONNECTION']) != 'keep-alive' || $_SERVER['HTTP_CACHE_CONTROL'] != 'max-age=0')
    {
        $check = ($proxy === true) ? 'proxy' : 'vpn';
    }
    return $check;
}

$connection = checkUser();
switch ($connection)
{
    case 'proxy': $var = 'It seems you are behind Proxy.'; break;
    case 'vpn':   $var = 'It seems you are using VPN.'; break;
    default:      $var = 'No Proxy or VPN detected.'; break;
}
echo $var;
It works just fine on an older server I have, but on the new one it simply doesn't. The new server uses nginx as a reverse proxy. Can someone tell me whether this has something to do with nginx and what I should adjust in the config? Thanks!
--- EDIT: ---
#user nginx;
worker_processes 4;
worker_rlimit_nofile 950000;

#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
#pid /var/run/nginx.pid;

events {
    worker_connections 45000;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 5;
    #tcp_nodelay on;

    #gzip on;
    #gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    server_tokens off;

    include /etc/nginx/conf.d/*.conf;

    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
}

Your code seems to be based on the assumption that a proxy would disable the use of gzip/deflate and/or keep-alive sessions. This is not an accurate assumption. It is easier to implement a proxy if these features are turned off, but per the spec there is nothing that precludes a proxy working correctly with these features on.
So yes, nginx is probably just a better proxy, and the assumptions made by the author of that code no longer hold.
The right way to check for an HTTP proxy is to look for the presence of an X-Forwarded-For header. Something like this would suffice:
function isProxied() {
    $headers = array_change_key_case(apache_request_headers());
    return isset($headers["x-forwarded-for"]);
}
Technically, a proxy can be implemented without advertising its presence (without adding an X-Forwarded-For header) - and some have an option to do this, in which case you're not really going to be able to detect it. But most proxies will cooperate with you.
Note that if you are using a proxy in your own server stack (i.e. if you are running Varnish, Nginx or something else in front of Apache), then that could also be adding an X-Forwarded-For header, so everything would appear to be proxied (based on this, it looks like nginx uses "X-Real-IP" by default, so you likely don't need to worry about this). If this is the case, either turn off that option in Varnish/Nginx/whatever, or parse the X-Forwarded-For header to see if there are two IPs there instead of one.
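If you need to tell those two cases apart, a minimal sketch (reusing the same apache_request_headers() approach as above; the helper name forwardedChain is made up for illustration) could look like this:
function forwardedChain() {
    // Hypothetical helper: return the list of IPs in X-Forwarded-For.
    // The header is a comma-separated chain: client, proxy1, proxy2, ...
    $headers = array_change_key_case(apache_request_headers());
    if (!isset($headers["x-forwarded-for"])) {
        return array();
    }
    return array_map('trim', explode(',', $headers["x-forwarded-for"]));
}

// If your own reverse proxy appends exactly one entry, more than one entry
// suggests an additional (external) proxy between the client and your stack.
$externallyProxied = count(forwardedChain()) > 1;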
Regarding VPN connections, I do not think you are going to find a reliable way to detect whether the user is on a VPN just from an incoming HTTP connection. You might, however, consider checking the origin IP to see if it comes from a known TOR exit node or something of the sort, depending on how much you care about that.
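If you do go down that road, a rough sketch might look like the following; it assumes you maintain your own list of known exit-node IPs in a file (tor-exit-nodes.txt is a made-up name, and keeping that list current is entirely up to you):
function isKnownTorExit($ip) {
    // Hypothetical helper: compare the client IP against a locally
    // maintained list of TOR exit-node addresses, one per line.
    $list = file('tor-exit-nodes.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    return $list !== false && in_array($ip, $list, true);
}

if (isKnownTorExit($_SERVER['REMOTE_ADDR'])) {
    // Treat the visitor as anonymized traffic and act accordingly.
}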

Related

NGINX Reverse Proxy Fails with 404 despite being able to curl endpoint

I'm trying to reverse proxy to another endpoint on /alertmanager, but it fails to connect. Weirdly enough, I'm able to connect to the endpoint directly from inside the pod running nginx.
A quick overview of my application architecture is this:
nginx ingress on cluster -> nginx load balancer -> <many services on different endpoints>
This is a minimized nginx configuration that replicates the issue:
worker_processes 5;  ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;

events {}

http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path /tmp/proxy_temp_path;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stderr main;

    sendfile on;
    tcp_nopush on;
    resolver kube-dns.kube-system.svc.cluster.local;

    server {
        listen 8080;
        proxy_set_header X-Scope-OrgID 0;

        location = / {
            return 200 'OK';
            auth_basic off;
        }

        location /alertmanager {
            proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080$request_uri;
        }
    }
}
I'm able to curl the Mimir endpoint at /alertmanager, but through nginx I can't reach /alertmanager without getting a 404 error. I can get to / just fine, and if I put the proxy_pass inside the / location it does work.
Example of what I'm seeing:
/ $ curl localhost:8080/
OK
/ $ curl localhost:8080/alertmanager
the Alertmanager is not configured
Curling http://mimir-distributed-alertmanager.mimir.svc.cluster.local does in fact return the HTML of the page I'm expecting.
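For reference (a generic sketch with a placeholder upstream called backend, not a confirmed fix for the Mimir setup above), this is how nginx treats the location prefix depending on whether proxy_pass carries a URI part:
# With a URI part on proxy_pass, the part of the request URI matching the
# location prefix is replaced by that URI:
location /alertmanager/ {
    # /alertmanager/foo  ->  http://backend:8080/foo
    proxy_pass http://backend:8080/;
}

# Without a URI part (and without variables), the request URI is passed
# to the upstream unchanged:
location /alertmanager {
    # /alertmanager/foo  ->  http://backend:8080/alertmanager/foo
    proxy_pass http://backend:8080;
}

# When proxy_pass uses variables (e.g. ...$request_uri, as above), the URI
# is sent to the upstream exactly as constructed.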

'connection reset by peer' for large response body by nginx mirror module

I want to copy requests to another backend with ngx_http_mirror_module.
This is my nginx.conf. The nginx version is 1.19.10.
worker_processes 1;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8888;
        server_name localhost;

        location / {
            proxy_pass http://127.0.0.1:8080;
            mirror /mirror;
        }

        location /mirror {
            internal;
            proxy_pass http://127.0.0.1:18080$request_uri;
        }
    }
}
My Spring applications listen on 8080 and 18080.
The problem is that when the backend server handling the mirrored request returns a large response body, it throws a ClientAbortException caused by a connection reset by peer.
Nothing is recorded in the nginx error log.
The nginx access log records status 200 for the mirrored request.
Problems tend to occur when the response size is about 4 KB or larger.
Increasing proxy_buffer_size may help, but for sufficiently large responses (8 KB or more?) the problem still occurs even when the response is smaller than proxy_buffer_size.
I tried to change subrequest_output_buffer_size, but nothing changed.
How can I stop the error?
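For reference, this is a minimal sketch of where the directives mentioned above would sit in this config; the sizes are purely illustrative, not a confirmed fix:
location /mirror {
    internal;
    # Illustrative sizes only: buffers used for the mirror subrequest response.
    subrequest_output_buffer_size 64k;
    proxy_buffer_size 64k;
    proxy_buffers 8 64k;
    proxy_pass http://127.0.0.1:18080$request_uri;
}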

Testing load balancing in NGINX

I set up round-robin load balancing in NGINX for Apache Tomcat servers, with two servers in my proxy.conf file:
upstream appcluster1 {
    server IP_ADDRESS:8000;
    server IP_ADDRESS:8001;
}

server {
    location / {
        proxy_pass http://appcluster1;
    }
}
This is deployed on the cloud, and I am able to hit the endpoint successfully this way. However, I want to verify that nginx actually alternates between the two servers. How would I go about this?
I tried this method, but I do not see anything in the logs that shows which server a request is hitting. Is there any other way I can test and see whether nginx would go to the second server?
EDIT: I have another file called nginx.conf that looks like this:
load_module modules/ngx_http_js_module.so;

user nginx;
worker_processes auto;

events {
    worker_connections 2048;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    js_include auth.js;
    proxy_buffering off;

    log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                           'upstream_response_time $upstream_response_time'
                           ' request_time $request_time';

    # log_format main '$remote_addr - $remote_user [$time_local] $status '
    #                 '"$request" $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';
    # access_log logs/access.log main;
    # sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65s;
    proxy_connect_timeout 120s;
    keepalive_requests 50;

    include /etc/nginx/conf.d/*.conf;
}
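One way to check (a sketch based on the upstreamlog format already defined above; the log path and header name are placeholders) is to actually write that format to a file and, temporarily, echo the chosen backend back to the client:
# Inside the http block: write the already-defined upstreamlog format to a
# file, so $upstream_addr is recorded for every proxied request.
access_log /var/log/nginx/upstream.log upstreamlog;

# Inside the server or location block in proxy.conf: temporarily expose the
# backend that handled the request (remove once you are done testing).
add_header X-Upstream-Addr $upstream_addr always;
Repeated curl -i requests against the endpoint should then show the X-Upstream-Addr value (and the logged $upstream_addr) alternating between the two servers under round-robin.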

nginx and uwsgi: large difference between upstream response time and request time

Disclaimer: this is technically related to a school project, but I've talked to my professor and he is also confused by this.
I have an nginx load balancer reverse proxying to several uwsgi + Flask apps. The apps are meant to handle very high throughput/load. My response times from uwsgi are pretty good, and the nginx server has low CPU usage and load average, but the overall request time is extremely high.
I've looked into this issue and all the threads I've found say that this is always caused by the client having a slow connection. However, the requests are being made by a script on the same network, and this issue isn't affecting anyone else's setup, so it seems to me that it's a problem with my nginx config. This has me totally stumped though because it seems almost unheard of for nginx to be the bottleneck like this.
To give an idea of the magnitude of the problem, there are three primary request types: add image, search, and add tweet (it's a twitter clone).
For add image, the overall request time is ~20x longer than the upstream response time on average. For search, it's a factor of 3, and add tweet 1.5. My theory for the difference here is that the amount of data being sent back and forth is much larger for add image than either search or add tweet, and larger for search than add tweet.
My nginx.conf is:
user www-data;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 30000;

events {
    worker_connections 30000;
}

http {
    # Settings.
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_buffer_size 200K;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    # Logging
    log_format req_time '$remote_addr - $remote_user [$time_local] '
                        'REQUEST: $request '
                        'STATUS: $status '
                        'BACK_END: $upstream_addr '
                        'UPSTREAM_TIME: $upstream_response_time s '
                        'REQ_TIME: $request_time s '
                        'CONNECT_TIME: $upstream_connect_time s';
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log req_time;

    # GZIP business
    gzip on;
    gzip_disable "msie6";

    # Routing.
    upstream media {
        server 192.168.1.91:5000;
    }
    upstream search {
        least_conn;
        server 192.168.1.84:5000;
        server 192.168.1.134:5000;
    }
    upstream uwsgi_app {
        least_conn;
        server 192.168.1.85:5000;
        server 192.168.1.89:5000;
        server 192.168.1.82:5000;
        server 192.168.1.125:5000;
        server 192.168.1.86:5000;
        server 192.168.1.122:5000;
        server 192.168.1.128:5000;
        server 192.168.1.131:5000;
        server 192.168.1.137:5000;
    }

    server {
        listen 80;
        server_name localhost;

        location /addmedia {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass media;
        }
        location /media {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass media;
        }
        location /search {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass search;
        }
        location /time-search {
            rewrite /time-search(.*) /times break;
            include uwsgi_params;
            uwsgi_pass search;
        }
        location /item {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            if ($request_method = DELETE) {
                uwsgi_pass media;
            }
            if ($request_method = GET) {
                uwsgi_pass uwsgi_app;
            }
            if ($request_method = POST) {
                uwsgi_pass uwsgi_app;
            }
        }
        location / {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass uwsgi_app;
        }
    }
}
And my uwsgi ini is:
[uwsgi]
chdir = /home/ubuntu/app/
module = app
callable = app
master = true
processes = 25
socket = 0.0.0.0:5000
socket-timeout = 5
die-on-term = true
home = /home/ubuntu/app/venv/
uid = 1000
buffer-size=65535
single-interpreter = true
Any insights as to the cause of this problem would be greatly appreciated.
So, I think I figured this out. From reading the nginx docs (https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/) it seems that there are three metrics to pay attention to: upstream_response_time, request_time, and upstream_connect_time. I was focused on the difference between upstream_response_time and request_time.
However, upstream_response_time is the time between the upstream accepting the request and returning a response. It doesn't include upstream_connect_time, the time it takes to establish a connection to the upstream server. And in the context of uwsgi, this is very important, because if there isn't a worker available to accept a request, the request gets put on a backlog. I think the time a request waits on the backlog might count as upstream_connect_time, not upstream_response_time, in nginx, because uwsgi hasn't read any of the bytes yet.
Unfortunately, I can't be 100% certain, because I never got a "slow" run where I was logging upstream_connect_time. But the only changes that improved my score were "make uwsgi faster" changes (devoting more cores to searching, increasing the replication factor in the DB to make searches faster)... So yeah, it turns out the answer was just to increase throughput for the apps.
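For anyone wanting to confirm this on their own setup, a log format that records all three metrics side by side might look like this (a sketch; the format name and log path are arbitrary, and $upstream_connect_time requires a reasonably recent nginx):
log_format timing '$remote_addr [$time_local] "$request" $status '
                  'connect=$upstream_connect_time s '
                  'response=$upstream_response_time s '
                  'request=$request_time s';
access_log /var/log/nginx/timing.log timing;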

nginx does not resolve upstream

I have two AP servers, and I want to set up NGINX as a proxy server and load balancer.
Here is my nginx.conf file:
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    large_client_header_buffers 8 1024k;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 650;
    send_timeout 2000;
    proxy_connect_timeout 2000;
    proxy_send_timeout 2000;
    proxy_read_timeout 2000;

    gzip on;

    #
    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    map $http_upgrade $connection_upgrade {
        default Upgrade;
        '' close;
    }

    upstream backend {
        server apserver1:8443;
        server apserver2:8443;
    }

    server {
        listen 8445 default ssl;
        server_name localhost;
        client_max_body_size 500M;
        client_body_buffer_size 128k;
        underscores_in_headers on;

        ssl on;
        ssl_certificate ./crt/server.crt;
        ssl_certificate_key ./crt/server.key;

        location / {
            proxy_pass https://backend;
            break;
        }
    }
}
apserver1 and apserver2 are my AP servers; in fact, they are IP addresses.
When I visit nginx via https://my.nginx.server:8445, I get the AP container's default page, which in my case is the Jetty server's default page. That means NGINX works.
If everything goes correctly, a user accessing https://my.nginx.server:8445/myapp gets the login page, and if the user has already logged in, my app redirects them to https://my.nginx.server:8445/myapp/defaultResource.
When I visit https://my.nginx.server:8445/myapp as a NOT-logged-in user, I get the login page correctly.
When I visit https://my.nginx.server:8445/myapp/defaultResource directly as a logged-in user, I get the correct page.
But when I visit https://my.nginx.server:8445/myapp as a logged-in user (when the URL should be redirected to https://my.nginx.server:8445/myapp/defaultResource), nginx translates the URL to https://backend/myapp/defaultResource, and Chrome gives me the following error:
The server at backend can't be found, because the DNS lookup failed....(omitted)
nginx seems not to resolve the upstream backend. What's wrong with my configuration?
And if I use http instead of https, everything goes well.
Any help is appreciated.
Try to add the "resolver" directive to your configuration:
http://nginx.org/r/resolver
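For example, something along these lines inside the http block (the nameserver address is a placeholder; point it at whatever DNS actually resolves apserver1/apserver2 in your environment):
http {
    # Placeholder nameserver; re-check cached answers every 30 seconds.
    resolver 10.0.0.2 valid=30s;
    resolver_timeout 5s;

    # (rest of the existing http block unchanged)
}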
