Cache some API requests in Nginx

I'm seeking advice from the experts here.
We have the following scenario: a Java application running on Tomcat 7, which acts as our API server. The user interface files (static HTML and CSS) are served by nginx, which also acts as a reverse proxy: all API requests are passed to the API server, and the rest is served by nginx directly.
What we want is to implement a caching mechanism: enable the cache for everything, with a few exceptions. We want to exclude some API requests from being cached.
Our configuration is shown below:
server {
    listen 443 ssl;
    server_name ~^(?<subdomain>.+)\.ourdomain\.com$;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    if ($request_method !~ ^(GET|HEAD|POST)$ ) {
        return 405;
    }
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    location / {
        root /var/www/html/userUI;
        location ~* \.(?:css|js)$ {
            expires 1M;
            access_log off;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
    }
    location /server {
        proxy_pass http://upstream/server;
        proxy_set_header Host $subdomain.ourdomain.com;
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_temp_path /var/nginx/proxy_temp;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_cache sd6;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache_bypass $http_cache_control;
    }
    ssl on;
    ssl_certificate /etc/nginx/ssl/ourdomain.com.bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/ourdomain.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 24h;
    keepalive_timeout 300;
}
As shown above, we use the cache only for static files located in /var/www/html/userUI.
We want to implement the same in location /server, which is our API server: nginx passes API requests to the Tomcat 7 (upstream) server. We want to enable the cache for specific API requests only, and disable it for all the rest.
We want to do the following:
Exclude all JSON requests from the cache, but enable the cache for a few.
A request URL will look like this:
Request URL: https://ourdomain.com/server/user/api/v7/userProfileImage/get?loginName=user1&_=1453442399073
This URL fetches the profile image. We want to enable the cache for this specific URL, so the condition we would like to use is: if the request URL contains "/userProfileImage/get", cache it; all other requests should not be cached.
To achieve this we changed the settings to the following:
location /server {
    set $no_cache 0;
    if ($request_uri ~* "/server/user/api/v7/userProfileImage/get*") {
        set $no_cache 1;
    }
    proxy_pass http://upstream/server;
    proxy_set_header Host $subdomain.ourdomain.com;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_temp_path /var/nginx/proxy_temp;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_redirect off;
    proxy_cache sd6;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_no_cache $no_cache;
    proxy_cache_bypass $no_cache;
}
Below is the HTTP response we observe:
General :
Request URL:https://ourdomain.com/server/common/api/v7/userProfileImage/get?loginName=user1
Request Method:GET
Status Code:200 OK
Remote Address:131.212.98.12:443
Response Headers :
Cache-Control:no-cache, no-store, must-revalidate
Connection:keep-alive
Content-Type:image/png;charset=UTF-8
Date:Fri, 22 Jan 2016 07:36:56 GMT
Expires:Thu, 01 Jan 1970 00:00:00 GMT
Pragma:no-cache
Server:nginx
Transfer-Encoding:chunked
X-Proxy-Cache:MISS
Please advise us on a solution.
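A sketch of one possible direction, assuming the intent stated above: proxy_no_cache disables caching when its value is non-empty and non-zero, so setting $no_cache to 1 for the profile-image URL disables caching for exactly the request that should be cached; the flag appears inverted. Also, a trailing get* in the regex matches "ge" followed by zero or more "t" characters; the plain literal is enough. Finally, the response headers show Cache-Control: no-cache and an Expires date in the past, which nginx honors by default, so the response would still not be stored unless those headers are ignored. The cache-validity value below is an assumed placeholder:

location /server {
    # default: cache nothing
    set $no_cache 1;
    if ($request_uri ~* "/userProfileImage/get") {
        set $no_cache 0;   # cache only the profile-image requests
    }

    # ... keep the existing proxy_* directives from the block above ...

    proxy_cache sd6;
    # the upstream sends Cache-Control: no-cache and an expired Expires header;
    # ignore them so nginx is allowed to store the response
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_cache_valid 200 10m;    # assumed validity window; adjust as needed
    proxy_no_cache $no_cache;
    proxy_cache_bypass $no_cache;
    add_header X-Proxy-Cache $upstream_cache_status;
}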

Related

Scaling Flask-SocketIO with Redis connecting to only one server port

I have a real-time chat application and am unable to scale it.
I have a frontend nginx which proxy-passes to a backend nginx; the backend nginx then upstreams to multiple Flask-SocketIO instances.
server {
listen 443 ssl;
#server_name xx.xxx.xx.xxx;
client_max_body_size 300M;
ssl_certificate /etc/nginx/ssl/newssl/ssl_bundle.crt;
ssl_certificate_key /etc/nginx/ssl/newssl/example.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers 'xxx+xxxx+xxxx';
set_real_ip_from 0.0.0.0/0;
real_ip_header X-Real-IP;
real_ip_recursive on;
add_header X-Frame-Options "SAMEORIGIN" ;
add_header Access-Control-Allow-Origin *;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
proxy_read_timeout 120s;
proxy_connect_timeout 120s;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 65s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
client_max_body_size 0M;
proxy_pass http://live_chatnodes;
Backend nginx:
server {
listen 80;
server_name 10.0.2.9;
client_max_body_size 300M;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
proxy_read_timeout 120s;
proxy_connect_timeout 120s;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 65s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
client_max_body_size 0M;
proxy_pass http://live_nodes;
}
This now passes requests to the upstream, which has 15 nodes; however, all the requests are going to one Flask-SocketIO instance, which is causing my system to go down. One last piece of info: the upstream uses ip_hash, as per the Flask-SocketIO documentation.
All my clients are using a VPN, so they have static IPs as well.
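If the upstream balances with ip_hash and every connection arrives from the same VPN egress address, every client will hash to the same node. A sketch of one alternative, assuming the frontend nginx forwards the real client IP in X-Forwarded-For as configured above: hash on that header instead, which keeps the per-client stickiness Socket.IO needs while spreading distinct clients across nodes. The server addresses below are placeholders:

upstream live_nodes {
    # sticky per forwarded client IP instead of per VPN egress address
    hash $http_x_forwarded_for consistent;
    server 10.0.2.11:5000;   # placeholder node addresses
    server 10.0.2.12:5000;
    # ... the remaining nodes ...
}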

NGINX Reverse Proxy (Proxy Manager) to OpenResty (NGINX) API Issues

I am hosting an application (Tandoor) in an LXC container.
On my host I have another container with Nginx Proxy Manager installed.
The proxy manager forwards (and forces SSL for) all incoming connections to the correct container; this works perfectly for all my other applications.
I am really struggling to understand why Tandoor will not work.
My config on the Reverse Proxy:
location /authelia {
internal;
set $upstream_authelia http://10.10.10.4:9091/api/verify;
proxy_pass_request_body off;
proxy_pass $upstream_authelia;
proxy_set_header Content-Length "";
# Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
client_body_buffer_size 128k;
proxy_set_header Host $http_host;
proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-Uri $request_uri;
proxy_set_header X-Forwarded-Ssl on;
proxy_redirect http:// $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 4 32k;
send_timeout 5m;
proxy_read_timeout 240;
proxy_send_timeout 240;
proxy_connect_timeout 240;
}
location / {
set $upstream_app $forward_scheme://$server:$port;
proxy_pass $upstream_app;
auth_request /authelia;
auth_request_set $target_url $scheme://$http_host$request_uri;
auth_request_set $user $upstream_http_remote_user;
auth_request_set $groups $upstream_http_remote_groups;
proxy_set_header Remote-User $user;
proxy_set_header Remote-Groups $groups;
error_page 401 =302 https://auth.named-lab.net/?rd=$target_url;
client_body_buffer_size 128k;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
send_timeout 5m;
proxy_read_timeout 360;
proxy_send_timeout 360;
proxy_connect_timeout 360;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-Uri $request_uri;
proxy_set_header Access-Control-Allow-Origin *;
proxy_set_header X-Forwarded-Ssl on;
proxy_redirect http:// $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 64 256k;
set_real_ip_from 192.168.4.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
}
location /accounts/logout/ {
return 301 https://auth.named-lab.net/logout;
}
My config on the OpenResty (Tandoor) server:
server {
server_name tandoor.named-lab.net;
listen 8002;
#access_log /var/log/nginx/access.log;
#error_log /var/log/nginx/error.log;
#add_header Content-Security-Policy "default-src 'self';";
#add_header 'Access-Control-Allow-Origin' '*';
#proxy_set_header 'Content-Security-Policy' 'upgrade-insecure-requests';
#proxy_set_header 'Access-Control-Allow-Origin' '*';
#proxy_set_header Content-Security-Policy "default-src 'self';";
# serve media files
location /static {
alias /var/www/recipes/staticfiles;
}
location /media {
alias /var/www/recipes/mediafiles;
}
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $http_host;
proxy_pass http://unix:/home/recipes/recipes.sock;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header 'Content-Security-Policy' "default-src 'self';";
proxy_set_header 'Content-Security-Policy' 'upgrade-insecure-requests';
}
}
I can see the interface load fine, and the auth system works great too. However, as soon as I do anything that involves the API (https://tandoor.named-lab.net/api/*), the data doesn't load.
If I go directly to https://tandoor.named-lab.net//api/ (note the double slash), the JSON data shows without issue...

Nginx proxy_cache miss if URI has slash

My nginx location block is:
location ^~ /get/preview {
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_buffering on;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_ignore_headers Cache-Control Set-Cookie;
    proxy_ssl_protocols TLSv1.3;
    proxy_ssl_session_reuse on;
    proxy_cache upstream;
    proxy_cache_key $scheme$host$uri$is_args$args;
    proxy_cache_methods GET HEAD;
    proxy_cache_min_uses 0;
    proxy_cache_valid 200 301 302 1h;
    proxy_cache_use_stale updating;
    proxy_cache_background_update on;
    proxy_cache_lock on;
    proxy_pass https://tar.backend.com;
}
This will be a HIT after the 1st request:
https://example.com/get/preview?fileId=17389&x=256&y=256&a=true&v=5fe320ede1bb5
This is always a MISS:
https://example.com/get/preview.png?file=/zedje/118812514_3358890630894241_5001264763560347393_n.jpg&c=5fe3256d45a8c&x=150&y=150
You should check the "Expires" header from your upstream. The documentation says: "parameters of caching may be set in the header fields 'Expires' or 'Cache-Control'."
Another option: maybe you have another location for \.(png|jpg|css|js)$ files with different options.
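Following that first point: the location above already ignores Cache-Control and Set-Cookie, but not Expires. If the upstream sets an Expires header on the .png response, one adjustment (a sketch) is to extend the existing line so validity comes solely from proxy_cache_valid:

location ^~ /get/preview {
    # also ignore Expires, so an upstream expiry date cannot suppress caching
    proxy_ignore_headers Cache-Control Set-Cookie Expires;
    # with the headers ignored, this directive alone controls freshness
    proxy_cache_valid 200 301 302 1h;
    # ... rest of the block unchanged ...
}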

How to increase nginx timeout for upstream uWSGI server?

Stack used:
Nginx -> Uwsgi (proxy passed) -> Django
I have an API that takes around 80 seconds to execute a query. Nginx closes the connection with the upstream server after 60 seconds. This was found in the nginx error log:
upstream prematurely closed connection while reading response header from upstream
The uWSGI and django application logs do not show anything weird.
This is my nginx configuration:
server {
listen 80;
server_name xxxx;
client_max_body_size 10M;
location / {
include uwsgi_params;
proxy_pass http://127.0.0.1:8000;
proxy_connect_timeout 10m;
proxy_send_timeout 10m;
proxy_read_timeout 10m;
proxy_buffer_size 64k;
proxy_buffers 16 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_pass_header Set-Cookie;
proxy_redirect off;
proxy_hide_header Vary;
proxy_set_header Accept-Encoding '';
proxy_ignore_headers Cache-Control Expires;
proxy_set_header Referer $http_referer;
proxy_set_header Host $host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
How do I increase the timeout? I have tried setting the proxy_pass timeout variables, but they do not seem to be working.
Okay, so I managed to solve this issue by replacing proxy_pass with uwsgi_pass.
This is how my nginx conf looks now:
server {
listen 80;
server_name xxxxx;
client_max_body_size 4G;
location /static/ {
alias /home/rmn/workspace/mf-analytics/public/;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/tmp/uwsgi_web.sock;
uwsgi_read_timeout 600;
}
}
And I had to set the socket parameter in my uWSGI ini file.
For some reason, the proxy_pass timeouts just wouldn't take effect.
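For reference, a minimal uWSGI ini matching the socket path used in the nginx config above might look like this (the module and process values are placeholders, not taken from the question):

[uwsgi]
; socket must match the uwsgi_pass path in nginx
socket = /tmp/uwsgi_web.sock
chmod-socket = 664
master = true
processes = 4                  ; placeholder
module = myproject.wsgi        ; placeholder Django WSGI module
; let the ~80-second query finish before uWSGI kills the worker
harakiri = 600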

Cannot Access Glassfish4 Admin console via nginx location and proxy pass

Folks,
We have a Java application running under GlassFish 4. I wanted to disable direct access to the GlassFish admin server by closing port 4848 at the firewall level and accessing it via a location directive in nginx (also offloading the SSL to nginx).
With asadmin enable-secure-admin turned on, I can get into the admin server via https://foo.domain.com:4848 and administer it normally.
However, when I disable secure admin via asadmin disable-secure-admin and access it with the following location block
# Reverse proxy to access Glassfish Admin server
location /Glassfish {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_max_temp_file_size 0;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_pass http://127.0.0.1:4848;
}
at https://foo.domain.com/Glassfish, I get a blank screen, and the only reference I can find in the nginx error logs is
2015/10/05 09:13:57 [error] 29429#0: *157 open() "/usr/share/nginx/html/resource/community-theme/images/login-product_name_open.png" failed (2: No such file or directory), client: 104.17.0.4, server: foo.domain.com, request: "GET /resource/community-theme/images/login-product_name_open.png HTTP/1.1", host: "foo.domain.com", referrer: "https://foo.domain.com/Glassfish"
Reading the docs and around the net, I do see that:
Secure Admin must be enabled to access the DAS remotely
Is what I'm trying to do simply impossible?
Edit: As requested below is the full nginx configuration.
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
#sendfile off;
tcp_nopush on;
tcp_nodelay off;
#keepalive_timeout 65;
types_hash_max_size 2048;
# Default HTTP server on 80 port
server {
listen 192.168.1.10:80 default_server;
#listen [::]:80 default_server;
server_name foo-dev.domain.com;
return 301 https://$host$request_uri;
}
# Default HTTPS server on 443 port
server {
listen 443;
server_name foo-dev.domain.com;
ssl_certificate /etc/ssl/certs/foo-dev.domain.com.crt;
ssl_certificate_key /etc/ssl/certs/foo-dev.domain.com.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/foo-dev.domain.com.access.ssl.log;
# Reverse proxy access to foo hospitality service implementation at BC back-end
location /AppEndPoint {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_max_temp_file_size 0;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_pass http://foo-dev.domain.com:8080;
}
# Reverse proxy to access Glassfish Admin server
location /Glassfish {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_max_temp_file_size 0;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_pass http://127.0.0.1:4848;
}
# Reverse proxy access to all processed servers by both client and server component
location /messages {
alias /integration/archive/app-messages/;
autoindex on;
#auth_basic "Integration Team Login";
#auth_basic_user_file /integration/archive/app-messages/requests/.htpasswd;
}
}
}
The /AppEndPoint location block is the Glassfish application server which works properly, it's only the /Glassfish location block that's giving me trouble.
Ok, thanks for your edit.
Try with:
listen 443 ssl;
By the way, a good configuration helper is offered by Mozilla: the SSL Configuration Generator.
Also, if you forward requests to location /Glassfish, you will have to trim the request URL to remove /Glassfish (see the nginx rewrite documentation).
By the way, does the rest of your config work over SSL?
Just change http to https in proxy_pass:
location / {
proxy_pass https://localhost:4848;
#proxy_http_version 1.1;
#proxy_set_header Upgrade $http_upgrade;
#proxy_set_header Connection 'upgrade';
#proxy_set_header Host $host;
#proxy_cache_bypass $http_upgrade;
}
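One way to do the /Glassfish trimming mentioned earlier (a sketch, keeping the proxy_set_header and buffer directives from the original block): with a trailing slash on both the location and proxy_pass, nginx replaces the matched prefix before passing the request upstream:

location /Glassfish/ {
    # the trailing slash on proxy_pass makes nginx substitute the matched
    # /Glassfish/ prefix with / before proxying to GlassFish
    proxy_pass http://127.0.0.1:4848/;
    # ... existing proxy_set_header / buffer directives ...
}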
As you say, I suppose you are having problems accessing the GlassFish Admin Console through nginx. I share an example of an entire nginx.conf file for a GlassFish server.
Note that the proxy_pass directive for location /admin should use https, because GlassFish requires https for access to the Admin Console.
One reason you may not be able to see the Admin Console is that when you access the page, its resources aren't loaded properly. You can inspect the loaded resources with your browser's developer tools to see the generated URLs, which can show you part of the solution.
With this configuration you should be able to access both parts of GlassFish: the main and Admin Console pages.
If you don't have a DNS server, you can access it using the server IP.
The SSL certificates used here were self-signed, for test purposes only; consider using a valid SSL certificate such as Let's Encrypt or one generated by a valid CA.
Ex:
http://192.168.1.15/glassfish
http://192.168.1.15/admin
The https redirection should work and finally you will be redirected at:
https://192.168.1.15/glassfish
https://192.168.1.15/admin
glassfish-nginx.conf
upstream glassfish {
server 127.0.0.1:8080;
}
upstream glassfishadmin {
server 127.0.0.1:4848;
}
server {
listen 80;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
set $glassfish_server glassfish;
set $glassfish_admin glassfishadmin;
server_name mydomain.com;
# sample site certificates
ssl_certificate /etc/nginx/server.crt;
ssl_certificate_key /etc/nginx/server.key;
ssl_trusted_certificate /etc/nginx/server.crt;
location /glassfish {
charset utf-8;
# limits
client_max_body_size 100m;
proxy_read_timeout 600s;
# buffers
proxy_buffers 16 64k;
proxy_buffer_size 128k;
# gzip
gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
gzip_vary on;
proxy_redirect off;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://$glassfish_server/;
}
location ~* \.(png|ico|gif|jpg|jpeg|css|js)$ {
proxy_pass https://$glassfish_admin/$request_uri;
}
location /admin {
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
proxy_pass_request_headers on;
proxy_no_cache $cookie_nocache $arg_nocache$arg_comment;
proxy_no_cache $http_pragma $http_authorization;
proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
proxy_cache_bypass $http_pragma $http_authorization;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host:$server_port; # NB: adding :$server_port here is important
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header Access-Control-Allow-Origin *;
proxy_set_header Access-Control-Allow-Origin *;
proxy_pass https://$glassfish_admin/;
}
}