I want to run a local nginx proxy to cache all the responses coming from a remote server.
This should work, but the only outcome I'm getting is a straight redirect from localhost:81 to www.stackoverflow.com. What am I missing?
proxy_cache_path /Temp/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;
server {
    listen 81;
    listen [::]:81;
    server_name localhost;
    location / {
        proxy_pass https://www.stackoverflow.com/;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_ignore_headers Expires Cache-Control Set-Cookie Vary;
        add_header X-Proxy-Cache $upstream_cache_status;
    }
}
You're missing that nginx is a reverse proxy server, not a forward proxy server.
If you still want to do this with nginx, there are more steps to take. I found instructions (in Russian) for this:
https://habr.com/ru/post/680992/
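If caching one known upstream is all that's needed, a reverse-proxy setup like the one in the question can still work. A likely contributor to the redirect (this is my assumption, not something verified against that site) is that the upstream receives the wrong Host header and SNI and answers with a redirect to its canonical host; pinning both is worth trying:

```nginx
location / {
    proxy_pass https://www.stackoverflow.com/;
    # Present the upstream's own hostname instead of "localhost",
    # so the upstream does not redirect to its canonical host.
    proxy_set_header Host www.stackoverflow.com;
    # Send SNI that matches the upstream certificate.
    proxy_ssl_server_name on;
    proxy_cache STATIC;
    proxy_cache_valid 200 1d;
}
```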
I have a cache set up for accessing images
proxy_cache_path /cache/images-cache/ levels=1:2 keys_zone=media:1m inactive=365d max_size=500m;
and I also have nginx set up:
server {
    server_name localhost;
    listen 80;
    location ~ "^/(?<id>.+)/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
        proxy_pass http://localhost:8888;
        proxy_cache media;
        proxy_cache_valid 200 365d;
        proxy_cache_key $width-$height-$image;
    }
}
How can I set up logging so that it shows which images are fetched from the cache?
You can add a response header
add_header X-Cache-Status $upstream_cache_status always;
This lets you check whether a given URL was a cache hit or not.
https://nginx.org/en/docs/http/ngx_http_upstream_module.html
You can also use this variable $upstream_cache_status in your logs if you want to generate metrics or persist them in the logs.
Follow the example here
https://rtcamp.com/tutorials/nginx/upstream-cache-status-in-access-log/
and add/remove other variables at your convenience.
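The rtcamp approach boils down to defining a custom log_format that includes $upstream_cache_status; a minimal sketch (the format name and log path here are illustrative, not from the tutorial):

```nginx
http {
    # $upstream_cache_status logs HIT, MISS, EXPIRED, BYPASS, etc.,
    # or "-" for requests that never touched the cache.
    log_format rt_cache '$remote_addr $upstream_cache_status [$time_local] '
                        '"$request" $status $body_bytes_sent';

    server {
        access_log /var/log/nginx/images-access.log rt_cache;
    }
}
```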
I have deployed a shiny app, running with shiny server on AWS. An nginx server reroutes requests to the shiny server's port 3838.
When checking Google Page Speed Insights (https://developers.google.com/speed/pagespeed/insights/), I see that some images (.webp format) on my page are not cached, and that this slows down the loading of my page.
I tried to set up caching in nginx, as described here, by adding the following lines to my nginx server config:
location ~* \.(js|webp|png|jpg|jpeg|gif)$ {
    expires 365d;
    add_header Cache-Control "public, no-transform";
}
However, this had the consequence that my images could no longer be found when accessing the website.
Is it correct that I have to enable caching in nginx, and not somewhere in shiny server?
If so, what is wrong with the solution above?
Here is the conf file of nginx without any additions:
server {
    listen 80;
    listen [::]:80;
    # redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # Reverse proxy
    location / {
        proxy_pass http://localhost:3838/climate-justice/;
        proxy_redirect http://localhost:3838/climate-justice/ $scheme://$host/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 20d;
        proxy_buffering off;
    }

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
}
Solved it with the help of the following instructions: https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-ubuntu-16-04
# Expires map
map $sent_http_content_type $expires {
    default                off;
    text/html              epoch;
    text/css               max;
    application/javascript max;
    ~image/                max;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    expires $expires;
    . . .
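As for why the first attempt broke the images: the regex location takes precedence over the / location but contains no proxy_pass, so nginx looks for the files on its local filesystem instead of forwarding the request to the shiny server (my reading of the config above, not something I have reproduced). A variant that keeps proxying would look roughly like this; note that a regex location cannot carry a URI part in proxy_pass, so the /climate-justice/ path rewriting from the / block is not replicated here:

```nginx
location ~* \.(js|webp|png|jpg|jpeg|gif)$ {
    # Keep forwarding to the shiny server; only add caching headers.
    proxy_pass http://localhost:3838;
    expires 365d;
    add_header Cache-Control "public, no-transform";
}
```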
Case:
I have a REST API available via HTTPS, and I want to configure a basic caching proxy service on my host to cache API responses and get the same information faster on repeated requests.
I have the following configuration of Nginx:
proxy_cache_path /tmp/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

http {
    server {
        location /my_api/ {
            proxy_redirect off;
            proxy_buffering on;
            proxy_ignore_headers X-Accel-Expires;
            proxy_ignore_headers Expires;
            proxy_ignore_headers Cache-Control;
            proxy_cache my_cache;
            proxy_pass https://example.com/myapi/;
        }
    }
}
Now I'm comparing the response time of a direct call to the remote REST API with a call through my local caching proxy, and the times are the same, which means caching isn't working.
Also, the cache directory stays empty.
An example of a request to the real API (this is not the real case):
curl "https://example.com/myapi/?key=1"
Example of request to proxy:
curl "http://127.0.0.1:8080/myapi/?key=1"
In the REST API response headers I can see
cache-control: max-age=0, no-cache, no-store, must-revalidate
Can Nginx ignore it somehow?
What should I change in the proxy configuration to see a speed boost for the REST API calls?
I wonder if the issue could be related to HTTPS traffic? Or maybe the response from the REST API has some no-caching headers, or the response is too small to be cached?
I finally found a way to configure caching for my REST API; here is the final configuration:
http {
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=my_cache:10m;
    server {
        listen 8080;
        server_name localhost;
        location /myapi {
            proxy_buffering on;
            proxy_ignore_headers Expires Cache-Control X-Accel-Expires;
            proxy_ignore_headers Set-Cookie;
            proxy_cache my_cache;
            proxy_cache_valid 24h;
            proxy_pass https://example.com/myapi;
        }
    }
}
In addition, if you are caching a REST API (e.g. GET and POST requests), I also suggest adding
proxy_cache_methods GET POST;
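To verify the cache actually kicks in, it can help to expose the cache status as a response header (a suggested addition on top of the configuration above, not part of it): the first request should then report MISS and an identical second request HIT.

```nginx
location /myapi {
    proxy_cache my_cache;
    proxy_cache_valid 24h;
    proxy_cache_methods GET POST;
    # MISS on the first request, HIT on an identical repeat.
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass https://example.com/myapi;
}
```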
I have a problem with Nginx acting as a proxy server:
request -> NGINX PROXY -> app server (only one)
The proxy server listens on port 443 and the application server on port 80. Headers returned by the upstream server are being removed by the proxy. I was forced to use:
add_header 'Content-Length' $upstream_http_content_length;
It works fine for Content-Length; however, it doesn't work for the Last-Modified header. A curl request from the Nginx proxy to the upstream via its private IP returns all the headers. Why does the Nginx proxy strip this header even when its return is explicitly specified using add_header?
I have the following nginx.conf sample:
location /some-web-app {
    proxy_pass http://backend/some-web-app;
    proxy_redirect off;
    proxy_redirect http $scheme;
    proxy_set_header Host $host;
    add_header 'Last-Modified' $upstream_http_last_modified;
    add_header 'Content-Length' $upstream_http_content_length;
    sub_filter 'codebase="http' 'codebase="https';
    sub_filter_types application/x-java-jnlp-file;
    access_log /var/log/nginx/some-web-app_access.log combined_jsession_upstream;
    error_log /var/log/nginx/some-web-app_err.log;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
}
To forward the returned Last-Modified header, put the following directive in the location block (the sub_filter module removes Last-Modified by default, since rewriting the response body can invalidate it):
sub_filter_last_modified on;
To forward any other header, use add_header with the corresponding $upstream_http_${header} variable. Here I'll forward the Content-Length header:
add_header 'Content-Length' $upstream_http_content_length;
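Put together, a sketch of the relevant part of the location block from the question (trimmed to the directives that matter here):

```nginx
location /some-web-app {
    proxy_pass http://backend/some-web-app;
    sub_filter 'codebase="http' 'codebase="https';
    sub_filter_types application/x-java-jnlp-file;
    # sub_filter drops Last-Modified by default because rewriting
    # the body can invalidate it; this directive preserves it.
    sub_filter_last_modified on;
    # Any other upstream header can be re-emitted explicitly.
    add_header 'Content-Length' $upstream_http_content_length;
}
```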
I want to use nginx as a caching proxy in front of an OCSP responder. 'An OCSP request using the POST method is constructed as follows: The Content-Type header has the value "application/ocsp-request" while the body of the message is the binary value of the DER encoding of the OCSPRequest.' (from RFC2560)
Hence, I configured nginx as follows:
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
server {
    # Make site accessible from http://localhost/
    server_name localhost;
    location / {
        proxy_pass http://213.154.225.237:80; # ocsp.cacert.org
        proxy_cache my-cache;
        proxy_cache_methods POST;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_key "$uri$request_body";
        expires off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I can access the OCSP responder through nginx and responses are received as expected - no issue.
The problem is that nginx doesn't cache the responses. Nonces are not being sent as part of the request, and using Wireshark I verified that all my requests are identical (at the HTTP layer). How do I configure nginx so that it caches the responses?
Note, I use the following command for testing:
openssl ocsp -issuer cacert.crt -no_nonce -CAfile CAbundle.crt -url http://localhost/ -serial <SERIAL>
There is a lot more to caching OCSP responses than just caching the DER they are made of. Look into the lightweight OCSP profile and make sure that your responder includes the necessary headers in the response.
I would recommend using a specially built OCSP proxy cache; there are many out there. For example, Axway's Validation Authority Repeater is a good choice.
In the meantime I got an answer on the mailing list which solved my problem:
Your configuration doesn't contain proxy_cache_valid (see
http://nginx.org/r/proxy_cache_valid), and at the same time, via
proxy_ignore_headers, it ignores all headers which may be used to
set response validity based on response headers. That is, no
responses will be cached with the configuration above.
You probably want to add something like
proxy_cache_valid 200 1d;
to your configuration.
My complete configuration example (works with openca-ocsp):
nginx.conf:
proxy_cache_path /var/cache/nginx/ocsp levels=1:2 min_free=1024M keys_zone=ocsp:10m;
conf.d/ocsp.conf
server {
    listen 80;
    proxy_cache ocsp;
    proxy_cache_valid 200 404 2m;
    proxy_cache_min_uses 1;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    add_header X-GG-Cache-Status $upstream_cache_status;
    location = /ocsp {
        # Allow only POST
        limit_except POST {
            deny all;
        }
        proxy_pass http://ocspd:2560/;
    }
}