I have a cache set up for accessing images:
proxy_cache_path /cache/images-cache/ levels=1:2 keys_zone=media:1m inactive=365d max_size=500m;
and I also have nginx configured like this:
server {
    server_name localhost;
    listen 80;

    location ~ "^/(?<id>.+)/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
        proxy_pass http://localhost:8888;
        proxy_cache media;
        proxy_cache_valid 200 365d;
        proxy_cache_key $width-$height-$image;
    }
}
How can I set logging so it shows which images are fetched from cache?
You can add a response header:
add_header X-Cache-Status $upstream_cache_status always;
This lets you check whether a given URL was served from the cache (HIT) or not (MISS, EXPIRED, BYPASS, etc.).
https://nginx.org/en/docs/http/ngx_http_upstream_module.html
You can also use the $upstream_cache_status variable in your access log if you want to generate metrics or persist the cache status.
Follow the example here
https://rtcamp.com/tutorials/nginx/upstream-cache-status-in-access-log/
and add/remove other variables at your convenience.
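For instance, a minimal sketch along the lines of that tutorial (the format name rt_cache and the log path are my own placeholders, so adjust them to your layout):
# in the http context: define a format that records the cache status
log_format rt_cache '$remote_addr [$time_local] "$request" $status '
                    '$body_bytes_sent cache:$upstream_cache_status';

server {
    # use it for this server (or per location)
    access_log /var/log/nginx/images-cache.log rt_cache;
    ...
}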
Related
I have an EC2 instance on AWS that I have deployed a MERN stack on. I have defined nginx as follows:
server {
    #listen 80;
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name yourdomain.com;
    access_log /home/ubuntu/client/server_logs/host.access.log main;
    client_max_body_size 10M;

    location /api/ {
        add_header X-debug-message innnnnnnnnnnnnn;
        proxy_pass http://localhost:3000/;
    }

    location /admin-dashboard {
        root /home/ubuntu;
        index index.html;
        add_header X-uri "$uri";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
    }

    location / {
        root /home/ubuntu/client/deploy;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        add_header X-uri "$uri";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
    }

    location = /49x.html {
        root /usr/share/nginx/html;
    }

    server_tokens off;

    location ~ /\.ht {
        deny all;
    }
}
And I have attached the security groups as screenshots.
When I try to fetch data from this URL, http://clikjo.com/api/, using a browser or Postman, it works perfectly, but when I try it using JavaScript with fetch or Axios, it fails with this error:
[TypeError: Network request failed]
Can anybody solve my problem?
I have tried to:
change my security groups
add headers, specify mode, fetch options, ... etc
If you load a page in your browser over HTTPS, the browser will refuse to load any resources over HTTP. Changing the API URL to use HTTPS instead of HTTP typically resolves this issue, but your API apparently does not accept HTTPS connections. Because of this, you must either force HTTP on the main page or get the API to accept HTTPS connections.
Note on this: the request will still work if you go to the API URL directly instead of attempting to load it with AJAX. That is because the browser is not loading a resource from within a secured page; it is loading an insecure page on its own and accepting that. For the resource to be available through AJAX, though, the protocols must match.
You are getting a CORS error.
You need to fix it on the server side with an additional header:
add_header Access-Control-Allow-Origin *;
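In this setup that would go in the /api/ location. A minimal sketch (the header values here are permissive examples, and the preflight branch is only needed if your requests use custom headers or non-simple methods):
location /api/ {
    add_header Access-Control-Allow-Origin * always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;

    # answer CORS preflight requests directly
    if ($request_method = OPTIONS) {
        return 204;
    }

    proxy_pass http://localhost:3000/;
}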
I want to run a local nginx proxy to cache all the responses coming from a remote server.
This should be working but the only outcome I'm getting is a straight redirect from localhost:81 to www.stackoverflow.com. What am I missing?
proxy_cache_path /Temp/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

server {
    listen 81;
    listen [::]:81;
    server_name localhost;

    location / {
        proxy_pass https://www.stackoverflow.com/;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_ignore_headers Expires Cache-Control Set-Cookie Vary;
        add_header X-Proxy-Cache $upstream_cache_status;
    }
}
You are missing that nginx is a reverse proxy server, not a (forward) proxy server.
If you still want to do this with nginx, there are more steps to take. I found instructions for this, in Russian:
https://habr.com/ru/post/680992/
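As a rough idea of those extra steps, here is a sketch of the directives that commonly matter when proxying an external HTTPS site (assumptions noted in the comments; untested against stackoverflow.com specifically):
location / {
    proxy_pass https://www.stackoverflow.com/;
    # present the Host the upstream expects, and send SNI in the TLS handshake
    proxy_set_header Host www.stackoverflow.com;
    proxy_ssl_server_name on;
    # rewrite absolute Location: headers in redirects back to this proxy
    proxy_redirect https://www.stackoverflow.com/ /;

    proxy_cache STATIC;
    proxy_cache_valid 200 1d;
    proxy_ignore_headers Expires Cache-Control Set-Cookie Vary;
    add_header X-Proxy-Cache $upstream_cache_status;
}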
I could not decide the best name for the question.
Essentially what I want to achieve is to set a custom allowed body size for a specific location on the webserver.
On the other hand, I was already able to achieve the necessary result with duplicate code, so I am really looking for a way to make the code reusable and to better understand the observed behavior.
The server reverse-proxies all API requests to the backend service.
In the global nginx config /etc/nginx/nginx.conf I set the rule for the max allowed body size like so: client_max_body_size 50k;.
Then, in the individual server config /etc/nginx/conf.d/example.com, I have the following config (simplified):
server {
    listen 80;
    listen [::]:80;
    server_name api.example.com www.api.example.com;

    location ~* /file/upload {
        client_max_body_size 100M;
        # crashes without this line
        proxy_pass http://localhost:90;
        #proxy_pass http://localhost:90/file/upload; # also works
    }

    location / {
        # does not work
        #location ~* /file/upload {
        #    client_max_body_size 100M;
        #}
        proxy_pass http://localhost:90;
    }
}
I am trying to override the max body size for the file upload endpoint. Note that there is one proxy_pass for location /file/upload and another proxy_pass for location /, both pointing to the same internal service.
Question 1. If I remove the proxy_pass from the location /file/upload, then an error is returned by the server (no status code in the Chrome debugger). Why is this happening? Shouldn't the request be propagated further to location /?
Question 2. Why is it not possible to define a sublocation with the body size override inside the / location, as in the commented section in the example above? If I set it like this, then a 413 error code is returned, which hints that the client_max_body_size rule is ignored.
Question 3. Finally, is it possible to tell nginx, after the request hits the /file/upload location, to apply all the rules from the / section? I guess one solution would be to move the common configuration into a separate file and then include it in both sections. I was wondering whether there is any solution that does not require creating new files.
Here is the reusable config I am talking about:
location / {
    #.s. kill cache. use in dev
    sendfile off;
    # kill cache
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
    # don't cache it
    proxy_no_cache 1;
    # even if cached, don't try to use it
    proxy_cache_bypass 1;

    proxy_pass http://localhost:90;
    client_max_body_size 100M;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_pass_request_headers on;
}
This is not the final version. If I had to copy this piece of code into two sections, that would not be a very friendly approach. So it would be nice to hear some nginx lifehacks on how to accomplish this in the friendliest way, and to get some explanations for the observed behavior.
Answer 1
If I remove the proxy_pass from the location /file/upload, then an error is returned by the server (no status code in the Chrome debugger). Why is this happening?
Every location has a so-called content handler. If you don't specify a content handler explicitly via the proxy_pass (fastcgi_pass, uwsgi_pass, etc.) directive, nginx will try to serve the request locally.
Shouldn't the request be propagated further to location /?
Of course not. nginx picks exactly one location to process a request; it never falls through to another location just because the selected one has no content handler.
Answer 2
Why is it not possible to define a sublocation with the body size override inside the / location, as in the commented section in the example above? If I set it like this, then a 413 error code is returned, which hints that the client_max_body_size rule is ignored.
I'd rather expect you to get the same error as in the first case, since your nested location does not have a content handler explicitly specified via the proxy_pass directive. However, the following config, where the nested location gets its own proxy_pass, is worth trying:
location / {
    # all the common configuration

    location /file/upload {
        client_max_body_size 100M;
        proxy_pass http://localhost:90;
    }

    proxy_pass http://localhost:90;
}
Answer 3
Finally, is it possible to tell nginx, after the request hits the /file/upload location, to apply all the rules from the / section?
No, unless you use a separate file via the include directive in both locations. However, you can try to move all the upstream-related setup directives one level up, to the server context:
server {
    ...
    # all the common configuration

    location / {
        proxy_pass http://localhost:90;
    }

    location /file/upload {
        client_max_body_size 100M;
        proxy_pass http://localhost:90;
    }
}
Note that some directives (e.g. add_header, proxy_set_header) are inherited from the previous configuration level if and only if there are no such directives defined on the current level.
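A minimal illustration of that inheritance rule (the header names are arbitrary):
server {
    add_header X-Common "yes";

    location /a {
        # no add_header here, so X-Common is inherited and sent
        proxy_pass http://localhost:90;
    }

    location /b {
        # defining any add_header at this level discards ALL add_header
        # directives from the server level: X-Common is NOT sent here
        add_header X-Other "yes";
        proxy_pass http://localhost:90;
    }
}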
Very often, dynamic settings for different locations can be achieved using a map block in the following way:
map $uri $max_body_size {
    ~^/file/upload    100M;
    default           50k;
}

server {
    location / {
        ...
        client_max_body_size $max_body_size;
        ...
    }
}
Unfortunately, not every nginx directive accepts a variable as its argument. Usually, when the nginx documentation doesn't explicitly state that a directive can accept variables, it cannot, and client_max_body_size is exactly that kind of directive, so the above configuration won't work.
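If avoiding duplication is the main goal, the include approach mentioned in the question is the usual workaround. A sketch, assuming a snippet file you create yourself at a path of your choosing:
# /etc/nginx/snippets/proxy-common.conf (hypothetical path, created by you)
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# in the server block:
location / {
    include /etc/nginx/snippets/proxy-common.conf;
    proxy_pass http://localhost:90;
}

location /file/upload {
    include /etc/nginx/snippets/proxy-common.conf;
    client_max_body_size 100M;
    proxy_pass http://localhost:90;
}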
I'm trying to set up basic caching in my openresty nginx webserver. I have tried a million different combinations from many different tutorials, but I can't get it right. Here is my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/openresty.pid;
worker_rlimit_nofile 30000;

events {
    worker_connections 20000;
}

http {
    proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=cache:10m max_size=100m inactive=60m;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    add_header X-Cache $upstream_cache_status;

    include mime.types;
    default_type application/octet-stream;

    access_log /var/log/openresty/access.log;
    error_log /var/log/openresty/error.log;

    include ../sites/*;
    lua_package_cpath '/usr/local/lib/lua/5.1/?.so;;';
}
And here is my server configuration
server {
    # Listen on port 8080.
    listen 8080;
    listen [::]:8080;

    # The document root.
    root /var/www/cache;

    # Add index.php if you are using PHP.
    index index.php index.html index.htm;

    # The server name, which isn't relevant in this case, because we only have one.
    server_name cache.com;

    # Redirect server error pages to the static page /50x.html.
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/cache;
    }

    location /test.html {
        root /var/www/cache;
        default_type text/plain;
        try_files $uri /$uri;
        expires 1h;
        add_header Cache-Control "public";
        proxy_cache cache;
        proxy_cache_valid 200 301 302 60m;
    }
}
Caching should work, but nothing shows up in error.log or access.log, the cache directory stays empty, and the X-Cache header with $upstream_cache_status does not even appear when I fetch the headers with curl (curl -I). My nginx (openresty) build was not configured with the --without-http_proxy_module flag, so the module is there. I have no idea what I am doing wrong; please help.
You didn't define anything that can be cached: proxy_cache works together with proxy_pass.
Also, the add_header defined inside the http block is discarded by the ones defined further down. Here is the snippet from the documentation about add_header:
There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level.
If the always parameter is specified (1.7.5), the header field will be added regardless of the response code.
So you cannot see the X-Cache header as expected.
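Putting both points together, a minimal sketch of a location that would actually exercise the cache (the upstream address http://localhost:8081 is a placeholder for whatever backend you want to cache):
location /test.html {
    # proxy_cache only applies to proxied responses
    proxy_pass http://localhost:8081;
    proxy_cache cache;
    proxy_cache_valid 200 301 302 60m;

    # re-declare the header here: any add_header at this level discards
    # the add_header directives inherited from the http block
    add_header X-Cache $upstream_cache_status;
    add_header Cache-Control "public";
    expires 1h;
}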
I want to use nginx as a caching proxy in front of an OCSP responder. 'An OCSP request using the POST method is constructed as follows: The Content-Type header has the value "application/ocsp-request" while the body of the message is the binary value of the DER encoding of the OCSPRequest.' (from RFC2560)
Hence, I configured nginx as follows:
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;

server {
    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        proxy_pass http://213.154.225.237:80; #ocsp.cacert.org
        proxy_cache my-cache;
        proxy_cache_methods POST;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_key "$uri$request_body";
        expires off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I can access the OCSP responder through nginx and responses are received as expected - no issue.
The problem is that nginx doesn't cache the responses. Nonces are not being sent as part of the request; using Wireshark I verified that all my requests are identical (on the HTTP layer). How do I configure nginx so that it caches the responses?
Note, I use the following command for testing:
openssl ocsp -issuer cacert.crt -no_nonce -CAfile CAbundle.crt -url http://localhost/ -serial <SERIAL>
There is a lot more to caching OCSP responses than just caching the DER they are made of. Look into the lightweight OCSP profile (RFC 5019) and make sure that your responder includes the necessary headers in the response.
I would recommend that you use a specially built OCSP caching proxy; there are many out there. For example, Axway's Validation Authority Repeater is a good choice.
In the meantime I got the answer on the mailing list, which solved my problem:
Your configuration doesn't contain proxy_cache_valid (see http://nginx.org/r/proxy_cache_valid), and at the same time, via proxy_ignore_headers, it ignores all headers which may be used to set response validity based on response headers. That is, no responses will be cached with the configuration above.
You probably want to add something like
proxy_cache_valid 200 1d;
to your configuration.
My complete configuration example (works with openca-ocsp):
nginx.conf:
proxy_cache_path /var/cache/nginx/ocsp levels=1:2 min_free=1024M keys_zone=ocsp:10m;
conf.d/ocsp.conf:
server {
    listen 80;

    proxy_cache ocsp;
    proxy_cache_valid 200 404 2m;
    proxy_cache_min_uses 1;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    add_header X-GG-Cache-Status $upstream_cache_status;

    location = /ocsp {
        # Allow only POST
        limit_except POST {
            deny all;
        }
        proxy_pass http://ocspd:2560/;
    }
}
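One way to check that the cache is actually used (a sketch; the file names and <SERIAL> are placeholders): generate a DER-encoded request once, POST it twice, and watch X-GG-Cache-Status go from MISS to HIT.
# write a DER-encoded OCSP request to a file (no nonce, so repeats are identical)
openssl ocsp -issuer cacert.crt -serial <SERIAL> -no_nonce -reqout req.der

# POST it and show only the response headers; run this twice:
# the second response should carry "X-GG-Cache-Status: HIT"
curl -s -D - -o /dev/null \
    -H 'Content-Type: application/ocsp-request' \
    --data-binary @req.der http://localhost/ocsp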