I'm working on a custom JSON API (no Twig involved). During development I need to make constant changes to the codebase, yet every response gets cached for a few minutes or until I clear the Symfony cache.
I'm using a local nginx server, which should be properly configured since these are the headers I get:
Server: nginx/1.16.1
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
X-Powered-By: PHP/7.4.1
Cache-Control: max-age=0, private
Date: Fri, 24 Jul 2020 07:29:28 GMT
X-Debug-Token: f38aeb
X-Debug-Token-Link: http://localhost:8080/_profiler/f38aeb
X-Robots-Tag: noindex
Last-Modified: Friday, 24-Jul-2020 07:29:28 UTC
Cache-Control: private no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0
and responses are properly updated once I run bin/console c:c
I need to do this every time I change any class (controllers, services, models, whatever).
There must be something obvious I'm missing. Is there a way to disable class caching in my dev environment without having to clear the cache for every little change?
Edited: adding relevant configuration.
This is my nginx .conf file:
server {
    listen 80;
    server_name ~.*;

    location / {
        root /app;
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        client_max_body_size 50m;

        fastcgi_pass php:9000;
        fastcgi_read_timeout 1800;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /app/public/index.php;

        # Disable cache
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'private no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        etag off;
    }

    error_log /dev/stderr debug;
    access_log /dev/stdout;
}
I finally found the culprit, so I'll leave this here in case someone faces the same issue. I had OPcache enabled in my dev environment, so the problem had nothing to do with Symfony or the nginx location block. I just disabled OPcache and the issue was fixed:
opcache.enable=0
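For what it's worth, a variant I haven't battle-tested (the exact values are assumptions): OPcache can also stay enabled in dev if it is told to re-check file timestamps on every request, which keeps the speed benefit while still picking up code changes. In php.ini (or the FPM pool's ini), something like:

opcache.enable=1
; re-check source file timestamps so edited classes are recompiled
opcache.validate_timestamps=1
; 0 = check on every request (dev only; too aggressive for production)
opcache.revalidate_freq=0

Remember to restart or reload PHP-FPM after changing these, since the settings are read at startup.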
Related
Nginx has been set up as a reverse proxy, but when a request is made, every other request gives a 404 error. Checking the log of the application running on port 9000 shows that the request doesn't reach the application.
The configuration for the reverse proxy is:
server {
    listen 8088;
    listen [::]:8088;
    server_name www.example.com;

    access_log /var/log/nginx/www.example.com.access.log;
    error_log /var/log/nginx/www.example.com.com.error.log;

    location / {
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        if_modified_since off;
        expires off;
        etag off;

        proxy_pass http://localhost:9000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The part
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
has been added to try to tackle the problem.
The access log shows the 200 and the 404 alternating:
x.x.x.x - - [02/Feb/2023:11:49:28 +0100] "GET /ping HTTP/1.1" 200 92 "-" "curl/7.68.0"
x.x.x.x - - [02/Feb/2023:11:50:41 +0100] "GET /ping HTTP/1.1" 404 19 "-" "curl/7.68.0"
This looks like a load balancer issue, but no load balancer is installed. Doing multiple calls on localhost with curl -i http://localhost:9000/ping doesn't show any problems.
Doing the same calls on localhost with the domain name, first call:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 02 Feb 2023 10:56:55 GMT
Content-Type: application/json; charset=UTF-8
Content-Length: 92
Connection: keep-alive
Access-Control-Allow-Origin: *
Vary: Origin
X-Frame-Options: DENY
Last-Modified: Thursday, 02-Feb-2023 10:56:55 GMT
Cache-Control: no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0
and then the second call (and every other call):
HTTP/1.1 404 Not Found
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 02 Feb 2023 10:56:56 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 19
Connection: keep-alive
Cache-Control: max-age=31536000
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
404 page not found
There is a second Nginx service in a Docker container listening on port 80, but that should not be an issue; the 404 is definitely coming from the non-Docker Nginx running as a service on the machine.
I can't find the issue at this moment. Any advice on where to look or what could cause the problem?
Add the $request_uri as follows:
proxy_pass http://localhost:9000$request_uri;
For proxying requests to FastCGI servers (such as PHP-FPM), use the fastcgi module and the fastcgi_pass directive instead of proxy_pass.
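For completeness, a minimal sketch of the corrected location block; only the proxy_pass line changes, the anti-cache headers from the question stay as they were:

location / {
    # ... anti-cache headers as in the original config ...

    # pass the original request URI through to the backend;
    # 127.0.0.1 is used instead of localhost because a variable in
    # proxy_pass makes nginx resolve a hostname at run time, which
    # would require a resolver directive
    proxy_pass http://127.0.0.1:9000$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}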
I am deploying a Flask application behind an NGINX server, but I am not able to disable caching in NGINX. Because of this, changes in the code are not getting reflected. I have made the following changes in my configuration:
server {
    listen 80;
    server_name 192.168.149.197;

    location / {
        add_header Cache-Control "max-age=0, no-cache, no-store, must-revalidate";
        add_header Pragma "no-cache";

        include uwsgi_params;
        uwsgi_pass unix:/home/admin/test_flask/main.sock;
    }
}
Please help me resolve this.
I'm using nginx to serve a documentation website that changes frequently. For this reason I decided to disable caching with the following:
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
proxy_no_cache 1;
proxy_cache_bypass 1;
However, with this in place, every page visit re-downloads a big JS file (7 MB) and all PNG/SVG images. I would like to disable caching for everything except the PNG/SVG files and one JS file that resides in the root path of the project. Is this possible with nginx?
Since you don't use the proxy_pass directive, tuning the proxy_no_cache and proxy_cache_bypass parameters makes no sense; you can safely remove that part from your config. Otherwise, the following should be enough to cache only the selected files while leaving everything else uncached.
This should be placed in the http context:
map $uri $cacheable {
    ~\.(?:pn|sv)g$    1;
    /script.js        1;
}

map $cacheable $cache_control {
    1          "public, max-age=31536000";
    default    "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
}

map $cacheable $expire {
    1          1y;
    default    off;
}
This should be placed in the server context instead of your current configuration snippet:
add_header Cache-Control $cache_control;
expires $expire;
And take a look at this answer so you aren't surprised by the add_header directive's inheritance behavior.
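The short version of that gotcha: add_header directives are inherited from the enclosing level only if the current level defines no add_header of its own. A contrived illustration (the /downloads location and X-Example header are made up):

server {
    add_header Cache-Control $cache_control;   # applies to the locations below...
    expires $expire;

    location /downloads {
        # ...but not here: any add_header on this level discards everything
        # inherited from the server level, so Cache-Control must be repeated
        add_header X-Example "1";
        add_header Cache-Control $cache_control;
    }
}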
I'm trying to set up caching for an upstream server (which I do not manage). Most files can be cached (and don't have Cache-Control set); those work fine.
However, some locations on the server are directory listings (and have Cache-Control: no-store). I'd like to cache those only if the server is not reachable.
Unfortunately, I either end up in one of the following:
In a situation where those listings are not cached (no file in the cache, the header always shows a cache miss). If the server is not reachable, the directory listings are (obviously) not returned.
In a situation where those listings are cached, but they never update afterwards (at least not as long as the cache is valid). Since I'd like to cache all of the other entries for a long time, the directory listings become outdated quickly.
I tried to modify the headers to stale-if-error, but that didn't seem to help either.
map $http_cache_control $http_updated_cache_control {
    no-store    stale-if-error;
}
server {
    ...

    location /somewhere {
        sendfile on;
        sendfile_max_chunk 10m;
        tcp_nopush on;

        proxy_cache keyzone;
        # allow using stale requests in case of errors or when updating a file
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_revalidate on;
        proxy_cache_background_update on;
        # add header to indicate if caching works
        add_header X-Cache-Status $upstream_cache_status;
        proxy_cache_lock on;

        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_ignore_headers Set-Cookie;

        # allow caching of non-cacheable entries only when the server is erroring
        proxy_hide_header Cache-Control;
        add_header Cache-Control $http_updated_cache_control;

        # don't ignore the cache control header: some items (like directory listings) are marked as "don't cache"
        #proxy_ignore_headers Cache-Control;
    }
}
How can I cache entries with Cache-Control: no-store, but only use the cached entries if the upstream server is down?
I see two possibilities:
NGINX respects headers from the upstream server. So if the upstream sends an Expires header despite the Cache-Control: no-store, then after you modify the headers they become Expires: ... Cache-Control: stale-if-error, and NGINX keeps the cached entry "at least as long as the cache is valid".
proxy_cache_valid probably has the same effect.
So you need to do one or more of the following (see the sketch after the list):
either set some small value for proxy_cache_valid for location /somewhere,
or/and remove Expires if it's present,
or/and add max-age=0 to Cache-Control.
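As a rough sketch of the first two points applied to the location from the question (the 1m validity is an arbitrary assumption, proxy_ignore_headers takes over the role of the commented-out line from the original config, and the upstream name is hypothetical; the remaining directives from the original block stay as they were):

location /somewhere {
    proxy_cache keyzone;
    # short validity so cached listings are refreshed quickly while the upstream is healthy
    proxy_cache_valid 200 1m;
    # store the listings despite no-store, and don't let Expires extend their lifetime
    proxy_ignore_headers Cache-Control Expires;
    # serve the stored (possibly stale) copy when the upstream is erroring
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_pass http://upstream_server;   # hypothetical upstream name
}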
I'm trying to implement universal links from my iOS app, thus I need to have the "apple-app-site-association" file at the root of my web server.
I am using nginx and everything is (supposedly) set up correctly. My website works, PHP works, my API works. The only problem is that when I go to "test.com/apple-app-site-association", the browser (as well as the iOS browser) downloads the file instead of just displaying it, which makes my universal links not work.
If anyone has any ideas on how to stop nginx from offering the file as a download and serve it instead, I would be glad.
Below is my server's configuration, with my site edited out:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name test.com www.test.com;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    include snippets/ssl-test.com.conf;
    include snippets/ssl-params.conf;

    client_max_body_size 1m;

    root /var/www/test.com/web;
    index index.html index.htm index.php;

    access_log /var/log/nginx/test.com.access.log;
    error_log /var/log/nginx/test.com.error.log;

    # Default url rewrites
    location / {
        #root /var/www/test.com/web;
        try_files $uri $uri/ /index.html;
        autoindex off;
        index index.html index.htm index.php;
    }

    location ~ \.php$ {
        # try_files $uri =404;
        set $path_info $fastcgi_path_info;
        root /var/www/test.com/web;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        try_files $uri $uri/ /index.php$is_args$args;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param APP_ENV dev;
    }

    # Apple universal links
    location = /apple-app-site-association {
        default_type application/pkcs7-mime;
    }

    # Redirect all requests under /user/ to the "get App" page
    location /user/ {
        return 301 https://test.com/get_app.html;
        # try_files $uri $uri/ /get_app.html;
    }

    # Let's Encrypt SSL Validation
    include snippets/letsencrypt.conf;
}
Using a REST API client, requesting the file gives the following headers, which appear to be correct (no "attachment" header):
Server: nginx/1.10.3
Date: Sun, 02 Apr 2017 17:25:42 GMT
Content-Type: application/pkcs7-mime
Content-Length: 149
Last-Modified: Sun, 02 Apr 2017 16:07:02 GMT
Connection: keep-alive
Etag: "58e121a6-95"
Strict-Transport-Security: max-age=63072000; includeSubdomains
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Accept-Ranges: bytes
Your browser, as well as the iOS browser, may not support the Content-Type: application/pkcs7-mime. If the content is human-readable, try setting Content-Type: text/plain.
Also see if this link is of any help.
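If that turns out to be the issue, a minimal variant of the existing location block (text/plain is just a guess at a type the browsers will render inline; application/json is another common choice for this file):

location = /apple-app-site-association {
    # serve the file with a type browsers display inline instead of downloading
    default_type text/plain;
}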