The typical Odoo installation guide suggests configuring Nginx to cache paths like this:
/web/static
But it skips paths that look like this:
/web/image
/website/static
/web_sale/static
I would guess those should also be cached. But what I'm really wondering about is this path:
/web/content
This is the path to the JS and CSS bundles from assets_common, assets_frontend, etc.
Can caching them lead to problems? I don't really see why it would.
This is my Nginx config:
    location ~* /[0-9a-zA-Z_]*/(static|image|content)/ {
        proxy_cache_valid 200 90m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://odoo;
    }
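Note that `proxy_cache_valid` on its own does not enable caching: a `proxy_cache` zone must also be defined, or responses pass through uncached. A minimal sketch of a complete setup (the zone name `odoo_cache`, the cache path, and the sizes are assumptions, not Odoo recommendations):

```nginx
# In the http {} block: declare a cache zone (name/path/sizes are examples)
proxy_cache_path /var/cache/nginx/odoo levels=1:2 keys_zone=odoo_cache:16m
                 max_size=1g inactive=90m use_temp_path=off;

server {
    location ~* /[0-9a-zA-Z_]*/(static|image|content)/ {
        proxy_cache odoo_cache;     # required for proxy_cache_valid to take effect
        proxy_cache_valid 200 90m;
        proxy_buffering on;         # caching requires buffering to be on
        expires 864000;
        proxy_pass http://odoo;
    }
}
```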
Related
Is there a way, not a workaround, to process one location ~* <ADDRESS> {} but with a different value of proxy_request_buffering (on/off) depending on the content type? For example, for multipart/form-data requests proxy_request_buffering has to be off, and for all other requests on. These kinds of directives cannot be set dynamically by a variable or within an if condition. Hence, it should be something like one location acting as an entry point that forwards requests to other sub-locations, I suppose. But how can it be done? Please help.
It is an application-specific thing, and one request type can be used for many purposes; that's why I cannot split them. Information about the type is stored within the POST body, or the Content-Type header is also a good way to distinguish them.
The code example:
    location ~* /application/service$ {
        client_max_body_size '5000m';
        client_body_buffer_size '1m';
        proxy_request_buffering on;
        proxy_buffering on;
        rewrite_by_lua_file /etc/nginx/lua/service.lua;
        include /etc/nginx/conf.d/common/reverse.conf;
        proxy_pass $proxy_address;
    }
The goal is to be able to set the directives client_max_body_size and proxy_request_buffering based on the Content-Type.
    Client -bigfile----*          *--> sub-location (buffering is off)
                        \        /
                         location
                        /        \
    Client -regular----*          *--> sub-location (buffering is on)
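One way to approximate this fan-out (a sketch, not a tested drop-in: the internal location names, the `backend` upstream, and the body-size limits are assumptions) is to pick a route from the Content-Type with a `map` and hand the request to an internal sub-location via `rewrite ... last`, since each location can then carry its own buffering directives:

```nginx
# Choose an internal route based on the request Content-Type
map $content_type $service_route {
    ~*multipart/form-data  /_service_unbuffered;
    default                /_service_buffered;
}

server {
    location ~* /application/service$ {
        rewrite ^ $service_route last;    # hand off to an internal location
    }

    location = /_service_unbuffered {
        internal;
        client_max_body_size 5000m;
        proxy_request_buffering off;      # stream large multipart bodies
        proxy_pass http://backend/application/service;
    }

    location = /_service_buffered {
        internal;
        client_max_body_size 10m;         # illustrative limit for regular requests
        proxy_request_buffering on;
        proxy_pass http://backend/application/service;
    }
}
```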
I've created an intranet HTTP site where users can upload their files. I have created a location like this one:
    location /upload/ {
        limit_except POST { deny all; }
        client_body_temp_path /home/nginx/tmp;
        client_body_in_file_only on;
        client_body_buffer_size 1M;
        client_max_body_size 10G;
        proxy_set_header X-upload /upload/;
        proxy_set_header X-File-Name $request_body_file;
        proxy_set_body $request_body_file;
        proxy_redirect off;
        proxy_pass_request_headers on;
        proxy_pass http://localhost:8080/;
    }
Quite easy, as suggested in the official docs. When the upload is complete, the proxy_pass directive calls the custom URI, which performs filesystem operations on the newly created temp file.
    curl --request POST --data-binary "@myfile.img" http://myhost/upload/
Here's my problem: I need some kind of custom hook/operation telling me when the upload begins, something nginx can call before starting the HTTP stream. Is there a way to achieve that? I mean, before uploading big files I need to call a custom URL (something like proxy_pass) to inform the server about the upload and execute certain operations.
Is there a way to achieve it? I have tried the echo-nginx module, but it didn't succeed with these HTTP POSTs (binary form-urlencoded). I don't want to use external scripts to deal with the upload; I'd rather keep these kinds of operations inside nginx (more performant).
Thanks in advance.
Ben
Self replying.
I have found a directive that solves my own request:
    auth_request <something>
So I can do something like:
    location /upload/ {
        ...
        # Pre auth
        auth_request /somethingElse/;
        ...
    }

    # Newly added section
    location /somethingElse/ {
        ...
        proxy_pass ...;
    }
This seems to be working fine, and is useful for uploads as well as for general auth or basic prechecks.
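For reference, a fuller sketch of that pattern (the /pre-upload-hook URI and the backend hook path are assumptions; auth_request issues the subrequest before the request is handed to the upstream, and a 2xx reply lets the upload proceed while 401/403 rejects it):

```nginx
location /upload/ {
    limit_except POST { deny all; }
    client_max_body_size 10G;

    # Subrequest fired before the upload reaches the backend
    auth_request /pre-upload-hook;

    proxy_pass http://localhost:8080/;
}

location = /pre-upload-hook {
    internal;
    # auth_request subrequests cannot carry a request body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_pass http://localhost:8080/hooks/upload-start;
}
```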
I'm trying to store packed (gzipped) HTML in Memcached and serve it from nginx:
1. load the HTML from Memcached via the memcached module
2. unpack it with the nginx gunzip module if it is packed
3. process SSI insertions with the ssi module
4. return the result to the user
Mostly, the configuration works, except for the SSI step:
    location / {
        ssi on;
        set $memcached_key "$uri?$args";
        memcached_pass memcached.up;
        memcached_gzip_flag 2;  # net.spy.memcached uses the second byte for its compression flag
        default_type text/html;
        charset utf-8;
        gunzip on;
        proxy_set_header Accept-Encoding "gzip";
        error_page 404 405 400 500 502 503 504 = @fallback;
    }
It looks like nginx does the SSI processing before unpacking by the gunzip module.
In the resulting HTML I see unresolved SSI instructions:
<!--# include virtual="/remote/body?argument=value" -->
There are no errors in the nginx log.
I have tried ssi_types * with no effect.
Any idea how to fix it?
nginx 1.10.3 (Ubuntu)
UPDATE
I have tried with one more upstream; same result =(
In the log I can see the SSI filter being applied after the upstream request, but without any detected includes.
    upstream memcached {
        server localhost:11211;
        keepalive 100;
    }

    upstream unmemcached {
        server localhost:21211;
        keepalive 100;
    }

    server {
        server_name dev.me;
        ssi_silent_errors off;
        error_log /var/log/nginx/error1.log debug; log_subrequest on;

        location / {
            ssi on;
            ssi_types *;
            proxy_pass http://unmemcached;
            proxy_max_temp_file_size 0;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }

        location @fallback {
            ssi on;
            proxy_pass http://proxy.site;
            proxy_max_temp_file_size 0;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            error_page 400 500 502 503 504 /offline.html;
        }
    }

    server {
        access_log on;
        listen 21211;
        server_name unmemcached;
        error_log /var/log/nginx/error2.log debug; log_subrequest on;

        location / {
            set $memcached_key "$uri?$args";
            memcached_pass memcached;
            memcached_gzip_flag 2;
            default_type text/html;
            charset utf-8;
            gunzip on;
            proxy_set_header Accept-Encoding "gzip";
            error_page 404 405 400 500 502 503 504 = @fallback;
        }

        location @fallback {
            #ssi on;
            proxy_pass http://proxy.site;
            proxy_max_temp_file_size 0;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            error_page 400 500 502 503 504 /offline.html;
        }
    }
I want to avoid a solution with dynamic nginx modules if possible.
There are basically two issues to consider: whether the order of the filter modules is appropriate, and whether gunzip works in your situation.
0. The order of gunzip/ssi/gzip.
A simple search for "nginx order of filter modules" reveals that the order is determined at compile time based on the content of the auto/modules shell script:
http://www.evanmiller.org/nginx-modules-guide.html
Multiple filters can hook into each location, so that (for example) a response can be compressed and then chunked. The order of their execution is determined at compile-time. Filters have the classic "CHAIN OF RESPONSIBILITY" design pattern: one filter is called, does its work, and then calls the next filter, until the final filter is called, and Nginx finishes up the response.
https://allthingstechnical-rv.blogspot.de/2014/07/order-of-execution-of-nginx-filter.html
The order of filters is derived from the order of execution of nginx modules. The order of execution of nginx modules is implemented within the file auto/modules in the nginx source code.
A quick glance at auto/modules reveals that ssi is between gzip and gunzip; however, it's not immediately clear in which direction the modules get executed (top to bottom or bottom to top), so the default might either be reasonable, or you may need to switch the two (which wouldn't necessarily be supported, IMHO).
One hint here is the location of the http_not_modified filter, which is given as an example of If-Modified-Since handling in Emiller's guide above; I would imagine that it has to go last, after all the other ones, and, if so, then indeed the order of gunzip/ssi/gzip is exactly the opposite of what you need.
1. Does gunzip work?
As per http://nginx.org/r/gunzip, the following text is present in the documentation for the filter:
Enables or disables decompression of gzipped responses for clients that lack gzip support.
It is not entirely clear whether the above statement should be construed as the description of the module's purpose (e.g., clients lacking gzip support are why you might want to use this module), or as a description of its behaviour (e.g., the module determines by itself whether gzip is supported by the client). The source code at src/http/modules/ngx_http_gunzip_filter_module.c appears to imply that it simply checks whether the Content-Encoding of the reply as-is is gzip, and proceeds if so. However, the next sentence in the docs (after the one quoted above) does appear to indicate some further interaction with the gzip module, so perhaps something else is involved as well.
My guess here is that if you're testing with a browser, then the browser DOES support gzip, hence it would be reasonable for gunzip to not engage, and hence the SSI module would never have anything valid to process. This is why I suggest you determine whether gunzip works properly and/or differently between plain-text requests made through curl and those made by the browser with an Accept-Encoding header that includes gzip.
Solution.
Depending on the outcome of the investigation above, I would first confirm the order of the modules and, if it is incorrect, choose between recompiling and double-proxying.
Subsequently, if the problem is still not fixed, I would ensure that the gunzip filter unconditionally decompresses the data coming from memcached; I imagine you may have to ignore or reset the Accept-Encoding header or some such.
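A double-proxy setup along those lines might look roughly like this (a sketch only; the port numbers are assumptions). The inner tier fetches from memcached and gunzips; because the outer tier sends an empty Accept-Encoding, the inner gunzip filter should engage, so the outer tier's SSI filter sees plain text:

```nginx
server {
    listen 8080;   # inner tier: memcached + gunzip only, no SSI
    location / {
        set $memcached_key "$uri?$args";
        memcached_pass localhost:11211;
        memcached_gzip_flag 2;
        default_type text/html;
        gunzip on;
    }
}

server {
    listen 80;     # outer tier: SSI on the already-decompressed response
    location / {
        ssi on;
        ssi_types *;
        # Tell the inner tier we do not accept gzip, so gunzip engages there
        proxy_set_header Accept-Encoding "";
        proxy_pass http://127.0.0.1:8080;
    }
}
```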
We have a couple of backends sitting behind our nginx front ends.
Is it possible to intercept 301 / 302 redirects sent by these backends and have nginx handle them?
We were thinking of something along the lines of:
    error_page 302 = @target;
But I doubt 301/302 redirects can be handled the same way as 404s etc... I mean, error_page probably doesn't apply to non-error codes like 200?
So to summarize:
Our backends send back 301/302s once in a while. We would like nginx to intercept these and rewrite them to another location block, where we could do any number of other things with them.
Possible?
Thanks!
You could use the proxy_redirect directive:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
Nginx will still return the 301/302 to the client, but proxy_redirect will modify the Location header, and the client should then make a new request to the URL given in the Location header.
Something like this should make the subsequent request back to nginx:
    proxy_redirect http://upstream:port/ http://$http_host/;
I succeeded in solving a more generic case, where the redirect location can be any external URL.
    server {
        ...
        location / {
            proxy_pass http://backend;
            # You may need to uncomment the following line if your redirects are relative, e.g. /foo/bar
            #proxy_redirect / /;
            proxy_intercept_errors on;
            error_page 301 302 307 = @handle_redirects;
        }

        location @handle_redirects {
            set $saved_redirect_location '$upstream_http_location';
            proxy_pass $saved_redirect_location;
        }
    }
An alternative approach, closer to what you describe, is covered in this ServerFault answer: https://serverfault.com/questions/641070/nginx-302-redirect-resolve-internally
If you need to follow multiple redirects, modify Vlad's solution as follows:
1) Add
    recursive_error_pages on;
to location /.
2) Add
    proxy_intercept_errors on;
    error_page 301 302 307 = @handle_redirects;
to the location @handle_redirects section.
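Put together, the multi-redirect version would look roughly like this (a sketch assembled from the steps above; the `backend` upstream name is an assumption):

```nginx
location / {
    proxy_pass http://backend;
    proxy_intercept_errors on;
    recursive_error_pages on;   # allow error_page to fire again in the named location
    error_page 301 302 307 = @handle_redirects;
}

location @handle_redirects {
    set $saved_redirect_location '$upstream_http_location';
    proxy_intercept_errors on;
    error_page 301 302 307 = @handle_redirects;  # follow further redirects
    proxy_pass $saved_redirect_location;
}
```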
More on proxy_redirect, for relative locations
Case
    location /api/ {
        proxy_pass http://${API_HOST}:${API_PORT}/;
    }
- the backend redirects to a relative location, which misses the /api/ prefix
- the browser follows the redirect and hits a wall of incomprehension
Solution
    location /api/ {
        proxy_pass http://${API_HOST}:${API_PORT}/;
        proxy_redirect ~^/(.*) http://$http_host/api/$1;
    }
I'm running into a strange issue using nginx as a reverse proxy for some apps hosted on GitHub.
The page and the referenced JavaScript load fine, but the images, e.g. images/icon.png, are not loading. I'm getting around this for now by using sub_filter to rewrite the relative links to point to the original file address. This is more of a hack than an actual fix.
Strangely, the JavaScript library is also referenced with a relative link, e.g. scripts/app.js, and it loads correctly. I was thinking maybe it's a problem with MIME types, but I can't seem to make the images work without the URL rewrite.
Here's the location code snippet:
    location ~* /app/data {
        rewrite ^/app/data/(.*)$ /app-data/$1 break;
        proxy_set_header Host myhost.github.io;
        proxy_pass http://myhost.github.io;
        gzip on;
        gzip_types text/xml;
        sub_filter_types text/html;
        sub_filter_once off;
        sub_filter \"img/ \"http://myhost.github.io/app-data/img/;
    }