I have a site served from S3 behind Nginx, with the following Nginx configuration.
server {
    listen 80 default_server;
    server_name localhost;

    keepalive_timeout 70;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript application/javascript text/xml application/xml application/xml+rss text/javascript;

    location / {
        proxy_pass http://my-bucket.s3-website-us-west-2.amazonaws.com;
        expires 30d;
    }
}
At present, whenever I build a new version, I simply delete the target bucket's contents and upload the new frontend files to it.
Since I am deleting the bucket contents, there is no way to go back to a previous version of the frontend, even if versioning is enabled on the bucket. So I want to upload new frontend files into a version directory (for example 15) in the S3 bucket, and then set up a redirect from http://my-bucket.s3-website-us-west-2.amazonaws.com/latest to http://my-bucket.s3-website-us-west-2.amazonaws.com/15
Does anyone know how this can be done?
There are multiple ways to do this:
The easiest may be through a symbolic link, provided that your environment allows it (note that S3 itself has no symlink concept, so this only applies when the files live on a local filesystem):
ln -fhs ./15 ./latest
(-h is the BSD spelling; with GNU coreutils the equivalent flag is -n.)
Another option is an explicit external redirect issued to the user, where the user sees the new URL. This has the benefit that multiple versions can be accessed at the same time without any synchronisation issues: if a client does a partial download, everything should still work, because it will most likely resume against the actual target (/15/...), not the /latest shortcut.
location /latest {
    rewrite ^/latest(.*) /15$1 redirect;
}
The final option is an internal redirect within nginx; some third-party applications call this URL masquerading. Whether it is advisable depends on your requirements; an obvious deficiency is with partial downloads, where resuming a big download after /latest has moved may result in a corrupted file:
location /latest {
    rewrite ^/latest(.*) /15$1 last;
}
References:
http://nginx.org/r/location
http://nginx.org/r/rewrite
One simple way to handle this situation is with variables. You can include a file that sets the current latest version; note that with this method you will need to reload your nginx configuration whenever you update the version.
Create a simple configuration file that sets the latest version:
# /path/to/latest.conf
set $latest 15;
Include that file in the server block, and add a location that proxies to the latest version.
server {
    listen 80 default_server;
    server_name localhost;

    # SET LATEST
    include /path/to/latest.conf;

    location / {
        proxy_pass http://s3host;
        expires 30d;
    }

    # Note the trailing / on both the location and the proxy_pass URI.
    # This strips the "/latest/" part of the request URI and passes the
    # rest like so: /$latest/$remaining_request_uri
    location /latest/ {
        proxy_pass http://s3host/$latest/;
        expires 30d;
    }

    ...
}
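As a sketch of what a release could then look like under this scheme (the bucket name, the aws CLI call, and the paths are assumptions, not from the answer above; the config is written to ./latest.conf here so the sketch is runnable as-is, whereas in practice it would be the file nginx includes):

```shell
#!/bin/sh
# Sketch: publish a new frontend version and point nginx's $latest at it.
set -eu

VERSION=16           # hypothetical next version number
CONF=./latest.conf   # in practice: /path/to/latest.conf

# 1) Upload the build into its own versioned prefix (requires the aws CLI;
#    commented out so the sketch can run without credentials).
# aws s3 sync ./dist "s3://my-bucket/$VERSION/"

# 2) Rewrite the included file so $latest points at the new version.
printf 'set $latest %s;\n' "$VERSION" > "$CONF"

# 3) nginx must re-read its configuration to pick up the change.
# nginx -s reload

cat "$CONF"
```

Because the version lives in one tiny included file, rolling back is just writing the old number and reloading.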
Another way to do this dynamically would be to use lua to script this behavior. That is a little more involved though, so I will not get into that for this answer.
Related
I've been following this article trying to host multiple websites on the same machine using IIS and Nginx.
Based on the provided article I produced the following nginx.conf:
http {
    server {
        listen 80;
        server_name localhost;
        keepalive_timeout 1;

        gzip_types text/css text/plain text/xml application/xml application/javascript application/x-javascript text/javascript application/json text/x-json;
        gzip_proxied no-store no-cache private expired auth;
        gzip_disable "MSIE [1-6]\.";

        # new website
        location /bacon/ {
            proxy_pass http://127.0.0.1:1500/;
            proxy_http_version 1.1;
            gzip_static on;
        }

        # old website
        location / {
            proxy_pass http://127.0.0.1:8881;
            proxy_http_version 1.1;
            gzip_static on;
        }
    }
}
My old website is working just fine.
Yet when I try to access my new website I get the following errors:
Note that my new website works just fine if directly requested through http://127.0.0.1:1500/.
What am I missing here?
URL rewriting by the proxy_pass directive works only on the HTTP request and on HTTP redirects in the response. That means that if http://127.0.0.1:1500/ replies with an HTTP 30x Location: http://127.0.0.1:1500/aaaa/, nginx will rewrite it to http://localhost/bacon/aaaa/.
But this rewriting does not touch the response body. Any links in the response HTML stay the same - <a href="/aaaa/" - so there is no /bacon/ part there.
There are two ways to fix it. First - edit your application: replace all links with a /bacon/ prefix, or use relative URLs and add <base href="/bacon/"> to the head of each file.
If editing the files is not possible, you can rewrite the body with ngx_http_sub_module. The module documentation is at http://nginx.org/en/docs/http/ngx_http_sub_module.html
With this approach you need to add a sub_filter for every HTML construction where a link is used. For example:
sub_filter_once off;
sub_filter ' href="/' ' href="/bacon/';
sub_filter ' src="/' ' src="/bacon/';
Just be careful: you should put all the sub_filter directives inside the /bacon/ location.
Fixing the backend application is much preferred, but sometimes only sub_filter can help.
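One caveat worth adding: sub_filter can only rewrite uncompressed bodies, so the upstream must not gzip its reply. A hedged sketch of a complete location combining the directives above (the Accept-Encoding reset is the standard workaround; the backend address comes from the question):

```nginx
location /bacon/ {
    proxy_pass http://127.0.0.1:1500/;
    proxy_http_version 1.1;

    # sub_filter cannot rewrite gzipped bodies, so ask the
    # backend for an uncompressed response.
    proxy_set_header Accept-Encoding "";

    sub_filter_once off;
    sub_filter ' href="/' ' href="/bacon/';
    sub_filter ' src="/'  ' src="/bacon/';
}
```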
There is also a third method, but it applies only in some rare cases. If /flutter_servivce_worker.js does not exist on the 127.0.0.1:8881 backend, you can add a custom location for this file and proxy_pass it to the bacon backend:
location = /flutter_servivce_worker.js {
    proxy_pass http://127.0.0.1:1500;
}
Of course, this method helps only in very limited cases, when you are missing just a few files and do not use any links.
I believe your first app is loading main.dart.js from the second (at the root path) because you forgot to change <base href="/"> to <base href="/bacon/"> in the index.html file.
This has nothing to do with NGINX.
For the new site, the request is routed fine and the HTML is loaded in the browser. But the application's dependent static file references still point to the base path '/'.
Depending on the chosen frontend framework, the base route should be changed to /bacon
(or)
create a folder named bacon, place the built files in it, and serve the static content directly with Nginx as a plain web server
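A hedged sketch of that second option (the root path and the client-side-routing fallback are assumptions, not from the answer above):

```nginx
server {
    listen 80;
    server_name localhost;

    # Serve the built frontend directly; /var/www is a hypothetical root
    # containing the "bacon" folder with the built files.
    location /bacon/ {
        alias /var/www/bacon/;
        # Fall back to index.html so client-side routes still resolve.
        try_files $uri $uri/ /bacon/index.html;
    }
}
```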
Have you tried?
# new website
location /new/ {
    proxy_pass http://127.0.0.1:1500;
    proxy_http_version 1.1;
    gzip_static on;
}

# old website
location /old/ {
    proxy_pass http://127.0.0.1:8881;
    proxy_http_version 1.1;
    gzip_static on;
}
I'm trying to save packed (gzipped) HTML in Memcached and use it from nginx:
load html from memcached by memcached module
unpack by nginx gunzip module if packed
process ssi insertions by ssi module
return result to user
For the most part the configuration works, except for the SSI step:
location / {
    ssi on;
    set $memcached_key "$uri?$args";
    memcached_pass memcached.up;
    memcached_gzip_flag 2; # net.spy.memcached uses the second byte for the compression flag
    default_type text/html;
    charset utf-8;
    gunzip on;
    proxy_set_header Accept-Encoding "gzip";
    error_page 404 405 400 500 502 503 504 = @fallback;
}
It looks like nginx does the SSI processing before the gunzip module unpacks the response.
In the resulting HTML I see unresolved SSI instructions:
<!--# include virtual="/remote/body?argument=value" -->
No errors in the nginx log.
I have tried ssi_types * -- no effect.
Any idea how to fix it?
nginx 1.10.3 (Ubuntu)
UPDATE
I have tried with one more upstream. Same result =(
In the log I can see the SSI filter being applied after the upstream request, but with no includes detected.
upstream memcached {
    server localhost:11211;
    keepalive 100;
}

upstream unmemcached {
    server localhost:21211;
    keepalive 100;
}

server {
    server_name dev.me;
    ssi_silent_errors off;
    error_log /var/log/nginx/error1.log debug; log_subrequest on;

    location / {
        ssi on;
        ssi_types *;
        proxy_pass http://unmemcached;
        proxy_max_temp_file_size 0;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    location @fallback {
        ssi on;
        proxy_pass http://proxy.site;
        proxy_max_temp_file_size 0;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        error_page 400 500 502 503 504 /offline.html;
    }
}

server {
    access_log on;
    listen 21211;
    server_name unmemcached;
    error_log /var/log/nginx/error2.log debug; log_subrequest on;

    location / {
        set $memcached_key "$uri?$args";
        memcached_pass memcached;
        memcached_gzip_flag 2;
        default_type text/html;
        charset utf-8;
        gunzip on;
        proxy_set_header Accept-Encoding "gzip";
        error_page 404 405 400 500 502 503 504 = @fallback;
    }

    location @fallback {
        #ssi on;
        proxy_pass http://proxy.site;
        proxy_max_temp_file_size 0;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        error_page 400 500 502 503 504 /offline.html;
    }
}
I want to avoid solution with dynamic nginx modules if possible
There are basically two issues to consider: whether the order of the filter modules is appropriate, and whether gunzip works in your situation.
0. The order of gunzip/ssi/gzip.
A simple search for "nginx order of filter modules" reveals that the order is determined at compile time, based on the content of the auto/modules shell script:
http://www.evanmiller.org/nginx-modules-guide.html
Multiple filters can hook into each location, so that (for example) a response can be compressed and then chunked. The order of their execution is determined at compile-time. Filters have the classic "CHAIN OF RESPONSIBILITY" design pattern: one filter is called, does its work, and then calls the next filter, until the final filter is called, and Nginx finishes up the response.
https://allthingstechnical-rv.blogspot.de/2014/07/order-of-execution-of-nginx-filter.html
The order of filters is derived from the order of execution of nginx modules. The order of execution of nginx modules is implemented within the file auto/modules in the nginx source code.
A quick glance at auto/modules reveals that ssi is between gzip and gunzip; however, it's not immediately clear in which direction the modules get executed (top to bottom or bottom to top), so the default might either be reasonable, or you may need to switch the two (which wouldn't necessarily be supported, IMHO).
One hint here is the position of the http_not_modified filter, which is given as an example of If-Modified-Since handling in Evan Miller's guide above; I would imagine that it has to go last, after all the other ones, and, if so, then, indeed, it seems that the order of gunzip/ssi/gzip is exactly the opposite of what you need.
1. Does gunzip work?
As per http://nginx.org/r/gunzip, the following text is present in the documentation for the filter:
Enables or disables decompression of gzipped responses for clients that lack gzip support.
It is not entirely clear whether the above statement should be construed as a description of the module's purpose (clients lacking gzip support are why you might want to use it), or as a description of its behaviour (the module determines by itself whether the client supports gzip). The source code at src/http/modules/ngx_http_gunzip_filter_module.c appears to imply that it simply checks whether the Content-Encoding of the reply as-is is gzip, and proceeds if so. However, the next sentence in the docs (after the one quoted above) does appear to indicate some further interaction with the gzip module, so perhaps something else is involved as well.
My guess here is that if you're testing with a browser, then the browser DOES support gzip; hence it would be reasonable for gunzip not to engage, and hence the SSI module never has anything valid to process. This is why I suggest you determine whether gunzip works properly and/or differently between simple plain-text requests made through curl and those made by a browser whose Accept-Encoding includes gzip.
Solution.
Depending on the outcome of the investigation as above, I would try to determine the order of the modules, and, if incorrect, there's a choice whether recompiling or double-proxying would be the solution.
Subsequently, if the problem is still not fixed, I would ensure that gunzip filter would unconditionally do the decompression of the data from memcached; I would imagine you may have to ignore or reset the Accept-Encoding headers or some such.
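A hedged sketch of the double-proxy variant (all names, ports, and the header reset are assumptions, not tested): the front server presents itself as a client without gzip support, so gunzip on the inner hop decompresses the memcached payload, and the front server then runs SSI over plain HTML.

```nginx
# Front server: runs SSI over an already-decompressed body.
server {
    listen 80;

    location / {
        ssi on;
        ssi_types *;
        # Claim no gzip support so the inner server's gunzip engages.
        proxy_set_header Accept-Encoding "";
        proxy_http_version 1.1;
        proxy_pass http://127.0.0.1:8081;
    }
}

# Inner server: fetches from memcached and gunzips for gzip-less clients.
server {
    listen 127.0.0.1:8081;

    location / {
        set $memcached_key "$uri?$args";
        memcached_pass memcached.up;
        memcached_gzip_flag 2;
        default_type text/html;
        gunzip on;
    }
}
```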
Scenario: I have two files, "style.css" and "style.css.gz", and the gzip_static and gzip modules enabled. Everything works properly: NGINX serves the compressed "style.css.gz". Both files have the same timestamp. I also have a cronjob that creates pre-compressed versions of any *.css file and runs every two hours.
gzip on;
gunzip on;
gzip_vary on;
gzip_static always;
gzip_disable "msie6";
gzip_proxied any;
gzip_comp_level 4;
gzip_buffers 32 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
text/cache-manifest
text/xml
text/css
text/plain
...........
Question: If I edit "style.css" and change a few CSS rules, is it possible to serve the edited "style.css" instead of "style.css.gz" (based on timestamps or something like that)? Or to pre-compress a new "style.css.gz" immediately after I finish editing "style.css"?
Is it possible to do this with NGINX? Or what is the best solution?
Thanks
I also have a cronjob that creates pre-compressed files of any file * .css and runs every two hours.
You can just run this script manually after you have made any edit/changes to the css file.
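If you want the script itself to be timestamp-aware, a hedged sketch along these lines would only re-compress files whose source is newer than (or missing) its .gz counterpart. The scratch directory and sample stylesheet exist only so the sketch is runnable as-is; in practice you would run the loop inside your real CSS directory:

```shell
#!/bin/sh
set -eu

# Demo setup: a scratch directory with one sample stylesheet.
mkdir -p /tmp/gzip-demo && cd /tmp/gzip-demo
printf 'body { color: #fff; }\n' > style.css

# Recreate any .gz whose source .css is newer, or whose .gz is missing.
for f in *.css; do
    gz="$f.gz"
    if [ ! -e "$gz" ] || [ "$f" -nt "$gz" ]; then
        gzip -c -9 "$f" > "$gz"   # keep the original next to the .gz
        touch -r "$f" "$gz"       # keep timestamps equal, as in the question
    fi
done
```

Running this from the cronjob (or manually after an edit) means an edited style.css is re-compressed on the next pass instead of unconditionally.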
It is possible to operate using NGINX? Or what is the best solution?
Alternatively, another solution I can think of (which is how I do it) is that nginx can serve and gzip-compress files on the fly. Just make sure to disable the use of precompressed files via ngx_http_gzip_static_module, otherwise nginx will use the precompressed files instead:
gzip_static off;
Enables (“on”) or disables (“off”) checking the existence of precompressed files.
I am building a website which consists of a CherryPy application and some static files (CSS, JavaScript, images, et al). I am using Nginx as a reverse-proxy to serve the static files itself and to serve the application. I followed the configuration for this set-up as described here: How to Deploy Python WSGI Applications Using a CherryPy Web Server Behind Nginx.
But I am getting strange behaviour from this configuration when it serves the static files. When any static file is requested, the content of the application is returned instead. For instance, I have two completely different files, wsgi.py (which returns an HTML document) and style.css. If I navigate to style.css, it exists, but its content is the output of the application.
Additionally, anywhere I try to navigate, even to files that do not exist, the content returned is that of the application. But if I navigate to localhost/static/* I get a 404 error. Anywhere else (e.g. any gibberish like localhost/asha9rghu/ay98394h/jasdhiuah) and the content of the application is returned.
The error and access logs say everything is fine when the undesired content is being returned.
Here is the set-up in its simplest form where the problem still exists:
Root Directory
cherrypy_app
|
|----server.py
|----wsgi.py
|----style.css
server.py
from wsgi import application
import cherrypy

if __name__ == '__main__':
    cherrypy.tree.graft(application, "/")
    cherrypy.server.unsubscribe()
    server = cherrypy._cpserver.Server()
    server.socket_host = "0.0.0.0"
    server.socket_port = 8080
    server.thread_pool = 30
    server.subscribe()
    cherrypy.engine.start()
    cherrypy.engine.block()
wsgi.py
def website():
    website_html = """<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="/style.css" />
</head>
<body>
<p>Cute bunny rabbits.</p>
</body>
</html>"""
    return website_html

def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [website()]
style.css
body {
    color: #FFFFFF;
    background: #000000;
}
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/x-javascript
               application/atom+xml;

    upstream app_servers {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root /home/user/cherrypy_app;
        }

        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
The problem lies solely in the Nginx configuration file. The solution is simply to create one or more of the directories I specified and put the static files in them.
Essentially, in the configuration file I told Nginx that there are a number of directories (images, javascript, et cetera) located at /home/user/cherrypy_app. But those directories do not exist, so when Nginx receives a request for a file in one of them, it looks for the directory, fails to find it, and produces a 404 error.
# Specifies the existence of these directories
location ~ ^/(images|javascript|js|css|flash|media|static)/ {
    # Specifies the location of the directories
    root /home/user/cherrypy_app;
}
I am not sure why I get the CherryPy application's content when I go to a file which explicitly does not exist. Looking at wsgi.py above, the WSGI application returns the same 200 response for every request path, which would explain it. Either way, it is a non-issue at this point.
Are there any steps to set up uwsgi with nginx for a simple WSGI Python script? Most of the places I have looked only cover Django, Flask, and other frameworks. I also need steps to serve static files -- are there any?
Obviously there are two steps: uwsgi configuration and nginx configuration.
The simplest uwsgi configuration is as follows (uwsgi accepts many different configuration formats, in this example I use xml):
<uwsgi>
    <chdir>/path/to/your/script/</chdir>
    <pythonpath>/path/to/your/script/</pythonpath>
    <processes>2</processes>
    <module>myscript.wsgi:WSGIHandler()</module>
    <master/>
    <socket>/var/run/uwsgi/my_script.sock</socket>
</uwsgi>
The only tricky option here is module, it should point to your WSGI handler class.
Also, make sure that /var/run/uwsgi/my_script.sock is readable and writeable for both uwsgi and nginx.
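For reference, a minimal WSGI callable that the module option could point at might look like this. The module name myscript.wsgi is assumed to match the XML above; the XML suggests a class-style handler, but a plain function works equally well and is what this sketch uses:

```python
# myscript/wsgi.py -- a minimal WSGI application
# (module name assumed to match the <module> option above).

def application(environ, start_response):
    # A WSGI callable receives the request environ and a start_response
    # callback, and returns an iterable of byte strings.
    body = b"Hello from uwsgi behind nginx\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

With this function, the module option would be myscript.wsgi:application instead of a handler class.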
The corresponding nginx configuration would look like this:
server {
    listen 80;
    server_name my.hostname;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/var/run/uwsgi/my_script.sock;
    }
}
If you need to serve static files, the simplest way is to add the following code to the server block:
location /static/ {
    alias /path/to/static/root/;
    gzip on;
    gzip_types text/css application/x-javascript application/javascript;
    expires +1M;
}
This example already includes gzip compression and support for browser caching.