I've been following this article trying to host multiple websites on the same machine using IIS and Nginx.
Based on the provided article I produced the following nginx.conf:
http {
    server {
        listen 80;
        server_name localhost;
        keepalive_timeout 1;
        gzip_types text/css text/plain text/xml application/xml application/javascript application/x-javascript text/javascript application/json text/x-json;
        gzip_proxied no-store no-cache private expired auth;
        gzip_disable "MSIE [1-6]\.";

        # new website
        location /bacon/ {
            proxy_pass http://127.0.0.1:1500/;
            proxy_http_version 1.1;
            gzip_static on;
        }

        # old website
        location / {
            proxy_pass http://127.0.0.1:8881;
            proxy_http_version 1.1;
            gzip_static on;
        }
    }
}
My old website is working just fine.
Yet when I try to access my new website I get the following errors:
Note that my new website works just fine if directly requested through http://127.0.0.1:1500/.
What am I missing here?
The URL rewriting done by the proxy_pass directive applies only to the HTTP request and to HTTP redirects in the response. That means that if http://127.0.0.1:1500/ replies with HTTP 30x Location: http://127.0.0.1:1500/aaaa/, nginx will rewrite it to http://localhost/bacon/aaaa/.
But this rewriting does not touch the response body. Any links in the response HTML stay the same - <a href="/aaaa/" - so there is no /bacon/ part there.
There are two ways to fix it. The first is to edit your application: prefix all links with /bacon/, or use relative URLs and add <base href="/bacon/"> to the head of each file.
If editing the application is not possible, you can rewrite the body with ngx_http_sub_module. The module is documented at http://nginx.org/en/docs/http/ngx_http_sub_module.html
With this approach you need to add a sub_filter for every HTML construction where a link is used. For example:
sub_filter_once off;
sub_filter ' href="/' ' href="/bacon/';
sub_filter ' src="/' ' src="/bacon/';
You just need to be careful: put all the sub_filter directives inside the /bacon/ location.
Fixing the backend application is much preferred, but sometimes only sub_filter can help.
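Putting the pieces together, the /bacon/ location from the question could look like this sketch. The Accept-Encoding header is an extra assumption on my part: sub_filter does not process responses that the backend has already compressed, so it is safest to ask for an uncompressed body:

```nginx
# new website, with body rewriting (sketch)
location /bacon/ {
    proxy_pass http://127.0.0.1:1500/;
    proxy_http_version 1.1;
    # sub_filter cannot rewrite compressed bodies, so request an
    # uncompressed response from the backend
    proxy_set_header Accept-Encoding "";
    sub_filter_once off;
    sub_filter ' href="/' ' href="/bacon/';
    sub_filter ' src="/'  ' src="/bacon/';
}
```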
There is also a third method, but it applies only in some rare cases. If /flutter_servivce_worker.js doesn't exist in the 127.0.0.1:8881 backend, you can add a custom location for that file and proxy_pass it to the bacon backend:
location = /flutter_servivce_worker.js {
    proxy_pass http://127.0.0.1:1500;
}
Of course, this method helps only in very limited cases, when you are missing just a few files and do not use any links.
I believe your first app is loading main.dart.js from the second (at the root path) because you forgot to change <base href="/"> to <base href="/bacon/"> in the index.html file.
It has nothing to do with NGINX.
For the new site, the request is routed fine and the HTML is loaded into the browser. But the application's dependent static file references still point to the base location path /.
Depending on the chosen frontend framework, the base route should be changed to /bacon
(or)
create a folder named bacon, place the built files in that folder, and serve the static content directly from Nginx as a plain web server
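For the second option, a minimal sketch, assuming the built files are copied to /var/www/bacon (the path is my assumption):

```nginx
# serve the new site's build output as plain static files
location /bacon/ {
    root /var/www;  # nginx maps /bacon/x to /var/www/bacon/x
    # fall back to the app's entry point for client-side routes
    try_files $uri $uri/ /bacon/index.html;
}
```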
Have you tried?
# new website
location /new/ {
    proxy_pass http://127.0.0.1:1500;
    proxy_http_version 1.1;
    gzip_static on;
}

# old website
location /old/ {
    proxy_pass http://127.0.0.1:8881;
    proxy_http_version 1.1;
    gzip_static on;
}
Related
I have a site served from S3 behind Nginx, with the following Nginx configuration.
server {
    listen 80 default_server;
    server_name localhost;
    keepalive_timeout 70;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript application/javascript text/xml application/xml application/xml+rss text/javascript;

    location / {
        proxy_pass http://my-bucket.s3-website-us-west-2.amazonaws.com;
        expires 30d;
    }
}
At present, whenever I build a new version, I just delete the target bucket's contents and upload the new frontend files to it.
Since I am deleting the bucket's contents, there is no way I can go back to a previous version of the frontend, even with versioning enabled on the bucket. So I want to upload new frontend files into a version directory (for example 15) in the S3 bucket and then set up a redirect from http://my-bucket.s3-website-us-west-2.amazonaws.com/latest to http://my-bucket.s3-website-us-west-2.amazonaws.com/15
Does anyone know how this can be done?
There are multiple ways to do this:
The easiest may be through a symbolic link, provided that your environment allows that.
ln -fhs ./15 ./latest
Another option is an explicit external redirect issued to the user, where the user would see the new URL. This has the benefit that multiple versions can be accessed at the same time without any synchronisation issues: for example, if a client decides to do a partial download, everything should still work, because the client will most likely be resuming against the actual target, not the /latest shortcut.
location /latest {
    rewrite ^/latest(.*) /15$1 redirect;
}
The final option is an internal redirect within nginx; some third-party applications call this URL masquerading. It may or may not be appropriate, depending on your requirements; an obvious deficiency is with partial downloads, where resuming a big download may result in corrupted files:
location /latest {
    rewrite ^/latest(.*) /15$1 last;
}
References:
http://nginx.org/r/location
http://nginx.org/r/rewrite
One of the simplest ways to handle this situation is with variables. You can easily include a file that sets the current latest version. With this method you will need to reload your nginx configuration whenever you update the version.
Create a simple configuration file for setting the latest version
# /path/to/latest.conf
set $latest 15;
Include your latest.conf in the server block, and add a location that proxies to the latest version.
server {
    listen 80 default_server;
    server_name localhost;

    # SET LATEST
    include /path/to/latest.conf;

    location / {
        proxy_pass http://s3host;
        expires 30d;
    }

    # Note the / at the end of the location and the proxy_pass directive.
    # This will strip the "/latest/" part of the request uri and pass the
    # rest like so: /$version/$remaining_request_uri
    location /latest/ {
        proxy_pass http://s3host/$latest/;
        expires 30d;
    }

    ...
}
Another way to do this dynamically would be to use Lua to script the behavior. That is a little more involved, though, so I will not get into it in this answer.
I've created an intranet HTTP site where users can upload their files. I have created a location like this one:
location /upload/ {
    limit_except POST { deny all; }
    client_body_temp_path /home/nginx/tmp;
    client_body_in_file_only on;
    client_body_buffer_size 1M;
    client_max_body_size 10G;
    proxy_set_header X-upload /upload/;
    proxy_set_header X-File-Name $request_body_file;
    proxy_set_body $request_body_file;
    proxy_redirect off;
    proxy_pass_request_headers on;
    proxy_pass http://localhost:8080/;
}
Quite easy, as suggested in the official docs. When the upload is complete, the proxy_pass directive calls the custom URI and performs filesystem operations on the newly created temp file.
curl --request POST --data-binary "@myfile.img" http://myhost/upload/
Here's my problem: I need some kind of custom hook/operation telling me when the upload begins - something nginx can call before starting the HTTP stream. Is there a way to achieve that? I mean, before uploading big files I need to call a custom URL (something like proxy_pass) to inform the server about the upload and execute certain operations.
I have tried the echo-nginx module, but it didn't work with these HTTP POSTs (binary, form-urlencoded). I don't want to use external scripts to deal with the upload; I'd rather keep this kind of operation inside nginx (more performant).
Thanks in advance.
Ben
Self-replying.
I have found this directive, which solves my own problem:
auth_request <something>
So I can do something like:
location /upload/ {
    ...
    # Pre auth
    auth_request /somethingElse/;
    ...
}

# Newly added section
location /somethingElse/ {
    ...
    proxy_pass ...;
}
This seems to work fine, and is useful for uploads as well as for general auth or basic prechecks.
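For completeness, a fuller sketch of that pattern; /preupload and its backend path are hypothetical names of my own, and the two stripped-body directives are needed because auth_request issues a body-less subrequest:

```nginx
location /upload/ {
    limit_except POST { deny all; }
    client_body_in_file_only on;
    proxy_set_header X-File-Name $request_body_file;
    proxy_set_body $request_body_file;
    # called before the upload is handed to the backend
    auth_request /preupload;
    proxy_pass http://localhost:8080/;
}

location = /preupload {
    internal;
    # the auth subrequest carries no body; strip Content-Length accordingly
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_pass http://localhost:8080/preupload;
}
```

If the /preupload backend returns 2xx the upload proceeds; a 401/403 rejects it before the body is passed on.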
I'm having a problem with nginx's sub_filter rewrite rules not working with CSS files. I'm serving content on a path (/site) and need all URLs in JS & CSS to be prefixed correctly.
I've specified the mime type when linking in the CSS. From the template:
<link href="/static/css/site.css" type="text/css" rel="stylesheet" />
nginx has sub filters enabled and I've explicitly specified to include text/css:
location /site {
    access_log /opt/logs/test/nginx-access-site.log combined if=$loggable;
    error_log /opt/logs/test/nginx-errors-site.log error;

    rewrite ^(/site)$ $1/;
    rewrite ^/site(.+) $1 break;

    sub_filter "test.domain.tld" "test.domain.tld/site";
    sub_filter "'/" "'/site/";
    sub_filter '"/' '"/site/';
    sub_filter "http:" "https:";
    sub_filter_types text/html text/css text/javascript;
    sub_filter_once off;

    include proxy_params;
    proxy_pass http://site_test$1$is_args$args;
    proxy_redirect http://test.domain.tld/ https://test.domain.tld/site/;
}
The reference to the CSS file is rewritten correctly. From the HTML output:
<link href="/site/static/css/site.css" type="text/css" rel="stylesheet" />
The issue is it's not rewriting within the CSS file, so image paths are incorrect:
.sortable th .asc {
    background-image: url("/static/img/up_arrow.gif");
}
I've tried being overly permissive without any difference:
sub_filter_types *;
Have I misunderstood the use of sub_filter? I assumed that because the CSS was being served directly by nginx it would also be rewritten.
I found the solution after some searching, and wasted time on several approaches that didn't work, so hopefully this helps someone else who finds this post.
Apparently, by default sub_filter works only on text/html. I tried various options to enable it for text/javascript and text/css, like this, but they didn't work:
sub_filter_types text/xml text/css text/javascript;
I finally got it working by filtering all types like this:
sub_filter_once off;
sub_filter_types *;
Remember to restart nginx, and remember to clear the cache in your browser.
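For reference, a minimal sketch of the combination that ended up working, using the upstream name from the question. The Accept-Encoding header is an extra precaution of my own, since sub_filter does not process responses the backend has already compressed:

```nginx
location /site {
    # sub_filter cannot rewrite gzipped bodies
    proxy_set_header Accept-Encoding "";

    sub_filter_once off;
    sub_filter_types *;           # apply sub_filter to every MIME type
    sub_filter '"/' '"/site/';    # prefix absolute URLs in HTML and CSS

    include proxy_params;
    proxy_pass http://site_test;
}
```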
I am building a website which consists of a CherryPy application and some static files (CSS, JavaScript, images, et al). I am using Nginx as a reverse-proxy to serve the static files itself and to serve the application. I followed the configuration for this set-up as described here: How to Deploy Python WSGI Applications Using a CherryPy Web Server Behind Nginx.
But I am getting strange behaviour from this configuration when trying to serve the static files. When any static file is requested, the content of the application is returned. For instance, I have two completely different files, wsgi.py (which returns an HTML document) and style.css. If I navigate to style.css, it exists, but its content is the output of the application.
Additionally, anywhere I try to navigate, even to files that do not exist, the content returned is that of the application. But if I navigate to localhost/static/* I get a 404 error. Anywhere else (e.g. gibberish like localhost/asha9rghu/ay98394h/jasdhiuah) the content of the application is returned.
The error and access logs say everything is fine when the undesired content is being returned.
Here is the set-up in its simplest form where the problem still exists:
Root Directory
cherrypy_app
|
|----server.py
|----wsgi.py
|----style.css
server.py
from wsgi import application
import cherrypy

if __name__ == '__main__':
    cherrypy.tree.graft(application, "/")
    cherrypy.server.unsubscribe()
    server = cherrypy._cpserver.Server()
    server.socket_host = "0.0.0.0"
    server.socket_port = 8080
    server.thread_pool = 30
    server.subscribe()
    cherrypy.engine.start()
    cherrypy.engine.block()
wsgi.py
def website():
    website_html = """<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" type="text/css" href="/style.css" />
  </head>
  <body>
    <p>Cute bunny rabbits.</p>
  </body>
</html>"""
    return website_html

def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    # WSGI response bodies must be iterables of bytes
    return [website().encode('utf-8')]
style.css
body {
    color: #FFFFFF;
    background: #000000;
}
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/x-javascript
               application/atom+xml;

    upstream app_servers {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root /home/user/cherrypy_app;
        }

        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
The problem lies solely in the Nginx configuration file. The solution is simply to create one or more of the directories I specified and put the static files in them.
Essentially, in the configuration file, I have specified to Nginx that there are a number of directories (images, javascript, et cetera) and that they are located at /home/user/cherrypy_app. But in fact, those directories do not exist. So when Nginx receives a request for a file in one of those directories, it expects them to be there, but they are not. In which case, Nginx produces a 404 error.
# Specifies the existence of these directories
location ~ ^/(images|javascript|js|css|flash|media|static)/ {
    # specifies the location of the directories
    root /home/user/cherrypy_app;
}
I am not sure why I get the CherryPy application's content when I go to a file that explicitly does not exist. Maybe this is a feature of CherryPy (a redirect?). Either way, it is a non-issue at this point.
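Concretely, with the layout from the question, the fix amounts to moving style.css into a directory the location block actually matches:

```
cherrypy_app
|
|----server.py
|----wsgi.py
|----static
|    |----style.css
```

and changing the stylesheet link in the HTML to href="/static/style.css".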
I'm running across a strange issue using nginx as a reverse proxy for some apps hosted on github.
The page and referenced JavaScript load fine, but the images, e.g. images/icon.png, are not loading. I'm getting around this for now by using sub_filter to rewrite the relative links to point at the original file address. This is more of a hack than an actual fix.
Strangely, the Javascript library is also referenced as a relative link, eg scripts/app.js, and it is loading correctly. I was thinking maybe it's a problem with MIME types, but can't seem to make the images work without the URL rewrite.
Here's the location code snippet:
location ~* /app/data {
    rewrite ^/app/data/(.*)$ /app-data/$1 break;
    proxy_set_header Host myhost.github.io;
    proxy_pass http://myhost.github.io;
    gzip on;
    gzip_types text/xml;
    sub_filter_types text/html;
    sub_filter_once off;
    sub_filter \"img/ \"http://myhost.github.io/app-data/img/;
}