Say I own a domain name, domain.com, and I host a static blog at www.domain.com. The advantage of having a static site is that I can host it for free on services like Netlify.
I'd now like to have several static web apps under the same domain name, so I don't have to purchase a domain for each app. I can do this by adding a subdomain for my apps. Adding a subdomain is easy enough; this video illustrates how to do it with GoDaddy, for example. I can create a page for my apps at apps.domain.com, where apps is my subdomain.
Say I have several static web apps: app1, app2, app3. I don't want a separate subdomain for each of these, e.g., app1.domain.com. What I'd like instead is to have each app as a subfolder under the apps subdomain. In other words, I'd like to have the following endpoints:
apps.domain.com/app1
apps.domain.com/app2
apps.domain.com/app3
At the apps.domain.com homepage, I'll probably have a static page listing out the various apps that can be accessed.
How do I go about setting this up? Do I need to have a server of some sort (e.g., nginx) at apps.domain.com? The thing is I'd like to be able to develop and deploy app1, app2, app3 etc. independently of each other, and independently of the apps subdomain. Each of these apps will probably be hosted by netlify or something similar.
Maybe there's an obvious answer to this issue, but I have no idea how to go about it at the moment. I would appreciate a pointer in the right direction.
Something along the lines of the setup below should get you started if you decide to use nginx. This is a very basic configuration; you may need to tweak it quite a bit to suit your requirements.
apps.domain.com will serve index.html from /var/www
apps.domain.com/app1 will serve index.html from /var/www/app1
apps.domain.com/app2 will serve index.html from /var/www/app2
apps.domain.com/app3 will serve index.html from /var/www/app3
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    index index.html;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen      80 default_server;
        listen      [::]:80 default_server;
        server_name apps.domain.com;
        root        /var/www;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        # With root /var/www and index index.html, each location below simply
        # serves /var/www/<path>/index.html; no extra directives are needed.
        location / {
        }

        location /app1 {
        }

        location /app2 {
        }

        location /app3 {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
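Because each app maps to its own directory, the apps can still be deployed independently of each other: publishing app1 is just a matter of copying its build output into /var/www/app1 without touching the other directories. A possible deploy step (a sketch only; the paths and host name are assumptions, not part of the original setup):

# hypothetical deploy for app1, run from its build directory
rsync -av --delete dist/ user@apps-server:/var/www/app1/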
I initially solved this problem using nginx, but I was very unhappy with that approach because I needed to pay for a server and set up and maintain the architecture for it.
The easiest way to do this that I know of today is to make use of URL rewrites, e.g. Netlify rewrites or Next.js rewrites.
Rewrites allow you to map an incoming request path to a different destination path.
Here is an example usage in my website.
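As an illustration only (not the author's actual configuration), a Netlify _redirects file in the site served at apps.domain.com could proxy each path to a separately deployed app. The app1.netlify.app-style destinations below are hypothetical placeholders:

# _redirects file for the apps.domain.com site
# status 200 makes these rewrites (proxies) rather than redirects
/app1/*  https://app1.netlify.app/:splat  200
/app2/*  https://app2.netlify.app/:splat  200
/app3/*  https://app3.netlify.app/:splat  200

This way each app remains an independent deployment, while the parent site only maps incoming paths to those deployments.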
Just one addition: if you're hosting the apps on an external server, you might want to set up nginx and use its proxy module to forward incoming requests from your nginx installation to the external web server:
web-browser -> nginx -> external-web-server
And for the location that needs to be forwarded:
location /app1 {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass https://url-of-external-webserver;
}
It would seem that you're asking the question prematurely: what actual issues have you run into when trying the naive approach?
It is generally best to have each app run on its own domain or subdomain; this is done to limit the impact of XSS attacks, where a vulnerability in one of your apps may result in your whole domain becoming vulnerable. This is because security features are generally implemented in the browser on a per-domain basis, where it is presumed that the whole domain is under the control of a single party (i.e., running a single app, at the end of the day).
Otherwise, there's really nothing special that must be done to have multiple apps on a single domain. Provided that the paths within each app are correct (i.e., they're either relative, or absolute and include the full path prefix of the specific app), there aren't really any specific issues to be aware of.
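For example (purely illustrative), if app1 is served under /app1/, asset references inside apps.domain.com/app1/index.html behave like this:

<script src="main.js"></script>        <!-- relative: resolves to /app1/main.js -->
<script src="/app1/main.js"></script>  <!-- absolute, including the app's prefix: also fine -->
<script src="/main.js"></script>       <!-- root-absolute without the prefix: breaks -->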
Related
I am trying to implement http2_push using nginx on Windows 7. I followed the steps mentioned in this article.
I'm running the nginx 1.13.12 executable. I have created and installed self-signed certificates, and they are working fine.
As mentioned in this answer, I checked and solved the certificate validation issue as well.
Still, the files I want to push are not getting pushed to the browser. I am checking this through the Network tab in the inspector (Google Chrome; screenshot attached).
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include      mime.types;
    default_type application/octet-stream;

    sendfile          on;
    keepalive_timeout 65;

    server {
        listen      443 ssl http2;
        server_name localhost;

        ssl_certificate     ssl/localhost.crt;
        ssl_certificate_key ssl/localhost.key;

        location = /test.html {
            root       html;
            http2_push /stylepush.css;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Output (screenshot of the Network tab not reproduced here).
Can anyone help me figure out where I am going wrong? Thanks in advance for the help.
HTTP/2 push only works when the pushed resource is needed by the page (i.e. it's referenced in the HTML). In this case, the fact that /stylepush.css is not loaded by the page at all (never mind by Push as the initiator) shows it is not being used.
If you go to chrome://net-internals/#http2 you should see this listed as an unclaimed push.
Add a reference to this CSS file in your HTML and you should see it as pushed.
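For example (assuming test.html is the page being requested), a stylesheet reference along these lines is what causes the pushed resource to be claimed:

<!-- in html/test.html -->
<link rel="stylesheet" href="/stylepush.css">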
If not, then go to chrome://net-internals/#events&q=type:HTTP2_SESSION in Chrome and provide the HTTP/2 Session data.
Additionally, Chrome requires a recognised certificate before it allows you to cache resources (and HTTP/2 pushed resources go into a cache before they are used). Since Chrome version 58, it also requires the Subject Alternative Name (SAN) to be set on the certificate, which takes some extra configuration when creating a self-signed certificate.
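As a sketch of that last point (assuming OpenSSL 1.1.1 or newer, which supports -addext; names and paths are placeholders), a self-signed certificate with a SAN can be generated like this:

# illustrative only; adjust the key size, validity and names to your setup
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout localhost.key -out localhost.crt \
    -subj "/CN=localhost" \
    -addext "subjectAltName=DNS:localhost"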
I have three sites configured on my server using NGINX and the first two are working fine. One is a static site and one is running Rails (using Unicorn). I have attempted to mirror the NGINX/Unicorn configurations.
For the non-working site, I get "problem loading site" in my browser and absolutely nothing in my NGINX error logs (even at debug level) or my Unicorn log. I also get nothing when I attempt to curl the site.
I have double-checked DNS by pinging the domain name and am running out of ideas. I've also tried making this the default server and browsing by IP address.
Thoughts on how I should go about debugging? I would like to at least understand if NGINX is seeing these requests or not.
NGINX configuration:
upstream unicorn-signup {
    server unix:/home/signup/app/tmp/sockets/unicorn.sock;
}

server {
    listen 80;
    listen [::]:80;

    root        /home/signup/app/current/public;
    server_name signup.quote2bill.com;

    # configure for Unicorn (NGINX acts as reverse proxy)
    location / {
        try_files $uri @unicorn;
    }

    location @unicorn {
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_pass http://unicorn-signup;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
Fixed! It was the dreaded force_ssl flag in my production configuration. For future travelers, here is how I went about troubleshooting:
1. Went on a Costco run to clear my mind and buy huge quantities of stuff.
2. To determine whether it was a DNS, NGINX, or Unicorn/Rails problem, I replaced my NGINX configuration with a very simple one and placed a simple index.html in my public root. This worked fine, which lets DNS off the hook (I could resolve the domain name at the web server).
3. I diffed the working and non-working NGINX configuration files for the nth time and made them as close as possible, but didn't find anything.
4. Then I noticed that when I was serving the simple index.html file in step 2 above, the domain was not getting redirected to https://, but when I switched back to my "normal" Unicorn/Rails version, I was always getting redirected.
5. I searched for Rails redirecting to SSL and remembered the force_ssl flag.
6. I checked my two projects and noticed the flag was not set in the working project but was set in the non-working one (smoking gun).
7. I changed it, committed, redeployed, reloaded the browser, and it... didn't work (!). Fortunately, I had the good sense to clear the browser cache and try again, and it is all good now.
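For reference (this is the typical Rails layout, not a quote from the project in question), the flag lives in the environment configuration:

# config/environments/production.rb
# The non-working app had this enabled, which forced every request to redirect to HTTPS.
config.force_ssl = false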
Hope this helps someone.
I am trying to build a docker-registry server from source (not as a container) on Ubuntu 14.04.1. I was able to get most of the way there using the instructions found on DigitalOcean.
I am able to curl http://localhost:5000 and https://user:password@localhost:8000 with no problems.
When I try to open it in a web browser, hoping to see more than just that, that is when the issues seem to happen.
Here is my docker-registry file in /etc/nginx/sites-available/:
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream docker-registry {
    server 192.168.x.x:5000;
}

server {
    listen 8000;
    server_name docker-registry;

    ssl on;
    ssl_certificate     /etc/nginx/ssl/docker-registry.crt;
    ssl_certificate_key /etc/nginx/ssl/docker-registry.key;

    proxy_set_header Host      $http_host;   # required for Docker client sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

    client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        # let Nginx know about our auth file
        auth_basic           "Restricted";
        auth_basic_user_file docker-registry.htpasswd;
        proxy_pass           http://docker-registry;
    }

    location /_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }
}
I have my docker registry stored locally in /var/docker-registry and ensured that it was readable by the www-data user. Why can I not see my images on the web browser?
If I tag an image and push it to my repository it works, I can see it in the web browser:
https://192.168.x.x:8000/v1/repositories/ubuntu-test/tags/latest
I see the following:
"5ba9dab47459d81c0037ca3836a368a4f8ce5050505ce89720e1fb8839ea048a"
When I try to get to:
https://192.168.x.x:8000/v1
Or:
https://192.168.x.x:8000/v1/repositories
Or:
https://192.168.x.x:8000/v1/images
I get a "not found" error
How would I be able to see everything in my /var/docker-registry folder (which is where these are stored....and yes, they are owned by the www-data user) through the web interface?
This is by design. Not only is there no reason to expose the entire URL path as a browsable tree, but there are severe security implications in doing so.
I'm assuming you don't have much experience with web programming. There is no directory '/v1/repositories', etc. Instead, there is a program (in this case written in either Python or Ruby) that listens for the URL path and has built-in logic to determine what to do.
i.e. if url = /v1/_ping: return 'ok'
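A minimal sketch of that idea (using Flask purely for illustration; this is not the registry's actual code):

# Illustrative only: URLs are routed to handler functions, not to directories on disk.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/_ping")
def ping():
    return jsonify(True)  # the application answers the ping itself

@app.route("/v1/repositories/<path:repo>/tags/<tag>")
def image_id_for_tag(repo, tag):
    # the real registry looks this up in its storage backend rather than
    # letting you browse /var/docker-registry over HTTP
    return jsonify("example-image-id")

So a URL like /v1/repositories only "exists" if the application defines a handler for it; anything else returns "not found".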
My title isn't the best, my knowledge of webstuff is quite basic, sorry.
What I want to achieve
I have one box, fanbox, running nginx on Arch Linux, which I use as the main entry point to my home LAN from the internet (namely from work, where I can only get out on ports 80 and 443). It acts as a reverse proxy, reachable via a changing domain name over which I have no control and that we will call home.net for now.
fanbox has its ports 80 and 443 mapped to home.net; that part was easy.
I have two web servers behind the firewall, web1.lan and web2.lan, plus web2ilo.lan (more on that below). Both web servers host various applications (which may have the same name on different machines) that can be accessed directly on the LAN via standard URLs (the names are given as examples; I have no control over the content):
http://web1.lan/phpAdmin/
http://web1.lan/gallery/
http://web2.lan/phpAdmin/
http://web2.lan/dlna/
...and so on...
Now web2ilo.lan is a particular case. It's the out-of-band management web interface of the HP server web2.lan. That particular web server offers only one application, so it can only be accessed via its root URL:
http://web2ilo/login.html
My goal is to access these via subpath of home.net like this:
http://home.net/web1/phpAdmin/
http://home.net/web1/gallery/
http://home.net/web2/phpAdmin/
http://home.net/web2/dlna/
http://home.net/web2ilo/login.html
My problem
That nearly works, but the web applications tend to rewrite URLs, so that after I log in to, respectively:
http://home.net/web1/phpAdmin/login.php
http://home.net/web2ilo/login.html
the browser is redirected respectively to
http://home.net/phpAdmin/index.php
http://home.net/index.html
Note that the relative subpaths web1 and web2ilo are gone, which logically gives me a 404.
My config
So far, I've searched a lot and I tried many options in nginx without understanding too much what I was doing. Here is my config that reproduces this problem. I've left SSL out for clarity.
server {
    listen 443 ssl;
    server_name localhost;

    # SSL stuff left out for clarity

    location / {
        root  /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /web1/ {
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass https://web1.lan/;
    }

    location /web2/ {
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass https://web2.lan/;
    }

    location /web2ilo/ {
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass https://web2ilo.lan/;
    }
}
After first answers
After the first couple of answers (thanks!), I realise that my setup is far from common and that I may be heading for trouble all alone.
What would then be a better way to access the web servers behind the firewall without touching the frontend ports or the domain/hostname?
You may wish to consider setting proxy_redirect to let nginx know that it should modify the "backend" server response headers (Location and Refresh) to the appropriate front-end URLs. You can either use the default setting, which lets nginx calculate the appropriate values from your location and proxy_pass directives, or explicitly specify the mappings like below:
proxy_redirect http://web1.lan/ /web1/;
See: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
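Applied to the /web1/ location from the question, that would look roughly like this (a sketch; adjust the scheme to whatever the backend actually sends in its Location headers):

location /web1/ {
    proxy_set_header Host $host;
    proxy_pass https://web1.lan/;
    # rewrite backend Location/Refresh headers back under /web1/;
    # "default" derives the mapping from the location and proxy_pass above
    proxy_redirect default;
    # or spell the mapping out explicitly:
    # proxy_redirect https://web1.lan/ /web1/;
}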
Note: This only affects response headers - not any links in HTML content, or any Javascript.
If you experience problems with links in content or Javascript, you can either modify the content on the backend servers (which you've indicated may not be possible), or adjust your proxy solution such that front end paths are the same as the back end ones (e.g., rather than http://frontend/web1/phpAdmin you simply have http://frontend/phpAdmin). This would entail adding location directives for each application, e.g.,
location /phpAdmin/ {
    proxy_set_header Host $host;
    proxy_redirect off;
    proxy_pass https://web1.lan/phpAdmin/;
}
Here is a tested example.
In my docker-compose.yml I use the demo image whoami to test:
whoareyou:
  image: "containous/whoami"
  restart: unless-stopped
  networks:
    - default
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.whoareyou.rule=Path(`/whoareyou`)"
    - "traefik.http.routers.whoareyou.entrypoints=web"
    - "traefik.http.routers.whoareyou.middlewares=https-redirect@file"
    - "traefik.http.routers.whoareyou-secure.rule=Path(`/whoareyou`)"
    - "traefik.http.routers.whoareyou-secure.entrypoints=web-secure"
    - "traefik.http.routers.whoareyou-secure.tls=true"
In my config.yaml I have my https redirect:
http:
  middlewares:
    https-redirect:
      redirectScheme:
        scheme: https
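For completeness, the entrypoint names referenced in the labels (web and web-secure) are defined in Traefik's static configuration; a minimal sketch of that piece, assuming the standard ports, would be:

entryPoints:
  web:
    address: ":80"
  web-secure:
    address: ":443"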
I did not find an answer to my question and decided to try a different approach:
I have now containerized most of my servers.
I use Traefik (https://containo.us/traefik/) as my "fanbox" (i.e. reverse proxy), as it covers my needs and also solves the SSL certificate handling in a fairly easy fashion.
Thanks all for your suggestions.
I have an ember.js application I developed on my local machine. I use a restify/node.js server to make it available locally.
When I navigate in my application, the address bar changes like this:
Example 1
1. http://dev.server:3000/application/index.html#about
2. http://dev.server:3000/application/index.html#/items
3. http://dev.server:3000/application/index.html#/items/1
4. http://dev.server:3000/application/index.html#/items/2
I am now trying to deploy it on a remote test server which runs nginx.
Although everything works well locally, on the test server I can navigate within my web application, but the part of the URI after the hash is not updated.
In any browser, http://test.server/application/index.html is always displayed in my address bar. For the same sequence of clicks as in Example 1, I always have:
1. http://web.redirection/application/index.html
2. http://web.redirection/application/index.html
3. http://web.redirection/application/index.html
4. http://web.redirection/application/index.html
Moreover, if I directly enter a complete URI http://web.redirection/application/index.html#/items/1 the browser will only display the content that is at http://test.server/application/index.html (which is definitely not the expected behaviour).
I suppose this comes from my NGINX configuration, since the application works perfectly on a local restify server.
NGINX configuration for this server is:
test.server.conf (which is symlinked into /etc/nginx/sites-enabled/test.server.conf)
server {
    server_name test.server web.redirection;

    root  /usr/share/nginx/test;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ \.csv$ {
        alias /usr/share/nginx/test/$uri;
    }
}
nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log  /var/log/nginx/error.log debug;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
EDIT:
Just to be sure that there were no missing files on my test server, I ran a restify/node server (like on my dev machine) and everything works fine when I connect to that server (!). Both the nginx and restify servers point to the same files.
EDIT 2
I discovered that my problem happens when I use a web redirection.
If I use an address like http://test.server/application/index.html, everything works fine.
If I use http://web.redirection/application/index.html, it does not work.
So it is my nginx conf that is not correctly redirecting web.redirection URIs to test.server, or something like that.
Does someone have an idea? What am I missing? What should I change to make this work?
EDIT 3 and solution
The web redirection I used was an A-type DNS record. This does not work. Using a CNAME-type DNS record solves the issue.
No, this has nothing to do with nginx. Anything past the # is never sent to the server; JavaScript code handles it on the client. I would suggest using Firebug or any browser inspector to make sure that all your JS files are being loaded and nothing fails with a 404 error, and also to check for errors in the inspector console.
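To illustrate the point about the fragment: both of the URLs below produce exactly the same request on the wire; the part after the # stays in the browser and is handled by Ember's router:

http://web.redirection/application/index.html#/items/1
http://web.redirection/application/index.html#/items/2

GET /application/index.html HTTP/1.1
Host: web.redirection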
The problem came from the DNS redirection from web.redirection to test.server.
It was an A-type record: this does not work.
Using a CNAME-type record that points directly to test.server works.