I am new to NGINX and I am trying to load balance our ERP web servers.
I have 3 web servers running on port 80, powered by WebSphere, which are a black box to me:
* web01.example.com/path/apphtml
* web02.example.com/path/apphtml
* web03.example.com/path/apphtml
NGINX is listening for the virtual URL ourerp.example.com and proxying it to the cluster.
Here is my config:
upstream myCluster {
    ip_hash;
    server web01.example.com:80;
    server web02.example.com:80;
    server web03.example.com:80;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name ourerp.example.com;
    location / {
        rewrite ^(.*)$ /path/apphtml break;
        proxy_pass http://myCluster;
    }
}
When I only use proxy_pass, NGINX load balances but forwards the request to web01.example.com and not web01.example.com/path/apphtml.
When I try adding a URL rewrite, it simply rewrites the virtual URL and I end up with ourerp.example.com/path/apphtml.
Is it possible to do the URL rewrite, or append the application path, at the upstream level?
If you are trying to map / to /path/apphtml/ through the proxy, use:
proxy_pass http://myCluster/path/apphtml/;
See this document for more.
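For example, a minimal sketch of the location block (assuming the upstream name myCluster from your config):
location / {
    # The URI part of proxy_pass replaces the matched location prefix (/),
    # so a request for /foo is proxied as /path/apphtml/foo.
    proxy_pass http://myCluster/path/apphtml/;
}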
The problem with your rewrite statement is a missing $1 at the end of the replacement string. See this document for more, but as I indicated above, you do not need the rewrite statement, as the proxy_pass statement is capable of doing the same job anyway.
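If you did want to keep the rewrite approach, the corrected form would look something like this:
location / {
    rewrite ^(.*)$ /path/apphtml$1 break;
    proxy_pass http://myCluster;
}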
I want to redirect all domains from www to non-www using the Nginx config file nginx.conf.
I have tried the configuration below, but it only works for URLs starting with HTTP and does not work for HTTPS.
I have added the server block below:
server {
    server_name "~^(?!www\.).*";
    return 301 $scheme://$1$request_uri;
}
Since you didn't specify a listening port in the server block you've shown in your question, it will listen on plain HTTP TCP port 80 by default. You need to specify
listen 443 ssl;
to listen on HTTPS TCP port 443. However, to make the server block work over HTTPS, you'd need to specify an SSL certificate/key (at least), and to make a user's browser follow a redirect returned by nginx, that certificate should be a valid one, issued for the domain name you want redirected; otherwise the browser will complain about an invalid certificate and won't follow the redirect location.
So if you want some kind of universal server block for redirecting every HTTPS request from a www to a non-www domain, it is impossible unless you have a certificate that includes every domain name you want to redirect (which seems impossible to have for a custom, non-predefined list of domain names).
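For a single known domain, a minimal sketch of such a redirect block (assuming a valid certificate for www.example.com; the paths are examples):
server {
    listen 443 ssl;
    server_name www.example.com;
    # The certificate must cover www.example.com, or the browser will
    # refuse the connection before it ever sees the redirect.
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    return 301 https://example.com$request_uri;
}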
Update
Although this isn't a thing I'd do myself in a production environment, there actually is a way to achieve a workable solution using lua-resty-auto-ssl (see the documentation examples), OpenResty/lua-nginx-module and the following server block (remember that server names specified by a domain prefix, e.g. www.*, have the lowest priority compared to exactly matched server names, e.g. www.example.com, or server names specified by a domain suffix, e.g. *.example.com):
init_by_lua_block {
    auto_ssl = (require "resty.auto-ssl").new()
    auto_ssl:set("allow_domain", function(domain)
        return true
    end)
    auto_ssl:init()
}
map $host $basename {
    ~^www\.(.+)    $1;
    default        $host;
}
server {
    listen 443 ssl;
    server_name www.*;
    ssl_certificate_by_lua_block {
        auto_ssl:ssl_certificate()
    }
    ssl_certificate /path/to/dummy.crt;
    ssl_certificate_key /path/to/dummy.key;
    return 301 https://$basename$request_uri;
}
In order for this to work you'd also need the corresponding plain HTTP block to allow ACME challenge(s) to be successfully completed:
server {
    listen 80;
    server_name www.*;
    location / {
        return 301 https://$basename$request_uri;
    }
    location /.well-known/acme-challenge/ {
        content_by_lua_block {
            auto_ssl:challenge_server()
        }
    }
}
We need to set up multiple upstream servers and use proxy_next_upstream to fall back to a backup if the main server returns 404. However, the URI for the backup upstream server is different from the one for the main server, so I don't know whether this is possible.
In detail, the config snippet below works fine (as long as the URIs are the same for all upstream servers):
upstream upstream-proj-a {
    server server1.test.com;
    server server2.test.com backup;
}
server {
    listen 80;
    listen [::]:80;
    server_name www.test.com;
    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
For a request to http://test.com/proj/proj-a/file, it will first try http://server1.test.com/lib/proj/proj-a/file; if that returns 404 or times out, it then tries http://server2.test.com/lib/proj/proj-a/file. This is good.
However, server2 can only accept URLs like http://server2.test.com/lib/proj/proj-a-internal/file, which is different from the URI used for the main server. Considering only the backup server, I could write something like:
proxy_pass http://server2.test.com/lib/proj/proj-a-internal
However, it looks like I cannot have a different proxy_pass for each upstream server when combining them with proxy_next_upstream.
How can I achieve this?
I found a workaround using a simple proxy_pass: set localhost as the backup upstream server, then do the rewrite on behalf of the real backup upstream server.
The config looks like this:
upstream upstream-proj-a {
    server server1.test.com:9991;
    # Use localhost as backup
    server localhost backup;
}
server {
    listen 80;
    listen [::]:80;
    resolver 127.0.1.1;
    server_name www.test.com;
    location /lib/proj/proj-a {
        # Do rewrite then proxy_pass to real upstream server
        rewrite /lib/proj/proj-a/(.*) /lib/proj/proj-a-internal/$1 break;
        proxy_pass http://server2.test.com:9992;
    }
    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
It works fine, but the only side effect is that when a request needs to go to the backup server, it creates another HTTP request from localhost to localhost, which seems to double the load on nginx. The goal is to transfer quite big files, and I am not sure whether this impacts performance, especially if all the protocols are HTTPS instead of HTTP.
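One possible way to avoid the loopback hop is to intercept the 404 from the primary server and retry against the backup in a named location. This is an untested sketch reusing the server names above, and it only covers the 404 case (not error/timeout):
location /proj/proj-a {
    proxy_pass http://server1.test.com:9991/lib/proj/proj-a;
    # Let nginx handle the backend's 404 instead of passing it to the client
    proxy_intercept_errors on;
    error_page 404 = @proj-a-backup;
}
location @proj-a-backup {
    # Retry the original request against the backup server's URI layout
    rewrite ^/proj/proj-a/(.*)$ /lib/proj/proj-a-internal/$1 break;
    proxy_pass http://server2.test.com:9992;
}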
I'm using nginx as a reverse proxy for my public EC2 instance. I have:
A nice, clean public domain
An AWS-generated public domain (*.compute-1.amazonaws.com)
An AWS-generated public IP address
I would like to have all traffic go over HTTPS to the public domain. I attempted to do this by creating a "primary" server block configured to route to my application, with two secondary server blocks to catch all other traffic and redirect to https://public.domain.com. This is what my config looks like:
# "Primary" block
server {
    listen 443 ssl;
    server_name public.domain.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
    # Other config; SSL config
}
# Catch-all redirects
server {
    listen 80;
    return 301 https://public.domain.com$request_uri;
}
server {
    listen 443;
    return 301 https://public.domain.com$request_uri;
}
In testing this, I get the following results:
http://public.domain.com >> https://public.domain.com (correct)
http://2308.compute-1.amazonaws.com >> https://public.domain.com (correct)
https://2308.compute-1.amazonaws.com >> No redirect (WRONG!)
http://55.255.255.255 >> https://public.domain.com (correct)
https://55.255.255.255 >> No redirect (WRONG!)
Why is nginx not redirecting my HTTPS traffic to my public domain? Is the server_name not used in the URL matching process?
You do not have a default server set on port 443, so nginx takes the first defined host, which is server_name public.domain.com;.
Use listen 443 ssl default_server. You also need a certificate on the redirect server for this config to work (a self-signed wildcard is fine; clients will show a warning anyway if the host does not match).
See https://serverfault.com/questions/578648/properly-setting-up-a-default-nginx-server-for-https
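A minimal sketch of such a catch-all block (assuming a self-signed certificate/key at example paths):
server {
    listen 443 ssl default_server;
    server_name _;
    # Any certificate will do here; clients hitting the raw IP or the
    # AWS hostname get a certificate warning before the redirect.
    ssl_certificate /etc/nginx/ssl/selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/selfsigned.key;
    return 301 https://public.domain.com$request_uri;
}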
I have run into an annoying issue with the Nginx load balancer; please see the following configuration:
http {
    server {
        listen 3333;
        server_name localhost;
        location / {
            proxy_pass http://node;
            proxy_redirect off;
        }
    }
    server {
        listen 7777;
        server_name localhost;
        location / {
            proxy_pass http://auth;
            proxy_redirect off;
        }
    }
    upstream node {
        server localhost:3000;
        server localhost:3001;
    }
    upstream auth {
        server localhost:8079;
        server localhost:8080;
    }
}
So what I want is to provide two load balancers: one sends requests on port 3333 to internal ports 3000 and 3001, and the second sends requests on port 7777 to internal ports 8079 and 8080.
When I test this setup, all requests to http://localhost:3333 work great and the URL in the address bar stays the same, but when I visit http://localhost:7777, all the requests are redirected to the internal URLs http://localhost:8080 or http://localhost:8079.
I don't know why the two load balancers behave differently. I just want visitors to see only http://localhost:3333 or http://localhost:7777; they should never see the internal ports 8080 or 8079.
But why do the Node servers on ports 3000 and 3001 work fine, while the Java servers on ports 8080 and 8079 do not get their URLs rewritten and only redirect?
As you can see from the configuration, the two blocks are exactly the same.
Thanks.
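A likely cause, assuming the Java backend issues absolute redirects to its own host:port (the backend itself is not shown here), is that proxy_redirect off tells nginx not to rewrite those Location headers. A sketch of the port 7777 block that maps them back to the public port:
server {
    listen 7777;
    server_name localhost;
    location / {
        proxy_pass http://auth;
        # Rewrite Location/Refresh headers that point at the backend
        # ports so the client keeps seeing port 7777.
        proxy_redirect http://localhost:8079/ http://localhost:7777/;
        proxy_redirect http://localhost:8080/ http://localhost:7777/;
    }
}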
I want http://example.com/test/4000 (or some other number) to proxy_pass to http://localhost:4000/test.
Is this possible, and how do I do this?
Here is an example:
server {
    listen 80;
    server_name example.com;
    # default port for proxy
    set $port 80;
    # redirect /4000 --> /4000/ just to be sure
    # that next rewrite works properly
    rewrite ^(/\d+)$ $1/ redirect;
    # rewrite /4000/path/to/file to /path/to/file
    # and store port number to variable
    rewrite ^/(?<port>\d+)(.+)$ $2;
    location / {
        # proxy request to localhost:$port
        proxy_pass http://localhost:$port;
    }
}
It proxies a request to /PORT_NUMBER/path/to/file to localhost:PORT_NUMBER/path/to/file. If the URL doesn't start with a port number, it falls back to 80, so a request to /path/to/file will be proxied to localhost:80/path/to/file.
There is no check that the port number is valid, so it's not recommended for production. But using a variable port number in production is a really bad idea anyway.
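Also note that when proxy_pass contains variables, nginx resolves the hostname at request time; my assumption is that using an IP literal instead of localhost sidesteps the need for a resolver directive:
location / {
    # 127.0.0.1 is an IP literal, so no runtime DNS lookup is required
    # even though the port comes from a variable.
    proxy_pass http://127.0.0.1:$port;
}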