I wonder if there is a simple way (simple = nginx conf without any lua/perl extension) to achieve the following.
Given the following upstream server and listeners:
upstream backend {
    server 1.2.3.4:9080;
    server 1.2.3.4:9081;
    server 1.2.3.4:9082;
}

server {
    listen 8080;
    listen 8081;
    listen 8082;
    ...
    location / {
        proxy_pass http://backend;
    }
}
The requirement is that all traffic arriving on a given listen port is passed via proxy_pass to the equivalent port on the upstream.
Perhaps the upstream would not be used in this case; maybe I should use $http_port or similar instead. Any advice would be appreciated.
TL;DR: ngx_http_upstream_module isn't designed for this.
While it is possible by using the "route method" for session affinity, such an approach will leave you with a very unclear config.
Consider using multiple server blocks. Something like:
server {
    listen 8080;
    location / {
        proxy_pass http://1.2.3.4:9080;
    }
}

server {
    listen 8081;
    location / {
        proxy_pass http://1.2.3.4:9081;
    }
}
# ...and so on
The solution I ended up with was using Lua to set a variable ($upstream_port), which is then used as:
proxy_pass http://$upstream_host:$upstream_port$request_uri;
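If Lua is not an option, a similar effect can probably be had with a plain map on $server_port. A minimal sketch, reusing the ports and backend address from the question (untested assumption, not the original solution):

# map goes in the http context
map $server_port $backend_port {
    8080 9080;
    8081 9081;
    8082 9082;
}

server {
    listen 8080;
    listen 8081;
    listen 8082;

    location / {
        # proxy to the same host, shifting the port via the map above;
        # no resolver is needed because the host is a literal IP
        proxy_pass http://1.2.3.4:$backend_port$request_uri;
    }
}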
We need to set up multiple upstream servers and use proxy_next_upstream to fall back to a backup if the main server returns 404. However, the URI for the backup upstream server is different from the one sent to the main server, so I don't know whether this is possible.
In detail, the config snippet below works fine (when the URIs are the same for all upstream servers):
upstream upstream-proj-a {
    server server1.test.com;
    server server2.test.com backup;
}

server {
    listen 80;
    listen [::]:80;
    server_name www.test.com;

    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
For a request to http://test.com/proj/proj-a/file, it will first try http://server1.test.com/lib/proj/proj-a/file; if that returns 404 or times out, it will then try http://server2.test.com/lib/proj/proj-a/file. This is good.
However, server2 can now only accept URLs like http://server2.test.com/lib/proj/proj-a-internal/file, which is different from the URI sent to the main server. If I were only considering the backup server, I could write:
proxy_pass http://server2.test.com/lib/proj/proj-a-internal;
However, it looks like I cannot have a different proxy_pass per upstream server when combining it with proxy_next_upstream.
How can I achieve this?
I found a workaround using a plain proxy_pass: set localhost as the backup upstream server, then do the rewrite on behalf of the real backup upstream server.
The config is like below:
upstream upstream-proj-a {
    server server1.test.com:9991;
    # Use localhost as backup
    server localhost backup;
}

server {
    listen 80;
    listen [::]:80;
    resolver 127.0.1.1;
    server_name www.test.com;

    location /lib/proj/proj-a {
        # Do the rewrite, then proxy_pass to the real backup upstream server
        rewrite /lib/proj/proj-a/(.*) /lib/proj/proj-a-internal/$1 break;
        proxy_pass http://server2.test.com:9992;
    }

    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
It works fine; the only side effect is that when a request needs to go to the backup server, it creates another HTTP request from localhost to localhost, which seems to double the load on nginx. The goal is to transfer fairly big files, and I am not sure whether this impacts performance, especially if all the protocols are https instead of http.
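An alternative that avoids the extra localhost hop would be to intercept the 404 from the primary server and retry in a named location that rewrites the URI. A rough sketch, reusing the hostnames and ports from the question (untested, not the original workaround):

server {
    listen 80;
    server_name www.test.com;

    location /proj/proj-a {
        # treat a 404 body from the upstream as an error nginx can act on
        proxy_intercept_errors on;
        # on 404, or on a connection error / timeout, retry via @backup
        error_page 404 502 504 = @backup;
        proxy_pass http://server1.test.com:9991/lib/proj/proj-a;
    }

    location @backup {
        # the backup server expects the -internal form of the URI
        rewrite ^/proj/proj-a/(.*)$ /lib/proj/proj-a-internal/$1 break;
        proxy_pass http://server2.test.com:9992;
    }
}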
There are two web applications (websites) written in Go. One is turalasgar.pro (here I am using Go's built-in server). The other is engossip.com (for now it shows the same IP as the former). I have a VPS. I know I should use Nginx, but I have no idea how. I have heard of Caddy, but please, I need an nginx setup only, not Caddy. What I need is to run two (or more) applications on the same VPS. How should I configure Nginx, by listening on different ports or on the same port? Practical advice and examples are highly appreciated.
It's called a reverse proxy. Each application listens on its own port, and then you just point to them in the nginx config:
server {
    listen 80;
    server_name turalasgar.pro;
    location / {
        proxy_pass http://localhost:8080;
        ...
    }
}

server {
    listen 80;
    server_name engossip.com;
    location / {
        proxy_pass http://localhost:8081;
        ...
    }
}
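For reference, the "..." usually stands for the proxy headers that are commonly set. A fuller sketch of one of these blocks (the header set is an assumption, not part of the original answer):

server {
    listen 80;
    server_name turalasgar.pro;

    location / {
        proxy_pass http://localhost:8080;
        # pass the original host and client address through to the Go app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}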
Well, it is really easy; follow this guide:
https://www.digitalocean.com/community/tutorials/how-to-use-martini-to-serve-go-applications-behind-an-nginx-server-on-ubuntu
After you have one application working with Martini + nginx, just add another server block for the other app.
In case you need more information about server blocks:
https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-14-04-lts
I tried the above solutions, but they didn't work for me; this did:
https://gist.github.com/soheilhy/8b94347ff8336d971ad0
server {
    listen ...;
    ...

    location / {
        proxy_pass http://127.0.0.1:8080;
    }

    location /blog {
        rewrite ^/blog(.*) /$1 break;
        proxy_pass http://127.0.0.1:8181;
    }

    location /mail {
        rewrite ^/mail(.*) /$1 break;
        proxy_pass http://127.0.0.1:8282;
    }
    ...
}
I tried to follow the instructions here, but I still can't seem to get it to work. As in that question, I don't expect some of the servers to be running when nginx starts. I would actually prefer that a local 404 page be returned if the server is not running.
upstream main {
    server my_main:8080;
}

server {
    listen 80;
    server_name www.my_site.com my_site.com;

    location / {
        resolver 8.8.8.8 valid=30s;
        set $upstream_main main;
        proxy_pass http://$upstream_main;
    }
}
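The usual form of this trick skips the upstream block entirely, so that nginx resolves the hostname at request time rather than at startup. A sketch of that, plus a local 404 fallback when the backend is down (assumes my_main is resolvable by the configured resolver and that a /404.html file exists on disk):

server {
    listen 80;
    server_name www.my_site.com my_site.com;

    # must be a resolver that can actually resolve my_main
    # (e.g. 127.0.0.11 inside Docker)
    resolver 8.8.8.8 valid=30s;

    location / {
        # using a variable defers DNS resolution until request time,
        # so nginx starts even if my_main is not up yet
        set $upstream_main my_main:8080;
        proxy_pass http://$upstream_main;

        # if the backend is unreachable, serve a local 404 page instead
        error_page 502 504 =404 /404.html;
    }

    location = /404.html {
        internal;
        root /usr/share/nginx/html;
    }
}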
I'm having trouble figuring out load balancing on Nginx. I'm using:
- Ubuntu 16.04 and
- Nginx 1.10.0.
In short, when I pass my IP address directly into "proxy_pass", the proxy works:
server {
    location / {
        proxy_pass http://01.02.03.04;
    }
}
When I visit my proxy computer, I can see the content from the proxied IP...
but when I use an upstream directive, it doesn't work:
upstream backend {
    server 01.02.03.04;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
When I visit my proxy computer, I am greeted with the default Nginx server page and not the content from the upstream IP address.
Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors. It just doesn't proxy.
Okay, it looks like I found the answer...
Two things about the backend servers, at least for the above scenario when using IP addresses:
- a port must be specified
- the port cannot be :80 (according to #karliwsn the port can be 80; it's just that the upstream servers cannot listen on the same port as the reverse proxy. I haven't tested it yet, but it's good to note)
The backend server block(s) should be configured as follows:
server {
    # because of the reverse proxy, *do not* listen on port 80
    listen 8080;
    listen [::]:8080;
    server_name 01.02.03.04;
    # your other statements below
    ...
}
and the reverse proxy server block should be configured as below:
upstream backend {
    server 01.02.03.04:8080;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
It looks like if a backend server is listening on :80, the reverse proxy doesn't render its content. I guess that makes sense, since the backend is in fact using the default port 80 for the general public.
Thanks #karliwson for nudging me to reconsider the port.
The following example works. The only thing to mention is that if the server IP is used as the server_name, then the IP should be used to access the site, meaning that in the browser you need to type the URL as http://yyy.yyy.yyy.yyy (or http://yyy.yyy.yyy.yyy:80); if you use the domain name as the server_name, then access the proxy server using the domain name (e.g. http://www.yourdomain.com):
upstream backend {
    server xxx.xxx.xxx.xxx:8080;
}

server {
    listen 80;
    server_name yyy.yyy.yyy.yyy;
    location / {
        proxy_pass http://backend;
    }
}
upstream app_front_static {
    server 192.168.206.105:80;
}
I've never seen this before; does anyone know what it means?
It's used for proxying requests to other servers.
An example from http://wiki.nginx.org/LoadBalanceExample is:
http {
    upstream myproject {
        server 127.0.0.1:8000 weight=3;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    server {
        listen 80;
        server_name www.domain.com;
        location / {
            proxy_pass http://myproject;
        }
    }
}
This means all requests for / go to any of the servers listed under the upstream block, with a preference for port 8000 (weight=3 means it receives roughly three times as many requests as each of the others).
upstream defines a cluster that you can proxy requests to. It's commonly used for defining either a web server cluster for load balancing, or an app server cluster for routing / load balancing.
If we have a single server, we can include it directly in the proxy_pass directive. For example:
server {
    ...
    location / {
        proxy_pass http://192.168.206.105:80;
        ...
    }
}
But if we have many servers, we use upstream to maintain the list of servers, and Nginx will load-balance the incoming traffic across them, as shown in the example above.
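As an illustration of the multi-server case, here is a small sketch with a few of the commonly used upstream parameters (the hostnames are made up for the example):

upstream app_cluster {
    least_conn;                               # pick the server with the fewest active connections
    server app1.example.com:8080 weight=2;
    server app2.example.com:8080;
    server app3.example.com:8080 max_fails=3 fail_timeout=30s;
    server backup1.example.com:8080 backup;   # only used when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_cluster;
    }
}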