nginx best practices for reloading servers

I have an nginx config which has:
events {
    worker_connections 1024;
}
http {
    upstream myservers {
        server server1.com:9000;
        server server2.com:9000;
        server server3.com:9000;
    }
    server {
        access_log /var/log/nginx/access.log combined;
        listen 9080;
        location / {
            proxy_pass http://myservers;
        }
    }
}
I need to reload the backend servers. The method I am using is to bring up the new servers on port 9001 and then do nginx -s reload with the following modification to the config:
upstream myservers {
    server server1.com:9000 down;
    server server2.com:9000 down;
    server server3.com:9000 down;
    server server1.com:9001;
    server server2.com:9001;
    server server3.com:9001;
}
Then I bring down the old servers. Before doing that, however, I need to make sure that all workers that were handling requests to the old servers are done. How do I check this? Also, is this the best way to reload backend servers with the free version of nginx?
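One way to watch for this is sketched below. It assumes a Linux host; the "shutting down" process title is standard nginx behavior during a graceful reload, and port 9000 is simply taken from the config above:
# After nginx -s reload, old workers stop accepting new requests,
# finish their in-flight ones, and then exit; while draining, their
# process title changes, so you can watch for them:
ps aux | grep '[n]ginx: worker process is shutting down'
# Alternatively, check for remaining established connections
# to the old backend port (9000 here):
ss -t state established '( dport = :9000 )'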

Related

nginx can't access the backend server in a VPS but we can access the backend directly by browser

The nginx configuration is as follows:
events {
    worker_connections 4096;
}
http {
    upstream myproject {
        server ip_adder:80;
        server ip_adder:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myproject;
        }
    }
}
I run the servers in a VPS. When I request ip_adder:80 directly in the browser, I see the response, but when I send the request through nginx to be proxied to the upstream, I get a 502 Bad Gateway response. Can anyone help?
This problem was solved by the following method: the upstream server ports are probably filtered or closed. Please refer to this link for further review:
https://nmap.org/book/man-port-scanning-basics.html
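To verify that from the machine running nginx, something like the following can be used (a sketch; ip_adder stands for the real upstream address, as in the config above):
# Scan the upstream ports from the nginx host; "filtered" or
# "closed" in the output would explain the 502
nmap -p 80,8080 ip_adder
# Or test the TCP connection and HTTP response directly:
curl -sv http://ip_adder:8080/ -o /dev/null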

Nginx Reverse Proxy upstream not working

I'm having trouble figuring out load balancing on Nginx. I'm using:
- Ubuntu 16.04 and
- Nginx 1.10.0.
In short, when I pass my ip address directly into "proxy_pass", the proxy works:
server {
    location / {
        proxy_pass http://01.02.03.04;
    }
}
When I visit my proxy computer, I can see the content from the proxy ip...
but when I use an upstream directive, it doesn't:
upstream backend {
    server 01.02.03.04;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
When I visit my proxy computer, I am greeted with the default Nginx server page and not the content from the upstream ip address.
Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors. It just doesn't proxy.
Okay, looks like I found the answer...
Two things about the backend servers, at least for the above scenario when using IP addresses:
- a port must be specified
- the port cannot be :80 (according to #karliwsn the port can be 80; it's just that the upstream servers cannot listen on the same port as the reverse proxy. I haven't tested it yet, but it's good to note)
backend server block(s) should be configured as follows:
server {
    # for your reverse_proxy, *do not* listen to port 80
    listen 8080;
    listen [::]:8080;
    server_name 01.02.03.04;
    # your other statements below
    ...
}
and your reverse proxy server block should be configured as below:
upstream backend {
    server 01.02.03.04:8080;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
It looks as if, when a backend server is listening on :80, the reverse proxy server doesn't render its content. I guess that makes sense, since the server is in fact using the default port 80 for the general public.
Thanks #karliwson for nudging me to reconsider the port.
The following example works. The only thing to mention is that if the server IP is used as the "server_name", then the IP should be used to access the site: in the browser you type the URL as http://yyy.yyy.yyy.yyy (or http://yyy.yyy.yyy.yyy:80). If you use a domain name as the "server_name", then access the proxy server using that domain name (e.g. http://www.yourdomain.com).
upstream backend {
    server xxx.xxx.xxx.xxx:8080;
}
server {
    listen 80;
    server_name yyy.yyy.yyy.yyy;
    location / {
        proxy_pass http://backend;
    }
}
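The reason the server_name has to match is that nginx selects the server block by comparing the request's Host header against server_name, falling back to a default server otherwise. That match can be tested explicitly (a sketch; yyy.yyy.yyy.yyy stands for the proxy's address, as above):
# Send the request with an explicit Host header; if it matches
# server_name, this block (and thus the upstream) handles it,
# otherwise nginx falls back to its default server
curl -H "Host: yyy.yyy.yyy.yyy" http://yyy.yyy.yyy.yyy/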

Nginx load balancer keeps changing the original URL to the load-balanced URL

I have run into an annoying issue with the Nginx load balancer; please see the following configuration:
http {
    server {
        listen 3333;
        server_name localhost;
        location / {
            proxy_pass http://node;
            proxy_redirect off;
        }
    }
    server {
        listen 7777;
        server_name localhost;
        location / {
            proxy_pass http://auth;
            proxy_redirect off;
        }
    }
    upstream node {
        server localhost:3000;
        server localhost:3001;
    }
    upstream auth {
        server localhost:8079;
        server localhost:8080;
    }
}
So what I want is to provide two load balancers: one sends port 3333 to internal ports 3000 and 3001, and the second sends requests on port 7777 to internal ports 8079 and 8080.
When I test this setting, I notice that all requests to http://localhost:3333 work great, and the URL in the address bar stays that one, but when I visit http://localhost:7777, all the requests are redirected to the internal URLs, http://localhost:8080 or http://localhost:8079.
I don't know why there are two different effects for load balancing. I just want all visitors to see only http://localhost:3333 or http://localhost:7777; they should never see the internal ports 8080 or 8079.
But why are the node servers on ports 3000 and 3001 working fine, while the Java servers on ports 8080 and 8079 are not rewriting the URL but only redirecting?
As you can see from the configuration, the two are exactly the same.
Thanks.
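One likely explanation, sketched here as a hedged guess: with proxy_redirect off, nginx passes the backends' Location headers through untouched, so a backend that issues absolute redirects (as many Java servers do) leaks its internal host:port to the browser, while backends that send relative redirects do not. Mapping the internal addresses back to the public one would look like this (assuming the public address is http://localhost:7777, as in the question):
location / {
    proxy_pass http://auth;
    # Rewrite absolute Location headers from the backends back
    # to the public address instead of passing them through
    proxy_redirect http://localhost:8079/ http://localhost:7777/;
    proxy_redirect http://localhost:8080/ http://localhost:7777/;
}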

nginx "500 internal server error" on large request

I am sending a 14K request to my backend through nginx and I get the following error:
500 Internal Server Error
I am running nginx 1.6.2 and if I send my request directly to my backend, everything works fine and the request takes about 3-4 seconds round trip.
This is my nginx config:
$ cat /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}
http {
    proxy_temp_path /tmp/nginx;
    upstream my_servers {
        server <server1>:9000 down;
        server <server2>:9000 down;
        server <server3>:9000 down;
        server <server1>:9001;
        server <server2>:9001;
        server <server3>:9001;
    }
    server {
        access_log /var/log/nginx/access.log combined;
        listen 9080;
        location / {
            proxy_pass http://my_servers;
        }
    }
}
Any idea what is going on? I assume I can't be hitting any default timeouts at 3-4 seconds?
BTW, when I looked at the access log file, it was empty.
The issue was related to permissions for client_body_temp_path as described here:
https://wincent.com/wiki/Fixing_nginx_client_body_temp_permission_denied_errors
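For reference, the symptom matches what happens when the client body temp directory isn't writable by the worker processes: a request body larger than client_body_buffer_size gets spooled to disk, the write fails, and nginx returns a 500. A sketch of the fix follows; the directory and the www-data user are assumptions that vary by distro and build:
# Make the client body temp dir writable by the nginx worker user
# (path varies by build: often /var/lib/nginx/body or
# /var/cache/nginx/client_temp; the user is often www-data or nginx)
sudo chown -R www-data:www-data /var/lib/nginx
# The failed writes show up in the error log, not the access log:
sudo tail /var/log/nginx/error.log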

What does upstream mean in nginx?

upstream app_front_static {
    server 192.168.206.105:80;
}
I've never seen it before. Does anyone know what it means?
It's used for proxying requests to other servers.
An example from http://wiki.nginx.org/LoadBalanceExample is:
http {
    upstream myproject {
        server 127.0.0.1:8000 weight=3;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }
    server {
        listen 80;
        server_name www.domain.com;
        location / {
            proxy_pass http://myproject;
        }
    }
}
This means all requests for / go to any of the servers listed in the upstream block, with a preference for the server on port 8000 (because of its weight=3).
upstream defines a cluster that you can proxy requests to. It's commonly used for defining either a web server cluster for load balancing, or an app server cluster for routing / load balancing.
If we have a single server we can directly include it in the proxy_pass directive. For example:
server {
    ...
    location / {
        proxy_pass http://192.168.206.105:80;
        ...
    }
}
But if we have many servers, we use upstream to maintain them. Nginx will load-balance the incoming traffic across them, as shown in the example above.
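By default that distribution is round-robin; other methods can be chosen with standard directives inside the upstream block (a brief sketch using stock open-source nginx directives; the addresses and ports are illustrative):
upstream myproject {
    # pick the backend with the fewest active connections
    # instead of plain round-robin
    least_conn;
    server 127.0.0.1:8000 weight=3;  # gets ~3x its share of requests
    server 127.0.0.1:8001;
}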
