Load on one server is high when using Nginx

I have a client which continuously sends HTTP requests to an Nginx server. Nginx proxies them upstream to 4 servers, but load balancing is not working well: one particular server receives 50% of the requests. My nginx config is below.
upstream cmdc {
    server cmdc2b:5600 max_fails=3 fail_timeout=30s;
    server cmdc2a:5600 max_fails=3 fail_timeout=30s;
    server cmdc1d:5600 max_fails=3 fail_timeout=30s;
    server cmdc1c:5600 max_fails=3 fail_timeout=30s;
    keepalive 30;
}
Can someone help me here? Does any other parameter affect this?
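One thing worth trying (a sketch, not a confirmed fix for this setup): the default round-robin method can interact badly with long-lived keepalive connections, so distributing by active connection count with least_conn may even things out. Note that a non-default balancing method must be declared before the keepalive directive:

```nginx
upstream cmdc {
    # balance by number of active connections instead of plain round-robin;
    # must come before "keepalive" to take effect
    least_conn;
    server cmdc2b:5600 max_fails=3 fail_timeout=30s;
    server cmdc2a:5600 max_fails=3 fail_timeout=30s;
    server cmdc1d:5600 max_fails=3 fail_timeout=30s;
    server cmdc1c:5600 max_fails=3 fail_timeout=30s;
    keepalive 30;
}
```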

Related

Nginx UDP load balance not notifying when one of the servers is down

I use Nginx (not Nginx Plus) and Fluent Bit in one scenario.
Requests are sent to a UDP port on Nginx, which forwards them round-robin, so all requests are proxied to the servers in the fluentbit_upstreams upstream.
Fluent Bit does not return anything by default, so Nginx cannot notice that any of the servers are down.
I used fail_timeout and max_fails but they didn't help.
upstream fluentbit_upstreams {
    server fluentBitA.dev:5555 weight=1 fail_timeout=30s max_fails=1;
    server fluentBitB.dev:6666 weight=1 fail_timeout=30s max_fails=1;
}
server {
    listen 13149 udp;
    proxy_pass fluentbit_upstreams;
    proxy_responses 1;
    error_log /var/log/nginx/udp.log;
}
How can this problem be solved? How can Nginx notice that one of the servers is down?
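The difficulty is that with UDP the passive health checks behind max_fails only have something to observe when the upstream is expected to reply; if Fluent Bit never sends a response, nginx sees every missing reply as a timeout regardless of whether the server is up. Active health checks are an NGINX Plus feature, so with open-source nginx one workaround (a sketch, accepting that down servers then go undetected at this layer) is to stop waiting for replies at all:

```nginx
server {
    listen 13149 udp;
    proxy_pass fluentbit_upstreams;
    # Fluent Bit sends no replies by default, so don't wait for any;
    # note this also removes the signal that max_fails relies on
    proxy_responses 0;
    error_log /var/log/nginx/udp.log;
}
```

If detecting dead servers matters, it is usually better to configure Fluent Bit (or an external checker) to monitor the nodes directly rather than rely on the UDP proxy.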

How can I dynamically create an upstream block with OpenResty and Lua?

I need to create an upstream block in default.conf dynamically with OpenResty and Lua, roughly like this pseudocode:
upstream my_gateway{
for server in #SERVERS_ENV:
server xxx.xxx.xxx.xx:yyyy max_fails=3 fail_timeout=30s;
}
ngx.balancer is what you are looking for.
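For illustration, here is a minimal sketch of how ngx.balancer is typically wired up: the upstream keeps a placeholder server, and a balancer_by_lua_block picks the real peer per request. How the host list is obtained (the ngx.ctx fields below, e.g. parsed earlier from an environment variable) is an assumption, not part of the question:

```nginx
upstream my_gateway {
    server 0.0.0.1;   # placeholder, never actually used
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- choose a backend here; these ngx.ctx fields are hypothetical
        -- and would be set in an earlier phase (e.g. access_by_lua)
        local host = ngx.ctx.backend_host or "127.0.0.1"
        local port = ngx.ctx.backend_port or 5600
        local ok, err = balancer.set_current_peer(host, port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
            return ngx.exit(500)
        end
    }
}
```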

Does nginx support TCP and HTTP proxying on a single instance?

I have a requirement where a single nginx instance acts as a VIP for HTTP requests and also for a MySQL DB cluster.
Is it possible to put both configs under one host? The TCP syntax looks different from the HTTP one. Please help me with a sample config that works.
This worked for me.
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
http {
    # http content

    upstream servers {
        server server1;
        server server2 backup;
    }
}
stream {
    upstream mygroup {
        least_conn;
        server db_master:3309;
        server db_slave:3309;
    }
    server {
        listen 3309;
        proxy_pass mygroup;
        proxy_timeout 5s;
        proxy_connect_timeout 1s;
    }
}

How to proxy RMI calls with nginx

Does anyone know how to proxy RMI with nginx (v1.9+)?
My current nginx server block:
stream {
    upstream QA1 {
        server 10.168.85.39:30900;
    }
    upstream QA2 {
        server 10.51.67.17:30900;
    }
    server {
        listen 30900;
        proxy_pass QA1;
    }
    server {
        listen 30901;
        proxy_pass QA2;
    }
}
I'm getting a timeout error on the client side. The likely cause is that RMI stubs embed the real server's host and port, so after the initial lookup the client tries to connect directly and bypasses the proxy.
My current solution is to convert RMI to HTTP and proxy the HTTP traffic with Nginx.

502 bad gateway on backend servers (nginx upstream module)

I have a 502 Bad Gateway problem when the upstream proxies to the NGINX backend servers (nodes).
If I use NGINX on the backend servers I get a 502 Bad Gateway; if I use the Apache web server there are no errors. It doesn't matter whether I use IPs instead of domains, for example server 192.101.876.76:8081.
nginx.conf
http {
    upstream appserver {
        server backend1.example.com; # with an NGINX backend this gives a 502 Bad Gateway; with Apache it works fine
        server backend2.example.com; # with an NGINX backend this gives a 502 Bad Gateway; with Apache it works fine
    }

    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://appserver;
        }
    }
}
If I use the Apache web server on backend1.example.com and backend2.example.com then it works, but I would like to use nginx because it's more reliable and faster than Apache.
Why doesn't it work with an NGINX backend server?
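A 502 here usually means the proxy's connection to the backend was refused, reset, or answered invalidly, so the first things to check (a debugging sketch, not a confirmed diagnosis for this setup) are the backend's own error log and whether the backend nginx is actually listening on the port the upstream uses. Forwarding the original Host header is also a common fix when the backend vhost depends on it:

```nginx
location / {
    proxy_pass http://appserver;
    # forward the original Host so the backend nginx matches the
    # intended server block; also confirm the backend's "listen"
    # port matches what the upstream entries point at
    proxy_set_header Host $host;
}
```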
