NGINX: Proxy pass problem in local network - nginx

I'm trying to use nginx as a load balancer for my HIDS system (Wazuh). I have some agents that send logs from outside my network and some from inside, through UDP port 1514.
From the agents outside I have no connection problems, but from inside they are unable to connect to the manager through UDP port 1514. No firewall is enabled on the Nginx LB (a CentOS 7 machine, by the way) and SELinux is disabled.
Can someone tell me how to figure out what's wrong?
Here is my nginx configuration:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 10000;
}

stream {
    upstream master {
        server 10.0.0.7:1515;
    }

    upstream mycluster {
        hash $remote_addr consistent;
        server 10.0.0.7:1514;
        server 10.0.0.6:1514;
    }

    server {
        listen 1515;
        proxy_pass master;
    }

    server {
        listen 1514 udp;
        proxy_pass mycluster;
    }

    #error_log /var/log/nginx/error.log debug;
}

If you want to configure an NGINX service to forward the Wazuh agents' events to the Wazuh manager, I would recommend taking a look at the following documentation page, which explains step by step how to achieve this on Linux: https://wazuh.com/blog/nginx-load-balancer-in-a-wazuh-cluster/
Your configuration seems to be valid. However, I would recommend making sure that the stream module is actually being loaded, or applying this configuration directly in the main Nginx configuration file. Also, make sure that you apply the configuration by restarting the service.
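If it still fails only for the inside agents, a quick sanity check on the load balancer itself can help narrow things down. This is just a sketch; the 10.0.0.5 address stands in for your LB's internal IP, and ss/netcat are assumed to be installed:
nginx -T 2>/dev/null | grep -B2 -A2 'listen 1514'   # confirm the stream block is really loaded
ss -lunp | grep 1514                                # confirm nginx is bound to UDP 1514
# from an inside agent, send a UDP probe towards the LB and watch the manager/nginx logs
echo test | nc -u 10.0.0.5 1514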

Related

Can't configure subdomains in Nginx

It's been 4 hours of struggle and internet digging, and I can't seem to understand why this Nginx configuration doesn't work.
AIM:
I have two completely different projects which I would like to host on the same domain using subdomains. So project one would be one.example.com, while project two would be two.example.com. I have also set up two different Node.js servers listening on ports 4000 and 4001, and I would like project one routed to 4000 and project two to 4001.
server {
    listen 80;
    server_name one.example.com;

    location / {
        proxy_pass http://127.0.0.1:4000/;
    }
}

server {
    listen 80;
    server_name two.example.com;

    location / {
        proxy_pass http://127.0.0.1:4001/;
    }
}
Used Command: service nginx start
And the error I get
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
In order to fix my issue I had to go to /etc/nginx/nginx.conf and add this line to the top of the http block: server_names_hash_bucket_size 64;
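For reference, this is roughly where that line ends up; a sketch of the top of the http block in /etc/nginx/nginx.conf (the include lines shown are the usual defaults and may differ on your distribution):
http {
    server_names_hash_bucket_size 64;

    # ...existing http-level settings stay as they were...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}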

docker swarm mode nginx between 2 swarm clusters

I'm trying to create a proxy in front of a swarm cluster.
This proxy is inside of another swarm cluster, to provide HA.
This is the current structure:
Proxy cluster (ip range 192.168.98.100 ~ 192.168.98.102)
proxy-manager1;
proxy-worker1;
proxy-worker2;
App cluster (ip range 192.168.99.100 ~ 192.168.99.107)
app-manager1;
app-manager2;
app-manager3;
app-worker1;
app-worker2;
app-worker3;
app-worker4;
app-worker5;
When I configure nginx with the app-manager's IP address, the proxy redirection works well.
But when I configure nginx with the app-manager's hostname or DNS name, the proxy service cannot find the server to redirect to.
This is the working config file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server 192.168.99.100:8080;
        server 192.168.99.101:8080;
        server 192.168.99.102:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
        }
    }
}
Is this good practice?
Or am I doing something wrong?
You have to make sure that nginx can resolve the hostnames to IP addresses. Check this out for more information: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
Check how nginx is resolving the hostnames, or check the hosts files on the nginx servers.
On a side note, I sometimes run djbdns (https://cr.yp.to/djbdns.html) for local DNS and then make sure that all the nginx servers use that DNS service. It is easier to configure djbdns than to keep all the hosts files updated.
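If you want open-source nginx to re-resolve the app-manager hostnames at runtime rather than only once when the config is loaded, one common workaround is a resolver directive plus a variable in proxy_pass. A sketch, assuming a DNS server at 192.168.98.2 (illustrative) that knows the app cluster's names:
http {
    resolver 192.168.98.2 valid=10s;

    server {
        listen 80;

        location / {
            # using a variable forces nginx to resolve the name per request
            # through the resolver above instead of once at startup
            set $app_backend app-manager1;
            proxy_pass http://$app_backend:8080;
        }
    }
}
Note that this bypasses the upstream block, so spreading load across the three managers would have to come from DNS itself (e.g. round-robin records) or from the NGINX Plus resolve feature described in the linked article.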

Dynamically listen on an nginx port

I'd like to pass a port to Nginx to listen on dynamically. So I can write something like:
PORT=4567 nginx -c $PWD/nginx.conf
and then have an nginx config that looks something like:
http {
    server {
        listen $PORT;
    }
}
and have nginx listen on the specified port. I tried compiling nginx with lua support, and writing:
events {
    worker_connections 200;
}

env SERVER_PORT;

http {
    server {
        set_by_lua_block $sp {
            return os.getenv("SERVER_PORT");
        }
        listen $sp;
        root /Users/kevin/code/nginx-testing;
    }
}
But this doesn't work either; $sp doesn't get defined until the rewrite phase.
Are there any options here, or am I resigned to rewriting the config file via sed or similar before starting nginx?
Kevin
The listen directive does not support nginx variables or environment variables, so nginx cannot listen on a port dynamically this way.
Dynamic listening via an environment variable is technically feasible, but you would have to modify the nginx core.
It cannot be implemented via nginx variables, because nginx must already be listening on a specific port before it receives any HTTP requests (the nginx variable system operates on HTTP requests).
You can write a script that rewrites the listen directive before starting nginx, which is a workable, if not elegant, way to implement a dynamic listen port.
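As an illustration of that scripted approach, a minimal sketch assuming a hypothetical template file nginx.conf.template whose server block contains listen __PORT__;:
PORT=4567
sed "s/__PORT__/${PORT}/g" nginx.conf.template > nginx.conf
nginx -c "$PWD/nginx.conf"
The same idea works with envsubst if you prefer ${PORT}-style placeholders, as long as you only substitute the variables you intend to and leave nginx's own $variables alone.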

nginx best practices for reloading servers

I have an nginx config which has:
events {
    worker_connections 1024;
}

http {
    upstream myservers {
        server server1.com:9000;
        server server2.com:9000;
        server server3.com:9000;
    }

    server {
        access_log /var/log/nginx/access.log combined;
        listen 9080;

        location / {
            proxy_pass http://myservers;
        }
    }
}
I need to reload the servers, and the method I am using is to bring up the new servers on port 9001 and then run nginx -s reload with the following modification to the config:
upstream myservers {
    server server1.com:9000 down;
    server server2.com:9000 down;
    server server3.com:9000 down;
    server server1.com:9001;
    server server2.com:9001;
    server server3.com:9001;
}
Then I bring down the old servers. However, before doing so, I need to make sure that all workers that were handling requests to the old servers are done. How do I check this? Also, is this the best way to reload backend servers with the free version of nginx?
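One way to watch the old workers drain (a sketch, assuming a Linux host with pgrep available): after nginx -s reload, the old worker processes keep serving their in-flight requests and their process title changes to "worker process is shutting down" until they exit.
nginx -s reload
# wait until no old workers are left draining connections
while pgrep -f 'nginx: worker process is shutting down' > /dev/null; do
    sleep 1
done
echo "old workers gone; safe to stop the port-9000 backends"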

nginx directive is not allowed here in unicorn's example nginx.conf

I'm using nginx 1.4.1. After copying unicorn's example nginx.conf, I found out the settings must be moved into different directives. I still couldn't manage to place the following settings in the nginx.conf file: worker_processes, user, pid, and the events block. When I place them as they are now, the log shows "directive is not allowed here". What should I fix?
worker_processes 1;
user deployer sudo; # for systems with a "nogroup"
pid /run/nginx.pid;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # "on" if nginx worker_processes > 1
}

upstream abc {
    ...
}

server {
    ...
}
Update 1
I know about this post, but it's weird that whatever I am doing is not working. I couldn't find anything about this in the nginx docs.
The original example cannot be used directly, because the main configuration lives at /etc/nginx/nginx.conf. That file already contains the http block, which includes the sites-enabled/* files. The only changes to make in /etc/nginx/nginx.conf are:
worker_processes 4;
worker_connections 1024;
Also, remove text/html from the gzip_types line, because text/html is already gzipped by default.
The end result is that the nginx config for your app (the file in sites-enabled) should contain no http-level directives, just the upstream and server blocks.
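For instance, a sketch of how the app-level file might look once the http-level settings are stripped out (the path, server_name, and unicorn socket location are illustrative and need adjusting to your deploy):
# /etc/nginx/sites-enabled/myapp -- only upstream and server here, no user/pid/events/http
upstream abc {
    # assumes unicorn listens on a unix socket; use 127.0.0.1:PORT if it listens on TCP
    server unix:/path/to/app/shared/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;
    root /path/to/app/public;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://abc;
    }
}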
