I'm trying to create a proxy in front of a swarm cluster.
This proxy is inside of another swarm cluster, to provide HA.
This is the current structure:
Proxy cluster (ip range 192.168.98.100 ~ 192.168.98.102)
proxy-manager1;
proxy-worker1;
proxy-worker2;
App cluster (ip range 192.168.99.100 ~ 192.168.99.107)
app-manager1;
app-manager2;
app-manager3;
app-worker1;
app-worker2;
app-worker3;
app-worker4;
app-worker5;
When I configure nginx with an app manager's IP address, the proxy redirection works well.
But when I configure nginx with the app manager's hostname or DNS name, the proxy service cannot find the server to redirect to.
This is the working config file:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server 192.168.99.100:8080;
        server 192.168.99.101:8080;
        server 192.168.99.102:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
        }
    }
}
Is that good practice?
Or am I doing something wrong?
You have to make sure that nginx can resolve the hostnames to IP addresses. Check this out for more information: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
Check how nginx is resolving the hostnames, or check the hosts files on the nginx servers.
On a side note, I sometimes run djbdns (https://cr.yp.to/djbdns.html) as the local DNS and then make sure that all the nginx servers use that DNS service. It is easier to configure djbdns than to keep all the hosts files updated.
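If the hostnames come from a DNS server rather than hosts files, nginx can also be told which resolver to use and to re-resolve at runtime. A minimal sketch, assuming a local DNS service on 127.0.0.1 and a hypothetical hostname app-manager1 (both are assumptions, not from the question):

```nginx
http {
    # Ask this DNS server (e.g. a local djbdns/dnscache instance) and
    # re-check the answer every 10 seconds
    resolver 127.0.0.1 valid=10s;

    server {
        listen 80;

        location / {
            # Using a variable forces nginx to resolve the name at
            # request time instead of only once at startup
            set $app_backend app-manager1;
            proxy_pass http://$app_backend:8080;
        }
    }
}
```

Note that names inside an upstream block are resolved only when nginx starts (periodic re-resolution of upstreams is an NGINX Plus feature, as the linked article describes), which is why the variable form is used here.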
I have set up a WordPress website inside a Docker container, and it works perfectly fine inside the local network.
To access the webserver from outside the internal network I am using an nginx reverse proxy (in Docker as well), but proxying to the WordPress container does not work, and unfortunately I have no clue why. I host a bunch of other services through the same proxy with no problem.
When I change the internal IP to another IP address where a server is running, it works, so I guess the problem is related to WordPress.
Here my nginx-config-file:
server {
    set $forward_scheme http;
    set $server "192.168.2.2";
    set $port 8000;

    listen 80;
    server_name mydomain.com;

    access_log /data/logs/proxy-host-12_access.log proxy;
    error_log /data/logs/proxy-host-12_error.log warn;

    location / {
        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
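The contents of conf.d/include/proxy.conf are not shown above; a typical version (an illustrative assumption, not the asker's actual file) forwards the Host and client-address headers, which WordPress in particular relies on to generate correct URLs:

```nginx
# Hypothetical conf.d/include/proxy.conf - illustrative only.
# Uses the $forward_scheme/$server/$port variables set in the
# server block above.
proxy_pass       $forward_scheme://$server:$port;
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```

If headers like these are missing, WordPress may redirect to its internal address, which would match the symptom of the proxy working for other backends but not this one.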
I am thankful for any advice!
I'm trying to use nginx as a load balancer for my HIDS system (Wazuh). I have some agents that send logs from outside my network and some from inside, through UDP port 1514.
From the agents outside I have no connection problem, but from inside they are unable to connect to the manager through UDP port 1514. No firewall is enabled on the nginx LB (a CentOS 7 machine, by the way) and SELinux is disabled.
Can someone tell me how I can figure out what's wrong?
Here is my nginx configuration:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 10000;
}

stream {
    upstream master {
        server 10.0.0.7:1515;
    }

    upstream mycluster {
        hash $remote_addr consistent;
        server 10.0.0.7:1514;
        server 10.0.0.6:1514;
    }

    server {
        listen 1515;
        proxy_pass master;
    }

    server {
        listen 1514 udp;
        proxy_pass mycluster;
    }

    #error_log /var/log/nginx/error.log debug;
}
If you want to configure an NGINX service to forward the Wazuh agents' events to the Wazuh manager server, I would recommend taking a look at the following documentation page, which explains step by step how to achieve this on Linux: https://wazuh.com/blog/nginx-load-balancer-in-a-wazuh-cluster/
Your configuration seems to be valid. However, I would recommend making sure that the stream module is actually being loaded, or applying this configuration directly in the main nginx configuration file. Also, make sure that you apply the configuration by restarting the service.
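For example, if the wildcard modules include above does not pick up the stream module, it can be loaded explicitly at the top of nginx.conf. A sketch (the module path is an assumption and varies by distribution):

```nginx
# Explicitly load the stream module if the wildcard include misses it.
# The path is distribution-dependent; on CentOS 7 packages it is
# typically under /usr/lib64/nginx/modules/. This line must appear
# before the events/http/stream blocks.
load_module /usr/lib64/nginx/modules/ngx_stream_module.so;
```

Running `nginx -t` afterwards will report whether the stream blocks are now recognized.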
I am trying to protect the URL of my Kibana server with a password.
If I type http://192.168.1.2 in the browser, I am getting prompted for a username/password, but if I query the port 5601 directly via http://192.168.1.2:5601 then I can bypass the nginx proxy auth.
Note that both nginx and Kibana run on the same server.
I tried different combinations of "localhost" "0.0.0.0" or "127.0.0.1" as the listening source address but none of them worked. I can still easily bypass the proxy.
What am I doing wrong?
Here's my /etc/nginx/nginx.conf file:
server {
    listen 192.168.1.2:80;
    server_name 192.168.1.2;

    location / {
        proxy_pass http://192.168.1.2:5601;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
NGINX only listens on port 80 and does not prevent access to your application on port 5601. You should instead use a firewall to block access to the port itself. You could:
Place your server behind a firewall such as a router (blocks out all external network requests)
Install a firewall, like UFW, on the server itself.
I'm having trouble figuring out load balancing on Nginx. I'm using:
- Ubuntu 16.04 and
- Nginx 1.10.0.
In short, when I pass my ip address directly into "proxy_pass", the proxy works:
server {
    location / {
        proxy_pass http://01.02.03.04;
    }
}
When I visit my proxy computer, I can see the content from the proxy ip...
but when I use an upstream directive, it doesn't:
upstream backend {
    server 01.02.03.04;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
When I visit my proxy computer, I am greeted with the default Nginx server page and not the content from the upstream ip address.
Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors. It just doesn't proxy.
Okay, it looks like I found the answer...
Two things about the backend servers, at least for the above scenario when using IP addresses:
a port must be specified
the port cannot be :80 (according to #karliwsn the port can be 80; it's just that the upstream servers cannot listen on the same port as the reverse proxy. I haven't tested it yet, but it's good to note).
The backend server block(s) should be configured as follows:
server {
    # for your reverse_proxy, *do not* listen to port 80
    listen 8080;
    listen [::]:8080;

    server_name 01.02.03.04;

    # your other statements below
    ...
}
and your reverse proxy server block should be configured like below:
upstream backend {
    server 01.02.03.04:8080;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
It looks like, if a backend server is listening on :80, the reverse proxy server doesn't render its content. I guess that makes sense, since that server is in fact using the default port 80 for the general public.
Thanks #karliwson for nudging me to reconsider the port.
The following example works.
The only thing to mention: if the server IP is used as the "server_name", then the IP should be used to access the site; that is, in the browser you type the URL as http://yyy.yyy.yyy.yyy (or http://yyy.yyy.yyy.yyy:80). If you use a domain name as the "server_name", then access the proxy server using the domain name (e.g. http://www.yourdomain.com).
upstream backend {
    server xxx.xxx.xxx.xxx:8080;
}

server {
    listen 80;
    server_name yyy.yyy.yyy.yyy;

    location / {
        proxy_pass http://backend;
    }
}
My question is similar to this question but with only one domain.
Is it possible to run multiple docker containers on the same server, all of them on port 80, but with different URL paths?
For example:
Internally, all applications are hosted on the same docker server.
172.17.0.1:8080 => app1
172.17.0.2:8080 => app2
172.17.0.3:8080 => app3
Externally, users will access the applications with the following URLs:
www.mydomain.com (app1)
www.mydomain.com/app/app2 (app2)
www.mydomain.com/app/app3 (app3)
I solved this issue with an nginx reverse proxy.
Here's the Dockerfile for the nginx container:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
And this is the nginx.conf:
# nginx requires an events block even when it is empty
events {}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://app1:5001/;
        }

        location /api/ {
            proxy_pass http://app2:5000/api/;
        }
    }
}
I then stood up the nginx, app1, and app2 containers inside the same docker network.
Make sure to include the trailing / in the location and proxy paths, otherwise nginx will return a '502: Bad Gateway'.
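The trailing-slash rule comes from how nginx rewrites the URI when proxy_pass carries a path. A minimal sketch of the three variants (hostnames and ports assumed, matching the example above):

```nginx
# With a URI part in proxy_pass, the matched location prefix is replaced
# by that URI:
#   request /api/users  ->  forwarded as /api/users
location /api/ {
    proxy_pass http://app2:5000/api/;
}

# With only "/" as the URI part, the /api/ prefix is stripped:
#   request /api/users  ->  forwarded as /users
# location /api/ {
#     proxy_pass http://app2:5000/;
# }

# With no URI part at all, the original path is passed through unchanged:
#   request /api/users  ->  forwarded as /api/users
# location /api/ {
#     proxy_pass http://app2:5000;
# }
```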
All requests go through the docker host on port 80, which hands them off to the nginx container, which then forwards them onto the app containers based on the url path.
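The "same docker network" setup can be sketched with a compose file (service names and image names here are assumptions, not from the answer; what matters is that the service names match the hostnames used in proxy_pass):

```yaml
# docker-compose.yml - illustrative sketch only
services:
  nginx:
    build: .            # builds the nginx Dockerfile shown above
    ports:
      - "80:80"         # only nginx is published on the host
  app1:
    image: app1-image   # hypothetical image names
  app2:
    image: app2-image
# All services join the default compose network, so the nginx container
# can reach the apps as http://app1:5001 and http://app2:5000
```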