Good day!
The question may seem strange, but I am trying to understand whether this setup is feasible. I would be grateful for your time and an answer.
Components:
A virtual machine running nginx as a reverse proxy
An ingress load balancer inside the k8s cluster, listening on ports 80 and 443
A DNS record for *.k8s.test.lab
Can I use the following construction:
point the VM running nginx directly at the cluster nodes as a load balancer, bypassing the ingress load balancer in the k8s cluster?
Can I get the site's content at site.test.lab? If so, where do I need to make changes for that?
After applying the following configuration on the external nginx, I get a 502 Bad Gateway error:
[block diagram]
upstream loadbalancer {
    server srv-k8s-worker0.test.lab:80;
    server srv-k8s-worker1.test.lab:80;
    server srv-k8s-worker2.test.lab:80;
}

server {
    listen 80;
    server_name site.test.lab;  # name in browser
    location / {
        proxy_pass http://loadbalancer;
    }
}
Also, for verification, I created DNS records for:
srv-k8s-worker0.test.lab
srv-k8s-worker1.test.lab
srv-k8s-worker2.test.lab
In general, I need an answer about whether this configuration is possible at all and whether it makes sense. Using NodePort is not an option.
The only variant I managed to get working, which lets me change the domain name, is this:
server {
    listen 80;
    server_name site.test.lab;  # name in browser
    location / {
        proxy_pass http://site.k8s.test.lab;
    }
}
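I am also wondering whether the first upstream variant would work if the ingress controller received a Host header it can match against one of its rules. Below is a rough sketch I have not verified; it assumes the ingress has a rule for site.k8s.test.lab and that the workers really expose the ingress controller on port 80:

upstream loadbalancer {
    server srv-k8s-worker0.test.lab:80;
    server srv-k8s-worker1.test.lab:80;
    server srv-k8s-worker2.test.lab:80;
}

server {
    listen 80;
    server_name site.test.lab;  # name in browser

    location / {
        # pass a Host header the ingress rule can match
        # (assumption: the ingress rule is for site.k8s.test.lab)
        proxy_set_header Host site.k8s.test.lab;
        proxy_pass http://loadbalancer;
    }
}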
Thanks!
In my use case, I need to set up a load balancer, which can be nginx or something similar that supports TCP load balancing, connected to my backend service.
I want to do this in an active/passive manner. I can have 5 load balancer instances in a Docker environment and 5 backend service instances, let's say NFS (maybe not in Docker).
Now I want lb1 (load balancer 1) to route requests only to nfs1; if nfs1 is down, it should route to nfs2, then nfs3, and so on.
lb1 ----- nfs1
lb2 ----- nfs2
:
:
lb5 ----- nfs5
I have tried it with nginx, but with the backup keyword I only got a simple two-server active/passive setup, not an ordered failover chain:
events {
    worker_connections 1024;
}

stream {
    upstream stream_backend {
        server 172.17.0.5:2049;
        server 172.17.0.7:2049 backup;
    }

    server {
        listen 80;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_backend;
    }
}
Any help would be great.
I was able to solve my issue by using HAProxy.
defaults
    mode tcp

frontend haproxy
    bind *:80
    mode tcp
    timeout client 1s
    default_backend nfs

backend nfs
    mode tcp
    timeout connect 1s
    timeout server 1s
    server nfs1 172.17.0.7:2049 check
    server nfs2 172.17.0.5:2049 check backup
    server nfs3 172.17.0.8:2049 check backup
credits: https://www.haproxy.com/blog/failover-and-worst-case-management-with-haproxy/
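Note on ordering: by default HAProxy uses only the first operational backup server, so with the configuration above traffic goes to nfs1 while it is healthy, then to nfs2, and only then to nfs3, which is exactly the ordered failover chain asked for. If you ever wanted all backups load-balanced at once instead, there is the "option allbackups" setting.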
I would like to set up nginx to route requests to different servers depending on the requested domain.
The nginx server environment is below.
CentOS Linux release 7.3.1611 (Core)
nginx 1.11.8
* configured with the --with-stream parameter; built and installed from source.
What I have in mind:
server1.testdomain.com ssh request ->(global IP) *nginx server -> (local IP)192.168.1.101 server
server2.testdomain.com ssh request ->(global IP) *nginx server -> (local IP)192.168.1.102 server
The nginx server has a single global IP and is the same server for both domains.
nginx.conf is ...
stream {
    error_log /usr/local/nginx/logs/stream.log info;

    upstream server1 {
        server 192.168.1.101:22;
    }
    upstream server2 {
        server 192.168.1.102:22;
    }

    server {
        listen 22 server1.testdomain.com;
        proxy_pass server1;
    }
    server {
        listen 22 server2.testdomain.com;
        proxy_pass server2;
    }
}
But...
nginx: [emerg] the invalid "server1.testdomain.com" parameter in ...
This error occurred. It seems it is impossible to use something like listen "22 server1.testdomain.com".
And I tried writing "server_name" inside "server":
nginx: [emerg] "server_name" directive is not allowed here in ...
So "server_name" is not permitted inside "server" in the stream context.
How do I write the config file so that requests for different domains are routed to different servers?
If you have any ideas or information, please let me know.
It's not possible with nginx, because the stream module is an L4 balancer, while the SSH protocol works at L5/L7.
In fact it's not possible at all, because the SSH negotiation does not include the destination host name.
You can do what you want only by using two different IPs or two different ports. In both cases nginx can forward the connection, but iptables is a much better fit for this.
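For illustration, a minimal sketch of the two-port variant using the nginx stream module (the listen ports 2201 and 2202 are arbitrary examples; the backend addresses are the ones from the question):

stream {
    # ssh <user>@<global IP> -p 2201 ends up on the first server
    server {
        listen 2201;
        proxy_pass 192.168.1.101:22;
    }

    # ssh <user>@<global IP> -p 2202 ends up on the second server
    server {
        listen 2202;
        proxy_pass 192.168.1.102:22;
    }
}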
I am trying to set up RabbitMQ so that it can be accessed externally (from non-localhost) through nginx.
nginx-rabbitmq.conf:
server {
    listen 5672;
    server_name x.x.x.x;

    location / {
        proxy_pass http://localhost:55672/;
    }
}
rabbitmq.conf:
[
  {rabbit,
    [
      {tcp_listeners, [{"127.0.0.1", 55672}]}
    ]
  }
].
By default the guest user can only connect from localhost, so we need to create another user with the required permissions, like so:
sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"
However, when I attempt a connection to RabbitMQ through pika, I get a ConnectionClosed exception:
import pika

credentials = pika.credentials.PlainCredentials('my_user', 'my_password')
pika.BlockingConnection(
    pika.ConnectionParameters(host=ip_address, port=5672, credentials=credentials)
)
--[raises ConnectionClosed exception]--
If I use the same parameters but change the host to localhost and the port to 55672 (RabbitMQ's own listener), then I connect OK:
pika.ConnectionParameters(host='localhost', port=55672, credentials=credentials)
I have opened port 5672 on the GCE web console, and communication through nginx is happening: nginx access.log file shows
[30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"
Which shows a 400 status code response (bad request).
So by the looks of it, the request fails when going through nginx but works when we talk to RabbitMQ directly.
Has anyone else had similar problems or got RabbitMQ working for external users through nginx? Is there a RabbitMQ log file where I can see each request, to help with further troubleshooting?
Since nginx 1.9 there is the stream module for TCP or UDP proxying (it is not compiled in by default).
I configured my nginx (1.13.3) with an SSL stream:
stream {
    upstream rabbitmq_backend {
        server rabbitmq.server:5672;
    }

    server {
        listen 5671 ssl;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_handshake_timeout 30s;
        ssl_certificate /path/to.crt;
        ssl_certificate_key /path/to.key;

        proxy_connect_timeout 1s;
        proxy_pass rabbitmq_backend;
    }
}
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
You have configured nginx as an HTTP reverse proxy; however, RabbitMQ is configured to use the AMQP protocol (see the description of tcp_listeners at https://www.rabbitmq.com/configure.html).
In order for nginx to do anything meaningful you will need to reconfigure rabbitmq to use HTTP - for example http://www.rabbitmq.com/web-stomp.html.
Of course, this may have a ripple effect because any clients that are accessing rabbitmq via AMQP must be reconfigured/redesigned to use HTTP.
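For example, if you enable the Web STOMP plugin, the nginx side might look roughly like the sketch below. This is only an illustration: the port 15674 and the /ws path are what I recall as the plugin's defaults, so verify them against your RabbitMQ version.

server {
    listen 80;
    server_name x.x.x.x;

    # sketch: forward WebSocket STOMP traffic to RabbitMQ's Web STOMP plugin
    location /ws {
        proxy_pass http://localhost:15674/ws;

        # WebSocket upgrade handling
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}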
You can also try proxying at the TCP level instead, by installing a TCP proxy module for nginx so that it can carry AMQP.
https://github.com/yaoweibin/nginx_tcp_proxy_module
Give it a go.
Nginx was originally only an HTTP server. I also suggest looking into the TCP proxy module referred to above, but if you would like a proven load balancer that is a general-purpose TCP reverse proxy (not just HTTP; it can handle any protocol), you might consider using HAProxy.
Since AMQP works at the TCP level, you need to configure nginx for TCP/UDP (stream) proxying:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer
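For instance, with the listeners from the question, a bare-bones unencrypted passthrough could look roughly like this. It is an untested sketch; it assumes nginx runs on the same host as RabbitMQ, the stream block lives at the same level as http in nginx.conf (not inside it), and the earlier answer shows the TLS variant:

stream {
    upstream rabbitmq_backend {
        # RabbitMQ's AMQP listener from the question's rabbitmq.conf
        server 127.0.0.1:55672;
    }

    server {
        # external clients connect here with plain AMQP
        listen 5672;
        proxy_pass rabbitmq_backend;
    }
}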
I might be late to the party, but I am quite sure this article will help a lot of people going forward.
In the article I explain how to install a Let's Encrypt certificate for the RabbitMQ management GUI, with NGINX as a reverse proxy in front of port 15672, which uses the HTTP protocol.
I have also used the same SSL certificates for the RabbitMQ server itself, which runs the AMQP protocol.
Kindly go through the following article for detailed description:
https://stackcoder.in/posts/install-letsencrypt-ssl-certificate-for-rabbitmq-server-and-rabbitmq-management-tool
NOTE: Don't put the RabbitMQ server running on port 5672 behind an HTTP reverse proxy. If you must proxy it, use NGINX streams. But I highly recommend sticking with adding the certificate paths in the rabbitmq.conf file instead, as RabbitMQ speaks raw TCP (AMQP), not HTTP.
I'm trying to restrict direct access to elasticsearch on port 9200, but allow Nginx to proxy pass to it.
This is my config at the moment:
server {
    listen 80;
    return 301;
}

server {
    listen *:5001;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /var/data/nginx-elastic/.htpasswd;
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
    }
}
This almost works as I want it to. I can access my server on port 5001 to hit elasticsearch and must enter credentials as expected.
However, I'm still able to hit :9200 and avoid the HTTP authentication, which defeats the point. How can I prevent access to this port, without restricting nginx? I've tried this:
server {
    listen *:9200;
    return 404;
}
But I get:
nginx: [emerg] bind() to 0.0.0.0:9200 failed (98: Address already in use)
as it conflicts with elasticsearch.
There must be a way to do this! But I can't think of it.
EDIT:
I've edited based on a comment and summarised the question:
I want to lock down <serverip>:9200, and basically only allow access through port 5001 (which is behind HTTP Auth). 5001 should proxy to 127.0.0.1:9200 so that elasticsearch is accessible only through 5001. All other access should 404 (or 301, etc).
Add this to your ES config to ensure it only binds to localhost:
network.host: 127.0.0.1
http.host: 127.0.0.1
Then ES is only accessible from localhost and not from the outside world, while your existing proxy_pass to http://127.0.0.1:9200 on port 5001 keeps working.
Make sure this is really the case using the tools of your OS, e.g. on Unix:
$ netstat -an | grep -i 9200
tcp4 0 0 127.0.0.1.9200 *.* LISTEN
In any case, I would lock down the machine using the OS firewall to really only allow the ports you want, rather than relying on proper binding alone. Why is this important? Because ES also runs its cluster communication on another port (9300), and evildoers might just connect there.