External Load Balancer Nginx to Ingress on k8s - nginx

Good day!
The question may seem strange, but I am trying to understand the situation and whether it can be implemented at all. I would be grateful for your answer and your time.
Components:
a virtual machine running nginx as a reverse proxy
an ingress load balancer running on ports 80 and 443 in the k8s cluster
a DNS entry for *.k8s.test.lab
Can I use the following construction: turn the VM running nginx into a load balancer that proxies directly to the worker nodes, bypassing the ingress load balancer in the k8s cluster?
Can I get the content of site.test.lab this way? If so, where do I need to make changes?
With the following configuration on the external nginx, I get a 502 Bad Gateway error:
[block scheme]
upstream loadbalancer {
    server srv-k8s-worker0.test.lab:80;
    server srv-k8s-worker1.test.lab:80;
    server srv-k8s-worker2.test.lab:80;
}
server {
    listen 80;
    server_name site.test.lab; # name typed in the browser
    location / {
        proxy_pass http://loadbalancer;
    }
}
Also, for verification, I created DNS records for:
srv-k8s-worker0.test.lab
srv-k8s-worker1.test.lab
srv-k8s-worker2.test.lab
In general, I need an answer about whether this configuration is possible at all and whether it makes sense. Using NodePort is not an option.
The only option I managed to get working, which allows changing the domain name, is:
server {
    listen 80;
    server_name site.test.lab; # name typed in the browser
    location / {
        proxy_pass http://site.k8s.test.lab;
    }
}
Thanks!
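For what it's worth, ingress controllers normally route requests by the Host header, so when the external nginx forwards requests with Host: site.test.lab, the ingress may have no matching rule, which can surface as a 502 or 404. A sketch that keeps balancing across all three workers (assumptions: the ingress controller is actually reachable on port 80 of each worker node, e.g. via hostNetwork or a hostPort, and the ingress rule matches site.k8s.test.lab under the *.k8s.test.lab wildcard):

```nginx
upstream loadbalancer {
    server srv-k8s-worker0.test.lab:80;
    server srv-k8s-worker1.test.lab:80;
    server srv-k8s-worker2.test.lab:80;
}
server {
    listen 80;
    server_name site.test.lab;  # name typed in the browser
    location / {
        # The ingress controller routes by Host header, so send the
        # name the ingress rule knows, not site.test.lab.
        proxy_set_header Host site.k8s.test.lab;
        proxy_pass http://loadbalancer;
    }
}
```

This combines the two configurations from the question: the upstream block from the first attempt, plus the Host rewriting that the proxy_pass http://site.k8s.test.lab workaround performs implicitly.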

Related

Confused about creating an nginx sites-available file

Hi, I created an EC2 instance and attached an Elastic IP to it (for example 54.193.250.150).
On the instance itself I installed a server that runs at this URL: http://172.17.0.2:5000/api/v1.0/.
I have now installed nginx and am trying to route requests from my PC to the server on the EC2 instance, so I tried to create a sites-available file, but it is not working for me. I would be very glad to get help with this issue.
server {
    listen 80;
    server_name 54.193.250.150;
    location api/v1.0
    {
        proxy_pass http://172.17.0.2:5000/api/v1.0;
    }
}
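One concrete problem in the file above: an nginx prefix location must start with a slash, so location api/v1.0 will never match a request URI. A minimal corrected sketch, keeping the question's addresses (and assuming 172.17.0.2:5000, a Docker bridge address, is reachable from the host where nginx runs):

```nginx
server {
    listen 80;
    server_name 54.193.250.150;

    # Leading slash added; request URIs always begin with "/".
    location /api/v1.0 {
        proxy_pass http://172.17.0.2:5000/api/v1.0;
    }
}
```

After reloading nginx, http://54.193.250.150/api/v1.0/ should then be forwarded to the backend.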

NGINX TCP Load balancer server port is not working

I'm trying to configure nginx TCP load balancing. I have 3 applications running on ports 5001-5003 on a different machine.
If I configure TCP load balancer as follows:
stream {
    upstream centos-01 {
        server 10.0.0.84:5001;
        server 10.0.0.84:5002;
        server 10.0.0.84:5003;
    }
    server {
        listen 5001;
        proxy_pass centos-01;
    }
}
Executing application by calling load_balancer_machine_ip:5001 works as expected. However, if I want nginx to listen to different port:
server {
    listen 9090;
    proxy_pass centos-01;
}
I get socket timeouts, and it seems as if nginx is not listening on this port (I'm frankly lost as to what the problem is).
Calling load_balancer_machine_ip:9090 produces a socket timeout / failure to connect.
Any ideas?
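Since nginx accepts the configuration and the 5001 listener works, a timeout (rather than connection refused) on 9090 usually points outside nginx, typically at a host firewall. A diagnostic sketch, assuming a CentOS-style host with firewalld (adapt the commands to your distribution):

```shell
# 1. Confirm nginx was reloaded and is actually listening on 9090.
ss -tlnp | grep 9090

# 2. If it is listening, open the port in the host firewall
#    (firewalld shown here as an assumption).
sudo firewall-cmd --add-port=9090/tcp --permanent
sudo firewall-cmd --reload
```

If ss shows no listener on 9090, check that the new server block is inside the same stream {} context and that nginx -s reload reported no errors.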

How to configure nginx to route requests to different servers depending on the requested domain?

I would like to set up nginx to route requests to different backend servers depending on the requested domain.
The nginx server environment is below:
CentOS Linux release 7.3.1611 (Core)
nginx 1.11.8 (built and installed from source, configured with the --with-stream parameter)
My intended setup is:
server1.testdomain.com ssh request -> (global IP) nginx server -> (local IP) 192.168.1.101 server
server2.testdomain.com ssh request -> (global IP) nginx server -> (local IP) 192.168.1.102 server
The nginx server has the same global IP in both cases (it is a single server).
nginx.conf is ...
stream {
    error_log /usr/local/nginx/logs/stream.log info;
    upstream server1 {
        server 192.168.1.101:22;
    }
    upstream server2 {
        server 192.168.1.102:22;
    }
    server {
        listen 22 server1.testdomain.com;
        proxy_pass server1;
    }
    server {
        listen 22 server2.testdomain.com;
        proxy_pass server2;
    }
}
But I get this error:
nginx: [emerg] the invalid "server1.testdomain.com" parameter in ...
It seems impossible to use something like listen "22 server1.testdomain.com".
I also tried writing "server_name" inside the "server" block:
nginx: [emerg] "server_name" directive is not allowed here in ...
so "server_name" is not permitted in a stream "server" block.
How do I write the config file so that requests for different domains are routed to different servers?
If you have any ideas or information, I would appreciate your help.
It's not possible with nginx, because the stream module is an L4 (transport-layer) balancer, while the host name is application-layer information.
It's not possible at all in this form, because the SSH negotiation does not include the destination host name.
You can do what you want only by using two different IPs or two different ports. In both cases nginx can forward the connection, but in this situation it is much better to use iptables.
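A sketch of the two-different-ports variant mentioned above (the public port numbers 2201 and 2202 are arbitrary choices):

```nginx
stream {
    # One public port per backend; nginx forwards each to that
    # backend's SSH port 22.
    server {
        listen 2201;
        proxy_pass 192.168.1.101:22;
    }
    server {
        listen 2202;
        proxy_pass 192.168.1.102:22;
    }
}
```

Clients would then connect with ssh -p 2201 (or -p 2202) to the nginx server's global IP, and nginx forwards the connection to the corresponding internal machine.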

How do I set up an NGINX configuration to reverse proxy to multiple containers using different proxies and site files

I am working on setting up our development environment using Vagrant, Docker and NGINX (among other tools).
We have a common microservice architecture and we are using a container per microservice.
As a proof of concept, I am using two test microservices to try to work out the development environment, a and b.
We are running three containers (nginx,a,b) for the purpose of my proof of concept.
Using reverse proxies in NGINX, I want to be able to use the following proxies.
http://localhost:8181/a/ proxies request to the IP address of container 'a'
http://localhost:8181/b/ proxies request to the IP address of container 'b'
To do this, I want to defer the management of each service's nginx configuration to the service itself. Hopefully, this would allow each service to add/remove/modify entries in an /etc/nginx/sites-available volume that is shared between the containers and the host OS.
For example, I would think that I would need the following structure in NGINX
/etc/nginx
- nginx.conf
- sites-available
  - a
  - b
- sites-enabled
  - a (symlink to sites-available/a)
  - b (symlink to sites-available/b)
The nginx.conf file looks like the following:
worker_processes 4;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Assuming that when I pull the IP addresses of the containers for a and b I get the following
a -> 172.17.0.1
b -> 172.17.0.2
and that container a exposes port 1234 and container b exposes port 5678, what should the a and b files under sites-available look like?
I tried the following and similar variations, but I can't seem to get it to use BOTH the /a and /b proxies.
excerpt from /etc/nginx/sites-available/a
server {
    listen 80;
    location /a/ {
        proxy_pass http://172.17.0.1:1234/;
    }
}
excerpt from /etc/nginx/sites-available/b
server {
    listen 80;
    location /b/ {
        proxy_pass http://172.17.0.2:5678/;
    }
}
For example, with this configuration the results are:
http://localhost:8181/a/ proxies to container 'a' just fine
http://localhost:8181/b/ gives a connection refused
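A likely cause of the behaviour above: both site files declare a server block listening on port 80 with no server_name, so they collide and only the default (first-loaded) one answers; the other block's locations are never consulted. One way to keep a file per service while avoiding the collision is to have each service contribute only a location snippet, included into a single shared server block. A sketch (the services-enabled directory name and .locations suffix are assumptions, not an nginx convention):

```nginx
# /etc/nginx/nginx.conf, http section (sketch):
http {
    server {
        listen 80;
        # Each service drops a snippet such as a.locations or
        # b.locations into this shared directory.
        include /etc/nginx/services-enabled/*.locations;
    }
}

# /etc/nginx/services-enabled/a.locations (sketch):
location /a/ {
    proxy_pass http://172.17.0.1:1234/;
}

# /etc/nginx/services-enabled/b.locations (sketch):
location /b/ {
    proxy_pass http://172.17.0.2:5678/;
}
```

With a single server block, both /a/ and /b/ are matched and proxied, and each service still owns its own file.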

Restricting direct access to port, but allow port forwarding in Nginx

I'm trying to restrict direct access to elasticsearch on port 9200, but allow Nginx to proxy pass to it.
This is my config at the moment:
server {
    listen 80;
    return 301;
}

server {
    listen *:5001;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /var/data/nginx-elastic/.htpasswd;
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
    }
}
This almost works as I want it to. I can access my server on port 5001 to hit elasticsearch and must enter credentials as expected.
However, I'm still able to hit :9200 and avoid the HTTP authentication, which defeats the point. How can I prevent access to this port, without restricting nginx? I've tried this:
server {
    listen *:9200;
    return 404;
}
But I get:
nginx: [emerg] bind() to 0.0.0.0:9200 failed (98: Address already in use)
as it conflicts with elasticsearch.
There must be a way to do this! But I can't think of it.
EDIT:
I've edited based on a comment and summarised the question:
I want to lock down <serverip>:9200 and basically only allow access through port 5001 (which is behind HTTP auth). Port 5001 should proxy to 127.0.0.1:9200 so that elasticsearch is accessible only through 5001. All other access should return 404 (or 301, etc.).
Add this to your ES config to ensure it only binds to localhost:
network.host: 127.0.0.1
http.host: 127.0.0.1
Then ES is only accessible from localhost and not from the world.
Make sure this is really the case using the tools of your OS, e.g. on Unix:
$ netstat -an | grep -i 9200
tcp4 0 0 127.0.0.1.9200 *.* LISTEN
In any case, I would lock down the machine with the OS firewall to really allow only the ports you want, rather than relying on proper binding alone. Why is this important? Because ES also runs its cluster communication on another port (9300), and evildoers might just connect there.
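A sketch of the firewall lock-down suggested above, using raw iptables as an assumption (adapt to firewalld or ufw as appropriate): accept the ES ports from the loopback interface only, then drop everything else aimed at them.

```shell
# Allow ES ports only from the machine itself (loopback interface),
# then drop all other traffic to 9200 (HTTP) and 9300 (cluster).
sudo iptables -A INPUT -i lo -p tcp -m multiport --dports 9200,9300 -j ACCEPT
sudo iptables -A INPUT -p tcp -m multiport --dports 9200,9300 -j DROP
```

Because nginx proxies to 127.0.0.1:9200 over loopback, the authenticated path through port 5001 keeps working while direct external access to 9200/9300 is dropped.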
