I have 3 different VMs; 2 of them are running an application on Kubernetes (Minikube), exposed via NodePort.
On the third server, I'm trying to use Nginx as a load balancer, but I cannot seem to reach the servers.
For that I am following Nginx's own guide, using something like the following (I can access the application via NodePort from my PC):
http {
    upstream backend {
        server 192.168.1.1:31200;
        server 192.168.1.2:31201;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
However, when I connect to the load balancer, it cannot find the servers.
Am I configuring Nginx the wrong way, or is it simply not possible to load balance a local setup like Minikube this way?
Turns out I had configured the DNS server wrongly, now it works as expected.
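For anyone hitting the same symptom, a quick sanity check is to verify that the nginx VM can actually reach each NodePort directly (a sketch using the addresses from the question above):

    curl -v http://192.168.1.1:31200/
    curl -v http://192.168.1.2:31201/

If those requests fail from the nginx VM itself, the problem is at the network, DNS, or firewall level rather than in the nginx configuration.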
Related
I have two Linux VMs: 192.168.1.10 (VM1) and 192.168.1.11 (VM2). NGINX is running on VM1. VM2 is an FTP server. I can successfully upload files to 192.168.1.11:21.
What I am trying to achieve: instead of using the IP of VM2, is it possible to use the IP of VM1 to upload files through nginx?
EDIT
I am looking for something like the below:
upstream ftp_server {
    server 192.168.1.11:21 fail_timeout=0;
}

server {
}
I think you want to forward a TCP stream to another server.
So something like this should work for you:
stream {
    upstream backend {
        server 192.168.1.11:21;
    }

    server {
        listen 21;
        proxy_pass backend;
    }
}
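Two general notes, not specific to this question: the stream block is a top-level context, so it belongs alongside http in nginx.conf rather than inside it, and nginx must include stream support (--with-stream, or the dynamic stream module) for it to work. A minimal skeleton might look like this:

    # nginx.conf (sketch): stream is a sibling of http, not nested inside it
    events { }

    http {
        # regular HTTP virtual hosts, if any
    }

    stream {
        # TCP proxying, e.g. the FTP upstream shown above
    }

Also worth noting: plain FTP opens separate data connections on additional (passive-mode) ports, so proxying only port 21 may not be enough on its own.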
I'm running two apps on a raspberry pi on my local network (at 192.168.1.95). I want to access them through my browser like this:
192.168.1.95:3000 = App One
192.168.1.95:3001 = App Two
So my server blocks look like this:
# App One
server {
    listen *:3000;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

# App Two
server {
    listen *:3001;

    location / {
        proxy_pass http://127.0.0.1:3001;
    }
}
But on my laptop, in the browser, when I navigate to 192.168.1.95:3000 it just spins forever. I restarted NGINX on my Pi, and it didn't report any errors. What am I missing?
Edits:
I checked the access logs and error logs. Nothing is there, which makes me think my 192.168.1.95:3000 requests are not even getting to nginx.
I'm open to other solutions. I would go for a subdomain, but I don't want DNS involved. It's a single internal IP with multiple websites and tools on it; I don't think subdomain.192.168.1.95 works.
Solution: I am an idiot. I had the UFW firewall on my Pi blocking everything except ports 22, 80, and 443. I opened the ports I wanted and I'm good to go, without even needing nginx.
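For anyone else landing here, opening the extra ports with UFW looks roughly like this (adjust the port numbers to whatever your apps listen on):

    sudo ufw allow 3000/tcp
    sudo ufw allow 3001/tcp
    sudo ufw status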
My issue is that I have a web server running on port 80. I want to use an nginx proxy (not the ingress) to redirect the connection. I want to use the link wwww.example.com. How should I tell nginx to proxy the connection on wwww.example.com (which is a different app)? I tried using a Service with a load balancer, but it changes the hostname (to some AWS link); I need it to be exactly wwww.example.com.
If I understood your request correctly, you may just use the return directive in your nginx config:
server {
    listen 80;
    server_name www.some-service.com;
    return 301 $scheme://wwww.example.com$request_uri;
}
If you need something more complex check this doc or this
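If the goal is to actually proxy the traffic rather than redirect it, a rough sketch would be something like the following (the upstream address 127.0.0.1:8080 is an assumption; point it at wherever the app really listens):

    server {
        listen 80;
        server_name wwww.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;   # assumed internal app address
            proxy_set_header Host $host;
        }
    }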
I'm trying to set up an FTP subdomain, such that all incoming SFTP requests to (say) ftp.myname.com get routed to a particular internal server, (say) 10.123.456, via port 22.
How do I use nginx to route this traffic?
I've already set up the SFTP server, and I can SFTP directly to the server, say:
sftp username@123.456.7890, which works fine.
The problem is that when I set up nginx to route all traffic to ftp.myname.com, it connects, but the passwords get rejected. I have no problems routing web traffic to my other subdomains, say dev.myname.com (with passwords), but it doesn't work for the SFTP traffic:
server {
    listen 22;
    server_name ftp.myname.com;
    return .............
}
How do I define the return string to route the traffic with the passwords?
The connection is SFTP (via port 22).
Thanks
Answering @peixotorms: yes, you can. nginx can proxy/load balance HTTP as well as TCP and UDP traffic; see the nginx stream modules documentation (at the nginx main documentation page), and specifically the stream core module's documentation.
You cannot do this on nginx (it handles HTTP only); you must use something like HAProxy and a simple DNS record for your subdomain pointing to the server IP.
Some info: http://jpmorris-iso.blogspot.pt/2013/01/load-balancing-openssh-sftp-with-haproxy.html
Edit:
Since nginx version 1.15.2 it is now possible to do that using the variable $ssl_preread_protocol. The official blog added a post about how to use this variable for multiplexing HTTPS and SSH on the same port:
https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
Example of configuring SSH on an upstream block:
stream {
    upstream ssh {
        server 192.0.2.1:22;
    }

    upstream sslweb {
        server 192.0.2.2:443;
    }

    map $ssl_preread_protocol $upstream {
        default ssh;
        "TLSv1.2" sslweb;
    }

    # SSH and SSL on the same port
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
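With a setup like the one above, an SSH client is simply pointed at port 443 (the hostname here is a placeholder), while TLS clients hitting the same port are routed to the TLS upstream by the preread map:

    ssh -p 443 user@your-server.example.com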
How to load balance docker containers running a simple web application?
I have 3 web containers running on a single host. How do I load balance my web containers?
Put a load balancer in front of them, such as haproxy or nginx; either can do the job.
Decent Haproxy Documentation
Nginx Howto
Either way, put the load balancer on the host or on a different server that can access the exposed ports on the containers. Nginx will probably be simpler for your needs.
To set up basic nginx load balancing:
http {
    upstream myapp1 {
        server CONTAINER_APP0_IP:PORT;
        server CONTAINER_APP1_IP:PORT;
        server CONTAINER_APP2_IP:PORT;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
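The CONTAINER_APP*_IP:PORT placeholders are whatever addresses your containers are reachable on. On the default bridge network you can look them up with something like the following (the container names web1/web2/web3 are assumptions):

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' web1
    docker inspect --format '{{ .NetworkSettings.IPAddress }}' web2
    docker inspect --format '{{ .NetworkSettings.IPAddress }}' web3

Alternatively, publish each container's port to the host (e.g. -p 8081:80, -p 8082:80, -p 8083:80) and point the upstream entries at those host ports instead; that tends to be simpler when nginx runs on the host rather than in a container.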