NGINX UDP proxy listening on the same port, deciding on origin IP

I am currently trying to make an nginx proxy work where it passes to different IPs depending on the origin:
stream {
    server {
        listen 1000 udp;
        proxy_pass 10.0.0.2;
        allow 10.0.0.3;
    }
    server {
        listen 1000 udp;
        proxy_pass 10.0.0.3;
        allow 10.0.0.2;
    }
}
Obviously this does not work, as I cannot listen on the same port twice. I tried something with "if", but it is not allowed there. Any ideas? I just want to proxy the traffic between the two IPs.

You need a transparent proxy or some kind of packet filter or firewall, not nginx, since it is a reverse proxy and not suitable for your task.

While I'm not sure you chose the right way to solve your task (unless you need some kind of load balancing), this should be possible using several upstream blocks and the geo block:
stream {
    upstream first_upstream {
        server 10.0.0.2:1000;
    }
    upstream second_upstream {
        server 10.0.0.3:1000;
    }
    upstream third_upstream {
        server 10.0.0.4:1000;
    }
    geo $upstream_name {
        10.0.0.0/24 first_upstream;
        10.0.1.0/24 second_upstream;
        default third_upstream;
    }
    server {
        listen 1000 udp;
        proxy_pass $upstream_name;
    }
}
If you need load balancing, see the TCP and UDP Load Balancing article.
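For completeness, a minimal sketch of what plain UDP load balancing could look like (the addresses are taken from the example above; the balancing method is an assumption):
stream {
    upstream udp_backends {
        # Requests are distributed across both servers; add weight=N to a
        # server line or use least_conn to change the balancing behaviour.
        server 10.0.0.2:1000;
        server 10.0.0.3:1000;
    }
    server {
        listen 1000 udp;
        proxy_pass udp_backends;
    }
}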

Related

Is it possible to redirect FTP requests to another IP using NGINX?

I have two Linux VMs with IPs 192.168.1.10 (VM1) and 192.168.1.11 (VM2). NGINX is running on VM1. VM2 is an FTP server. I can successfully upload files to 192.168.1.11:21.
What I am trying to achieve: instead of using the IP of VM2, is it possible to use the IP of VM1 to upload files, via nginx?
EDIT
I am looking for something like the below:
upstream ftp_server {
    server 192.168.1.11:21 fail_timeout=0;
}
server {
}
I think you want to forward a TCP stream to another server.
So something like this should work for you:
stream {
    upstream backend {
        server 192.168.1.11:21;
    }
    server {
        listen 21;
        proxy_pass backend;
    }
}
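Two caveats worth noting here: the stream {} block belongs at the top level of nginx.conf (next to http {}, not inside it), and plain FTP opens separate data connections, so forwarding the control port 21 alone is often not enough for actual transfers. A hedged sketch of one way to also cover passive mode, assuming the FTP server is configured to advertise the proxy's address and the (hypothetical) passive port range 50000-50100:
stream {
    upstream backend {
        server 192.168.1.11:21;
    }
    server {
        listen 21;
        proxy_pass backend;
    }
    # Passive-mode data connections: forward each port in the range to the
    # same port on the FTP server (port ranges in "listen" need nginx 1.15.10+).
    server {
        listen 50000-50100;
        proxy_pass 192.168.1.11:$server_port;
    }
}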

Nginx Consistent Hashing for Multiple Upstreams

The upstream server is Wowza. There are two upstreams:
upstream wowza {
    hash $arg_streamKey consistent;
    server x.x.x.x:8087;
    server x.x.x.y:8087;
}
upstream wowza_thumbnail {
    hash $arg_streamKey consistent;
    server x.x.x.x:8086;
    server x.x.x.y:8086;
}
The first upstream points to the API and the second points to the thumbnail URI.
I changed the hash key to the query param, thinking the hash would be based on the query param and would resolve to the same server for both upstreams, but that is not the case.
On some occasions, the second upstream resolves to a different server, and I think that is due to the change in port.
Is there a way to make consistent hashing consistent for both the upstreams?
Any help would be appreciated.
Okay. I understood that what I am asking here is not feasible.
So instead of creating two upstreams, I created one, and on each upstream server I set up an nginx proxy which proxy_passes to both ports based on the path, exposing a single port:
upstream wowza {
    hash $arg_streamKey consistent;
    server x.x.x.x:8081;
    server x.x.x.y:8081;
}
On Wowza 1 and Wowza 2:
server {
    listen 8081;
    server_name _;
    location /thumbnail {
        proxy_pass http://localhost:8086;
    }
    location / {
        proxy_pass http://localhost:8087;
    }
}
This helps me deal with only one upstream block, pointing to port 8081.
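For illustration, a minimal sketch of the front-end side (the listen port and paths are assumptions): both locations go through the single wowza upstream, so a given streamKey always hashes to the same node, and that node's local nginx then splits the traffic by path as shown above:
server {
    listen 80;
    location /thumbnail {
        proxy_pass http://wowza;
    }
    location / {
        proxy_pass http://wowza;
    }
}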

Nginx: Can I `deny some-port` (not an IP) in a location?

In Nginx, can one somehow block or allow access from certain ports, in a location? Looking at the allow & deny docs it seems to me that they cannot be used for this purpose. Right? But is there no other way to do this?
Background:
In an Nginx virtual host, I'm allowing only a certain IP to publish websocket events:
server {
    listen 80;
    location /websocket/publish {
        allow 192.168.0.123;
        deny all;
    }
}
However, soon the IP address of the appserver will be unknown, because everything will run inside Docker and I think I'll have no idea which IP a certain container will have.
So I'm thinking I could do this instead:
server {
    listen 80;
    listen 81;
    location /websocket/publish {
        # Let the appserver publish via port 81.
        allow :81; # <-- "invalid parameter" error
        # Block everything else, so browsers cannot publish via port 80.
        deny all;
    }
    # ... other locations, accessible via port 80 ...
}
And then have the firewall block traffic to port 81 from the outside world. But allow :81 doesn't work. Is there no other way? Or am I on the wrong track; are there better ways to do all this?
(As far as I've understood from the docs about the websocket Nginx plugin I use (namely Nchan), I cannot add the /websocket/publish endpoint in another server { } block that listens on port 81 only. Edit: Turns out I can just use different server blocks, because Nchan apparently ignores in which server block I place the config stuff, see: https://github.com/slact/nchan/issues/157. So I did that, and it works fine for me now. However, it would still be interesting to know if Nginx supports blocking a port in a location { ... }.)
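For the record, nginx has no per-port allow/deny, but one workaround is to test $server_port inside the location; a minimal sketch (the 403 response and the port numbers follow the example above):
server {
    listen 80;
    listen 81;
    location /websocket/publish {
        # Reject publish requests that did not arrive on port 81.
        if ($server_port != 81) {
            return 403;
        }
        # ... Nchan publisher configuration here ...
    }
    # ... other locations, accessible via port 80 ...
}
The usual caveats about if inside location apply, so the separate-server-block approach described in the edit above remains the cleaner option.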

How to reroute SFTP traffic via NGINX

I'm trying to set up an FTP subdomain, such that all incoming SFTP requests to (say) ftp.myname.com get routed to a particular internal server, (say) 10.123.456, via port 22.
How do I use nginx to route this traffic?
I've already set up the SFTP server, and can SFTP directly to it, say:
sftp username@123.456.7890, which works fine.
The problem is that when I set up nginx to route all traffic to ftp.myname.com, it connects, but the passwords get rejected. I have no problems routing web traffic to my other subdomains, say dev.myname.com (with passwords), but it doesn't work for the SFTP traffic:
server {
    listen 22;
    server_name ftp.myname.com;
    return .............
}
How do I define the return string to route the traffic with the passwords?
The connection is SFTP (via port 22).
Thanks
Answering @peixotorms: yes, you can. nginx can proxy/load-balance HTTP as well as TCP and UDP traffic; see the nginx stream modules documentation (at the nginx main documentation page), and specifically the stream core module's documentation.
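A minimal sketch of what that could look like for this question (the internal address is a hypothetical placeholder; SFTP is just SSH, so the stream module forwards the bytes untouched and authentication still happens against the backend):
stream {
    upstream sftp_backend {
        server 192.0.2.10:22;   # hypothetical internal SFTP/SSH host
    }
    server {
        # Use 22 here only if the nginx host's own sshd is not already
        # bound to it; otherwise pick another port such as 2222.
        listen 2222;
        proxy_pass sftp_backend;
    }
}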
You cannot do this with nginx (HTTP only); you must use something like HAProxy and a simple DNS record for your subdomain pointing to the server IP.
Some info: http://jpmorris-iso.blogspot.pt/2013/01/load-balancing-openssh-sftp-with-haproxy.html
Edit:
Since nginx version 1.15.2 it is possible to do this using the variable $ssl_preread_protocol. The official blog added a post about how to use this variable for multiplexing HTTPS and SSH on the same port:
https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
Example of configuring SSH in an upstream block:
stream {
    upstream ssh {
        server 192.0.2.1:22;
    }
    upstream sslweb {
        server 192.0.2.2:443;
    }
    map $ssl_preread_protocol $upstream {
        default ssh;
        "TLSv1.2" sslweb;
    }
    # SSH and SSL on the same port
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
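One caveat, not from the original post: clients that negotiate TLS 1.3 report "TLSv1.3" in $ssl_preread_protocol, which the map above would send to the ssh upstream; extending the map covers that case:
map $ssl_preread_protocol $upstream {
    default   ssh;
    "TLSv1.2" sslweb;
    "TLSv1.3" sslweb;
}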

How to configure nginx to proxy another service serving http and https on different ports?

Use case:
Using nginx as a frontend for several websites / services running on both 80 and 443 (several virtual hosts).
Having service X running on localhost that serves HTTP on 8090 and HTTPS on 8099.
How do I need to configure nginx so people can access it using only the name, without specifying the port?
This is a fairly normal setup. Configure the hosts served directly by Nginx as normal. Since they need to listen on both 80 and 443, each host entry would have this in it:
server {
    listen 80;
    listen 443 ssl;
}
The Nginx SSL docs have the full details.
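A server listening on 443 ssl also needs certificate directives; a minimal sketch, with hypothetical paths:
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;   # hypothetical path
    ssl_certificate_key /etc/nginx/certs/example.com.key;   # hypothetical path
}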
Then proxy traffic for one server{} definition to the backend service:
server {
    server_name example.com;
    location / { proxy_pass http://127.0.0.1:8090; }
}
You only need one proxy connection to the backend server, either 'http' or 'https'. If the connection between the two servers is secure, you can use 'http', even for connections that arrive at nginx over HTTPS. This might be appropriate if the service is on the same machine. Otherwise, all the traffic could be proxied over 'https' if the connection between nginx and the backend server needs to be secured.
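A minimal sketch of the HTTPS-to-backend variant (port 8099 comes from the question; the verification directives and paths are assumptions):
server {
    server_name example.com;
    location / {
        proxy_pass https://127.0.0.1:8099;
        # Optionally verify the backend's certificate.
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/nginx/certs/backend-ca.crt;  # hypothetical path
        proxy_ssl_name                example.com;
    }
}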
We use the following with our host:
http {
    server {
        server_name ~^(www\.)?(?<domain>.+)$;
        listen *:80;
        location / {
            proxy_pass $scheme://<origin>$uri$is_args$args;
            include basic-proxy-settings.conf;
        }
    }
    server {
        server_name ~^(www\.)?(?<domain>.+)$;
        listen *:443 ssl;
        location / {
            proxy_pass $scheme://<origin>$uri$is_args$args;
            include basic-proxy-settings.conf;
        }
        include ssl-settings.conf;
    }
}
This allows our upstream proxy to talk to our origin server over HTTP when a request is made by a client for an insecure resource, and over SSL/HTTPS when a request is made for a secure one. It also allows our origin servers to be in charge of forcing redirects to secure connections, etc.
Next time, why not provide a code sample detailing what you've tried, what has worked, and what hasn't?
