NGINX transparent TCP proxy

I have an ELK stack. In front of both Logstash hosts, I set up two NGINX load balancers as transparent proxies.
UDP traffic is working like a charm.
TCP works with this config:
stream {
    upstream syslog {
        server sapvmlogstash01.sa.projectplace.com:514;
        server sapvmlogstash02.sa.projectplace.com:514;
    }

    server {
        listen 514;
        proxy_pass syslog;
    }
}
But the source_ip and source_host fields show the load balancer's IP instead of the sending server's IP.
Adding proxy_bind $remote_addr transparent; to the same config doesn't work; it throws a timeout:
*1 upstream timed out (110: Connection timed out) while connecting to upstream, client: $SOURCEHOST_IP, server: 0.0.0.0:514, upstream: "$LOGSTASH_IP:514", bytes from/to client:0/0, bytes from/to upstream:0/0
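For clarity, the server block with the transparent bind added presumably looks like this (a sketch; same upstream as in the working config above):
server {
    listen 514;
    proxy_bind $remote_addr transparent;
    proxy_pass syslog;
}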
I tried setting up TPROXY from here:
https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
Logstash host:
route add default gw $NGINX_IP
route del default gw $DEFAULT_GW
NGINX host:
# Following nginx how-to
iptables -t mangle -N DIVERT
# divert UDP packets that already belong to a local socket
iptables -t mangle -A PREROUTING -p udp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-xmark 0x1/0xffffffff
iptables -t mangle -A DIVERT -j ACCEPT
# mark return TCP traffic from the Logstash hosts (source port 514) for local delivery
iptables -t mangle -A PREROUTING -p tcp -s $LOGSTASH_IP/24 --sport 514 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 0
# deliver packets marked 0x1 locally via routing table 100
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
# Enabling Upstream Servers to Reach External Servers
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
But it still fails with the same timeout as before.
What is missing to get transparent TCP proxying working?

The official documentation says:
proxy_bind $remote_addr transparent;
In order for this parameter to work, it is usually necessary to run nginx worker processes with the superuser privileges. On Linux it is not required (1.13.8) as if the transparent parameter is specified, worker processes inherit the CAP_NET_RAW capability from the master process. It is also necessary to configure kernel routing table to intercept network traffic from the proxied server.
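Combining that with the commands already in the question, a rough sketch of the kernel-side pieces for the proxy_bind ... transparent case looks like this (placeholders as in the question; note that the DIVERT rule in the question only matches -p udp, so TCP return traffic from Logstash presumably needs an equivalent rule):
# divert return packets that already belong to a local (transparent) socket, TCP included
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# deliver marked packets locally instead of forwarding them on
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
# on each Logstash host, route replies back through the load balancer
route add default gw $NGINX_IP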
FYI: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/

Related

IPTable rules to restrict eth1 access to ports 80 and 443

I have a service listening for customer traffic on ports 80 and 443 of eth1. The servers hosting my service also host other admin/privileged-access content on eth0 and localhost.
I am trying to set up iptables rules to lock down eth1, which is on the same network as the clients (block things like SSH through eth1, or access to internal services running on port 9904, etc.). I also want to make sure the rules don't forbid regular access to eth1:80 and eth1:443. I have come up with the rules below, but wanted to review them with iptables gurus for possible issues.
-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth1 -j DROP
Do the rules above suffice?
How does the above differ from the rules found when googling:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j DROP
-A INPUT -i eth1 -p tcp -j ACCEPT
-A INPUT -i eth1 -j DROP
Thanks, I got this answered at https://serverfault.com/questions/834534/iptable-rules-to-restrict-eth1-access-to-ports-80-and-443; adding it here for completeness.
The first set of rules first allows all incoming packets on your ports 80 and 443. Then it drops ALL other incoming packets (except those already accepted).
The second set of rules first allows all incoming packets on ports 80 and 443. Then it drops incoming connections (excluding 80 and 443, which are already accepted), which are packets with only the SYN flag set. Then it allows all incoming packets.
The difference here is what happens to your OUTGOING connections. In the first ruleset, if you attempt to connect to another server, any packets that server sends in response will be dropped, so you will never receive any data. In the second case, those packets will be allowed, since the first packet from the remote server will have both SYN and ACK set and therefore pass the SYN test, and any following packets will not have SYN set at all and therefore pass the test.
This has traditionally been done using conntrack, which requires the kernel to keep track of every connection in the firewall, with a command like
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
that matches the incoming packet either to an existing connection, or to a connection related to some other existing connection (e.g. FTP data connections). If you aren't using FTP or other protocols that use multiple random ports, then the second ruleset achieves basically the same result without the overhead of tracking and inspecting these connections.
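For comparison, a conntrack-based variant of the first ruleset (a sketch, not part of the original answer) would be:
-A INPUT -i eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth1 -j DROP
Replies to your outgoing connections then match the ESTABLISHED state and are accepted before the final DROP.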

Google Cloud forwarding rules are very slow

I'm new to setting up a load balancer.
I am working with Google Compute Engine.
Set up 3 servers running on 3 different ports: 5010, 5011 and 5012, and on each specific server redirect port 8080 to its app port (the exact rules are spelled out after this list of steps):
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j REDIRECT --to-port <5010|5011|5012>
Set up a health check on port 8080; let's call it example-health-check.
Set up a target pool that contains the health check and all 3 instances; let's call it example-target-pool.
Set up a forwarding rule for tcp:5010-5012 and linked it to the example-target-pool.
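Spelled out per server, those REDIRECT rules presumably look like this (my reading of the shorthand above; port numbers from the question):
# server 1
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j REDIRECT --to-port 5010
# server 2
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j REDIRECT --to-port 5011
# server 3
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j REDIRECT --to-port 5012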
When I go to the LB IP on each one of the ports, the connection is very weird: it works, but most requests are very slow, while once in a while a request goes through very fast.
Any ideas?

Can access my server from the LAN but not from NAT

I have been trying to deploy a home-made server. My network consists of a router (Comtrend brand) and 2 PCs (a server laptop connected to eth0 and a netbook connected over WiFi).
The problem is that every time I try to access my external public IP, I'm redirected to my router's internal address (192.168.1.1).
But if I access 192.168.1.132 directly, I can see all my published services and use all the protocols (HTTP, SSH, etc.).
What could I do? Is it a problem with the server configuration?
Configuration:
My server's IP is always 192.168.1.132.
My laptop receives different internal IPs, but this is not important.
My router has a dynamic IP. Let's say X.X.X.X.
Things I've already tried:
1.
I have opened ports in my router. Right now I have:
http 80 80 TCP 80 80 192.168.1.132 ppp0.1
ssh 22 22 TCP 22 22 192.168.1.132 ppp0.1
2.
I tried with iptables by adding the following two rules:
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to 192.168.1.132:80
iptables -A FORWARD -p tcp -i eth0 -d 192.168.1.132 --dport 80 -j ACCEPT
Then:
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
You may need to change the router's HTTP management port to something other than port 80 (port 8080, for example) to get the port forwarding to work, so that the router forwards HTTP requests on port 80 to your server at 192.168.1.132.
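A quick way to see who is actually answering on port 80 at the public IP (a hypothetical check; X.X.X.X is the router's address from the question) is to look at the response headers:
curl -I http://X.X.X.X/
If the response looks like the router's web interface rather than your own service, the router is still claiming port 80 for itself.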

Iptables rules for nginx with php-fpm

I am setting up iptables rules on a server running nginx and php-fpm. I have allowed both ports 80 and 443, but as I can see, there are also additional connections to higher ports that are blocked.
Sample output of
netstat -anpn | grep -v ":80"
tcp 0 1 10.0.0.1:8109 10.1.2.24:29837 SYN_SENT 19834/nginx: worker
tcp 0 1 10.2.3.45:31890 10.0.0.1:26701 SYN_SENT 17831/nginx: worker
10.0.0.1 is server IP, others are clients.
My iptables rules:
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
Can someone explain:
Why does nginx use ports different from the standard 80 and 443?
What is this additional port range?
How do I properly allow connections to nginx with iptables?
Thanks in advance!
Nginx will typically perform internal redirects when processing a request and this will establish connections on high numbered ports. I do not believe you can find this range.
Here is what I see for example:
tcp 0 0 192.168.0.126:80 0.0.0.0:* LISTEN 9432/nginx: worker
tcp 0 0 192.168.0.126:80 192.168.0.177:62950 ESTABLISHED 9432/nginx: worker
tcp 0 0 192.168.0.126:80 192.168.0.177:62949 ESTABLISHED 9432/nginx: worker
tcp 0 0 192.168.0.126:80 192.168.0.177:62947 ESTABLISHED 9432/nginx: worker
unix 3 [ ] STREAM CONNECTED 29213 9432/nginx: worker
The reason your firewall rules work is that you:
Have opened the required ports that your Nginx server listeners need (i.e. 80 and 443).
Have included the following firewall rule, which allows all requests to localhost (127.0.0.1), so Nginx internal redirects that open high-numbered ports are not blocked:
iptables -A INPUT -i lo -j ACCEPT
So to answer your questions:
Nginx server listeners can listen on any port you like, not just 80 and 443. It uses additional ports for internal redirects, which is an aspect of the implementation.
I do not believe you can find this range. In fact, I doubt any code would ask the system to use a specific port; rather, it asks the OS for a high-numbered unused port.
You may not have realized it, but the firewall rules you implemented should work fine.
I use PHP-FPM with Nginx as well. I block all ports except 22/80/443 in iptables and haven't experienced any issues with connectivity. I examined my own netstat output and it looks identical to yours. Are you sure your iptables rules are correct? Could you post the output of sudo iptables -L?
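For reference, standard commands to dump the active rules and the nginx sockets for comparison (nothing specific to this setup):
sudo iptables -L -n -v --line-numbers
sudo iptables-save
sudo netstat -anpn | grep nginx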

HTTP and HTTPS port

I have created a J2EE application that runs on GlassFish with HTTPS enabled. When the user types http://www.mydomain.com:8080/app, they are redirected to https://www.mydomain.com:8181/app/login.
However, on some websites I see that the redirect can actually go to something like https://www.mydomain.com/app/login (without the HTTPS port 8181). Does this mean that the server is running both HTTP and HTTPS on port 80?
How do I configure this on GlassFish 3.1?
A non-root user cannot bind to ports below 1024.
It is better to do port forwarding from 80 to 8080 and from 443 (the HTTPS default) to 8181.
Execute this as root:
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8181
To make this permanent:
iptables-save -c > /etc/iptables.rules
iptables-restore < /etc/iptables.rules
and load it at startup by creating /etc/network/if-pre-up.d/iptablesload (and making it executable):
#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0
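Once the rules are loaded, a quick sanity check might look like this (hostname and path taken from the question):
# confirm the NAT redirects are present
sudo iptables -t nat -L PREROUTING -n
# a request to port 80 should now reach the GlassFish listener on 8080
curl -I http://www.mydomain.com/app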
You can also configure it in the admin web gui under:
Configuration -> Server Config -> Network Config -> Network Listeners
Just to give more details on alexblum's answer: when you log in to the GlassFish admin panel, go to Configurations -> server-config -> Network Config -> Network Listeners.
Then click New to add a new listener.
On the new listener page, select 80 as your port and put 0.0.0.0 as your IP.
Select tcp as your Transport and use http-thread-pool as your Thread Pool.
Save and restart your GlassFish instance.
That's what worked for me, anyway.
The default port for HTTP is 80. When you access a URL: http://www.example.com/ you are connecting to www.example.com:80.
The default port for HTTPS is 443. When you access a URL: https://www.example.com/ you are connecting to www.example.com:443.
(See List of port numbers)
(See configuration of GlassFish to use other ports)
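For example, curl's verbose output shows the implicit port it connects to:
curl -v http://www.example.com/    # "Connected to www.example.com (...) port 80"
curl -v https://www.example.com/   # "Connected to www.example.com (...) port 443"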
