I have multiple HTTP servers running on the same machine. Only nginx listens on the HTTP port and forwards requests to the other programs.
Now I'm adding a service that needs to receive POST requests directly (without them being buffered). I have already read that this isn't possible, but those posts are about a year old, so I'm hoping there's a way to accomplish this in nginx 1.5.
Is there another way to have multiple HTTP servers running on the same machine?
Edit: Every server has to answer requests on the HTTP port. Which server handles a request is determined by the hostname in the URL.
When your server has multiple IPs, you can bind services to explicitly selected IPs instead of the default '*' or 0.0.0.0.
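As a minimal nginx sketch of per-IP binding (the 192.0.2.x addresses and root paths are placeholders, not from the question):

server {
    listen 192.0.2.10:80;      # first service bound to its own address
    root /var/www/site-a;
}

server {
    listen 192.0.2.11:80;      # second service bound to a different address
    root /var/www/site-b;
}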
When your clients can be segregated by their IPs, you can bind services to different ports and route packets using iptables:
iptables -t nat -A PREROUTING -p tcp --dport 80 -s 10.20.30.0/24 -j REDIRECT --to-port 81
iptables can check not only headers but also packet content, via the "-m string" extension.
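For example (the hostname below is a placeholder, and this only works for unencrypted HTTP, since the match runs on the raw payload):

# drop plain-HTTP requests whose payload contains a given Host header
iptables -A INPUT -p tcp --dport 80 -m string --string "Host: internal.example.com" --algo bm -j DROP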
You can have multiple processes on the same machine (call them HTTP servers or anything else); the only "limit" is that no two of them can listen on the same address and port, so each will need its own port to work.
Otherwise they will complain that the port is already in use and "die".
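A minimal nginx sketch of that pattern, routing by hostname to backends on separate local ports (the hostnames and backend ports below are placeholders); note that disabling request buffering via proxy_request_buffering off only exists in nginx 1.7.11 and later, not in 1.5:

server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081;   # first backend
        # proxy_request_buffering off;      # only available in nginx >= 1.7.11
    }
}

server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://127.0.0.1:8082;   # second backend
    }
}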
I have hosted a Magento 2 website with Nginx, SSL termination, and Varnish cache. Varnish is running on port 8080 and the Magento 2 website is hosted on Nginx port 8081. HTTP and HTTPS traffic is accepted by the same Nginx and forwarded to Varnish (SSL terminated).
NGINX, Varnish, and Magento 2 are all running on the same server.
I have two questions:
If I try to access the Magento 2 website running on port 8081 directly from the internet, it bypasses the SSL termination and connects straight to the website. How can I restrict that?
When configuring the Magento 2 base URL, if I want to host it on a port other than the default port 80, do I need to give the port number at the base-URL configuration step? e.g. php bin/magento setup:install --base-url=http://www.example.com:8081
Assuming you want to block the port from the public internet, you have multiple options. If you have SSH access, you can block the port with iptables:
/sbin/iptables -A INPUT -p tcp --destination-port 8081 -j DROP
/sbin/service iptables save
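A possible refinement, assuming Varnish and Nginx reach Magento over the loopback interface (not stated in the question): accept loopback traffic to 8081 before the drop, so local proxying keeps working.

/sbin/iptables -A INPUT -i lo -p tcp --destination-port 8081 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --destination-port 8081 -j DROP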
For the second question: if you're using a non-standard HTTP port (not 80 or 443), then yes, you need to specify it in the base-URL configuration.
nginx shouldn't be listening on 8081 to the outside world to begin with. You probably need something like
listen 127.0.0.1:8081;
in that server block of your nginx configuration, so the Magento vhost is only reachable from the local machine.
I am a complete beginner when it comes to networking, and I am trying to set up a TCP tunnel on my machine using pagekite. I want to route all traffic from a TCP address to a port on my localhost, let's say 8080, and then start a handler on localhost:8080 to deal with the incoming traffic. I can get this to work with ngrok simply by doing ngrok tcp 8080, but on a free ngrok plan I cannot reserve TCP addresses and ngrok is rather slow, so I opted to try pagekite instead.
Pagekite normally allows easy tunnelling to an HTTP address, but they have a guide here about how to use PuTTY along with Pagekite to create a TCP tunnel proxied by HTTP.
I followed their guide but could use some help figuring out if it does what I want it to do.
I am working on a Linux VM, so I first set up an SSH server with openssh like this: sudo service ssh start
I then exposed that SSH server using pagekite like this: python3 pagekite.py 22 ssh:user.pagekite.me
I then started PuTTY, configured the Host Name to be user.pagekite.me on port 22, set up an HTTP proxy with proxy hostname user.pagekite.me on port 443, and finally created a tunnel from the PuTTY machine with source port 8080 and destination localhost:8080.
Now I am not sure what this actually accomplished. I know that the PuTTY machine connected to the SSH server running on my VM, and I am able to use the Linux terminal from the PuTTY terminal, but has this actually created a TCP tunnel from user.pagekite.me:8080 to localhost:8080? Additionally, after doing this, if I try to set up the handler on localhost:8080 I get the following error:
Handler failed to bind to 0.0.0.0:8080
Rex::BindFailed The address is already in use or unavailable: (0.0.0.0:8080).
Again, I am completely clueless when it comes to networking, so if anyone could explain what it is I'm doing, and whether it is even possible to do what I want the way I am doing it, that would be quite helpful.
Basic Overview
We are trying to set up rate limiting on our server. We are using Nginx as a web server and fail2ban for blocking IPs with iptables.
iptables can block IPs if a request hits our Nginx server directly (in this case $remote_addr is the client IP).
But if it comes via a proxy server, the proxy passes the client IP in the X-Forwarded-For header, and iptables is unable to detect that (in this case $remote_addr is the proxy server's IP).
Is there some other way we can block the IP from the X-Forwarded-For header?
Any help will be appreciated.
iptables IP block command: iptables -A INPUT -s 111.112.212.112 -j DROP
You cannot do that using iptables (especially if the packets are encrypted HTTPS traffic).
But if you use fail2ban and nginx, you can try the action nginx-block-map. Just use the variable $http_x_forwarded_for in the map (see the action description) and write it to the log that fail2ban monitors, so the filter is able to capture it as the ID to ban.
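As a rough illustration of the logging side (the format name and log path are placeholders, not from the answer), nginx can include the forwarded address in its access log so a fail2ban filter has something to match:

# in the http {} block
log_format with_xff '$remote_addr - $http_x_forwarded_for [$time_local] "$request" $status';
access_log /var/log/nginx/access.log with_xff;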
I have an nginx server behind a reverse proxy (Cloudflare) and want to block IPs based on the X-Forwarded-For IP sent in the header.
I have tried the following iptables string-matching rule:
iptables -A INPUT -m string --string "1.1.1.1" --algo bm --to 1024 -j DROP
However this doesn't seem to do anything.
Why isn't the string matching working? I'm sure the real IP is sent in the packet, either as X-Forwarded-For or CF-Connecting-IP.
The kernel is 3.4.x and iptables is 1.4.7, so no issues there.
As you mention, CF-Connecting-IP is the best way to get the real IP behind Cloudflare. This is better than X-Forwarded-For, as that can be changed if your server is later placed behind a load balancer or another reverse proxy (X-Forwarded-For even supports a comma-separated list in its spec).
Cloudflare should only pass web traffic, and only to Cloudflare-supported web server ports, so you can whitelist the Cloudflare IP ranges and apply iptables to everything else. You can then block individual IPs in the Firewall tab of the Cloudflare site in question, under IP Firewall.
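A minimal sketch of that whitelist approach, assuming the published Cloudflare IPv4 list and ports 80/443 (adjust for IPv6 and your own ports):

# allow the published Cloudflare IPv4 ranges on the web ports, then drop the rest
for range in $(curl -s https://www.cloudflare.com/ips-v4); do
    iptables -A INPUT -p tcp -m multiport --dports 80,443 -s "$range" -j ACCEPT
done
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP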
We use the official mod_cloudflare on our Apache servers in order to get the correct IP address through to the web server and ultimately into the web application itself. On nginx you can try the ngx_http_realip_module.
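A short sketch of the realip approach (only one Cloudflare range is shown as an example; in practice you would list every published range):

set_real_ip_from 173.245.48.0/20;    # one published Cloudflare range, for illustration
real_ip_header CF-Connecting-IP;     # trust the Cloudflare-supplied client address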
I'm a little confused about nginx and iptables. I want to redirect all traffic to port 443 or port 8443 on my server. I also have MongoDB running on port 27017; if I block that port, will I still be able to access the database from my Node.js app (which is running on port 8443)? Should I use nginx to redirect, or iptables? It seems that they sometimes overlap each other, so which one is better to handle this?
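For illustration only, a minimal iptables sketch of the MongoDB part, assuming the Node.js app connects to 127.0.0.1:27017 (blocking the port externally does not affect loopback connections if loopback is accepted first):

iptables -A INPUT -i lo -p tcp --dport 27017 -j ACCEPT   # keep local app connections working
iptables -A INPUT -p tcp --dport 27017 -j DROP           # block MongoDB from outside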