I use Nginx to handle HTTP requests. While inspecting the access log, I found a lot of suspicious requests from the same IP address.
I'd like to configure Nginx to refuse connections from hosts like that one. I don't expect there will be many such hosts, since this is the first one I've seen in years.
This is basically how the Nginx geo module works; I've done a similar thing to whitelist Google crawlers on my sites.
In your http block, define a geo directive and add the CIDR IP ranges you wish to block:
geo $badips {
    default 0;
    64.233.160.0/19 1;
    66.102.0.0/20 1;
    ...
}
This sets the variable $badips to 1 for requests originating from those IP addresses.
Then in your server block, before any location blocks, add:
if ($badips) {
    return 444;
}
Reload Nginx and that's it: requests which trigger $badips to be set to 1 will be served a 444 response code (you can change it to another one if you prefer).
If you want to keep the banned addresses in a separate file, you can do that: inside the geo directive, just add include path/to/file;. The syntax within the included file must be the same as above.
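As a sketch of the include variant (the file path and CIDR ranges here are hypothetical, chosen only for illustration):

```nginx
# /etc/nginx/conf.d/banned.conf would contain lines like:
#   64.233.160.0/19 1;
#   66.102.0.0/20  1;

geo $badips {
    default 0;
    # pull the blocklist in from a separate file
    include /etc/nginx/conf.d/banned.conf;
}
```

This keeps the blocklist out of your main config, so you can regenerate the file (e.g. from a script) and simply reload Nginx.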
I have a super basic question. I have a GoDaddy account set up with the subdomain xxx.mydomain.com. I also have some services running on an AWS instance at xxx.xxx.xxx.xxx:7000. My question is: how do I configure things so that when people visit xxx.mydomain.com, they reach xxx.xxx.xxx.xxx:7000?
I am not talking about domain forwarding. In fact, I also hope to do the same for yyy.mydomain.com, linking it to xxx.xxx.xxx.xxx:5000. I am running Nginx on xxx.xxx.xxx.xxx. Maybe I need to configure something there?
You want a reverse proxy.
Add two A-records to your DNS configuration to map the subdomains to the IP address of the AWS instance. With GoDaddy, put xxx / yyy in the "Host" field and the IP address in the "Points to" field. (more info)
Since you already have Nginx running, you can use it as a reverse proxy for the two subdomains. To do so, add two more server blocks to Nginx's configuration file. A very simple setup could look like this:
http {
    # ...
    server {
        server_name xxx.mydomain.com;
        location / {
            proxy_pass http://localhost:7000;
        }
    }
    server {
        server_name yyy.mydomain.com;
        location / {
            proxy_pass http://localhost:5000;
        }
    }
}
You might want to rewrite some headers depending on your services/applications (more info). Also, consider using Nginx for SSL termination (more info).
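For the header rewriting mentioned above, a common sketch is to forward the original host and client address to the backend, so the proxied application sees who actually made the request (the header names below are the conventional ones; whether your backend reads them is an assumption):

```nginx
server {
    server_name xxx.mydomain.com;
    location / {
        proxy_pass http://localhost:7000;
        # pass the original Host header through instead of "localhost"
        proxy_set_header Host $host;
        # tell the backend the real client address and scheme
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Without these, the backend on port 7000 would see every request as coming from localhost.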
Duplicated from the GitHub issue:
Please see the relevant areas of nginx.conf.erb below. I've tested something like this locally on my machine with nginx and 127.0.0.1, so I believe I'm using the directives properly. However, when I add a CIDR group that contains my IP address and put this on my Heroku dyno, I can still get through to my site.
Any ideas why this isn't being picked up properly or something I'm missing? Does this feature not work on Heroku?
http {
    geo $test_group {
        default 0;
        1.2.3.4/22 1; # this includes my local IP address
    }
    server {
        location / {
            if ($test_group) {
                return 418 "short AND stout";
            }
        }
    }
}
My best guess is that there's a decent amount of IP forwarding going on behind the scenes that is interfering with NGINX. For example, if I do curl -i -H "X-Forwarded-For: 1.2.3.4" "https://my-cool-app.herokuapp.com", I still see my real IP and it does not get picked up by the NGINX rule.
tl;dr
It is related to IP forwarding, so the geo module's proxy directive must be used. Simply add
proxy 10.0.0.0/8;
inside the geo block and you're good to go.
Whaaa!? How and Why
Heroku forwards all internal requests on private IP addresses in the 10.x.x.x range. This is probably obvious and immediately makes sense to folks more familiar with networking and/or NGINX. I was able to figure it out by returning $remote_addr as a header and noticing that it was not my public IP address. While the docs leave a lot to be desired for the novice, the proxy description clued me in: if I proxied the CIDR range for private IPs (10.0.0.0/8), the X-Forwarded-For address would be used instead.
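Putting the fix together with the original snippet, the corrected geo block would look roughly like this (the 1.2.3.4/22 range is the placeholder from the question above):

```nginx
http {
    geo $test_group {
        # trust X-Forwarded-For when the request arrives from
        # Heroku's internal routers on private 10.x.x.x addresses
        proxy 10.0.0.0/8;
        default 0;
        1.2.3.4/22 1; # this includes my local IP address
    }
    server {
        location / {
            if ($test_group) {
                return 418 "short AND stout";
            }
        }
    }
}
```

With proxy set, geo evaluates the address from X-Forwarded-For instead of $remote_addr for requests arriving from 10.0.0.0/8, so the client's real IP is matched against the ranges.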
I know I could use something like this:
stream {
    upstream ssh {
        server X.X.X.X:22;
    }
    server {
        listen 2222;
        proxy_pass ssh;
    }
}
to proxy incoming traffic on port 2222 to port 22 on another IP.
Straightforward. But is there a way to create a dynamic proxy that accepts the final destination's hostname and port as parameters?
Something that could be used like this:
proxy_hostname:8080?destination_hostname=example.com&destination_port=1111
ngx_stream_core_module does not accept URL parameters. Can nginx be used as a dynamic proxy, or only for static tunneling?
I'm asking this because I need a way to hide the IP of a machine firing PHP MySQL requests.
mysqli_connect($hostname, ...)
Right now I cannot specify a proxy for the PHP script alone, only for the entire machine.
Maybe with a small script and fcgiwrap:
https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/
fcgiwrap calls a bash script in which you can convert the URI to the program you want to call (mysql) and return the output to nginx as web content.
You could also alter the config of nginx and reload the service. This way you could "dynamically" open/forward ports. It is quite insecure if you make it publicly available.
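Short of rewriting the config on the fly, the closest static-config approach I know of is mapping each listen port to a fixed destination with the stream map directive; you still can't pass the destination as a URL parameter (streams have no URL), but you avoid one server block per target. A sketch, with hypothetical backend addresses:

```nginx
stream {
    # choose the backend by the local port the client connected to
    map $server_port $backend {
        2222 192.0.2.10:22;
        2223 192.0.2.11:22;
    }
    server {
        listen 2222;
        listen 2223;
        proxy_pass $backend;
    }
}
```

Each additional destination then costs one map entry and one listen line rather than a whole server block.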
I have a lot of domain names configured as virtual hosts in nginx. They all have the same document root. I want to restrict access for a huge number of different IPs. Is there any way to do it at a higher level than the virtual host configuration? For example, in the main http{} block.
Yes, you can use ngx_http_geo_module to block certain IPs from accessing your virtual hosts. This module lets you create variables whose values depend on the client IP address. You can define the IPs at the http level. Here is an example.
http {
    geo $spamers {
        # -- allow all --
        default no;
        # -- block these bad IPs --
        192.0.171.118 spam;
        192.0.179.119 spam;
        192.0.179.120 spam;
        192.128.168.0/20 spam;
    }
    # ... server blocks ...
}
You can add as many IPs as you want, and then in the location blocks of the servers where you want to restrict these IPs, add a check:
location ~* /mysite/www/ {
    if ( $spamers = spam ) {
        # -- return a forbidden message --
        return 403;
    }
}
I need to assign different IP addresses to different processes (mostly PHP & Ruby programs) running on my Linux server. They will be making queries to various servers, including the situation where processes connecting to the same external server should have different IPs.
How can this be achieved?
Any option (system wide, or PHP/Ruby-specific, using proxy servers etc) will suit me.
The processes bind sockets (both incoming and outgoing) to an interface (or multiple interfaces), addressable by IP address, with various ports. In order to have them directly addressable by different IP addresses, you must have them bind their sockets to different NICs (virtual or hardware).
You could point each process to a proxy (configure the hostname of the server to be queried to be a different proxy for each process), in which case the external server will see the different IPs of the proxies. Otherwise, if you could directly configure the processes to use different NICs for their communications, that would be ideal.
You may need to make changes to the code to make this configurable (very often, programmers create outgoing TCP connections with convenience functions without specifying the NIC they will use, as they typically don't care). In PHP, you can use socket_bind to bind the endpoint to a NIC; e.g., see the first example in the docs for socket_bind.
As per #LeonardoRick request, I'm providing the details for the solution that I ended up with.
Say, I have a server with 172.16.0.1 and 172.16.0.2 IP addresses.
I set up nginx (on the same machine) with the configuration that was looking somewhat like this:
server {
    # NEVER EXPOSE THIS SERVER TO THE INTERNET; MAKE SURE PORT 10024 IS NOT AVAILABLE FROM OUTSIDE
    listen 127.0.0.1:10024;

    # block access from outside on the nginx level as well
    allow 127.0.0.1;
    deny all;

    # actual proxy rules
    location ~* ^/from-172-16-0-1/http(s?)\:\/\/(.*) {
        proxy_bind 172.16.0.1;
        proxy_pass http$1://$2?$args;
    }
    location ~* ^/from-172-16-0-2/http(s?)\:\/\/(.*) {
        proxy_bind 172.16.0.2;
        proxy_pass http$1://$2?$args;
    }
}
(I cannot remember all the details now; this code is "from the whiteboard" rather than an actual working config, but it should convey all the key ideas. Check the regexes before deployment.)
Double-check that port 10024 is firewalled and not accessible from outside, add extra authentication if necessary. Especially if you are running Docker.
This nginx setup makes it possible to run HTTP requests like http://127.0.0.1:10024/from-172-16-0-2/https://example.com/some-URN/object?argument1=something
Upon receiving such a request, nginx fetches the HTTP response from the requested URL using the IP specified by the corresponding proxy_bind directive.
Then - as I was running in-house or open-source software - I simply configured it (or altered its code) so it would perform requests like the one above instead of (original) https://example.com/some-URN/object?argument1=something.
All the management - what IP should be used at the moment - was also done by 'my' software, it simply selected the necessary /from-172-16-0-XXX/ endpoint according to its business logic.
That worked very well for my original question/task. However, this may not be suitable for some other applications, where it could not be possible to alter the request URLs. However, a similar approach with setting some kind of proxy may work for those cases.
(If you are not familiar with nginx, there are some starting guides here and here)