I have a lot of domain names configured as virtual hosts in nginx. They all share the same document root. I want to restrict access for a large number of different IP addresses. Is there any way to do this at a higher level than the virtual host configuration, for example in the main http{} block?
Yes, you can use ngx_http_geo_module to block certain IPs from accessing your virtual hosts. This module creates variables whose values depend on the client IP address, and you can define the IPs at the http level. Here is an example.
http {
    geo $spamers {
        # -- allow all by default --
        default          no;
        # -- block these bad IPs --
        192.0.171.118    spam;
        192.0.179.119    spam;
        192.0.179.120    spam;
        192.128.168.0/20 spam;
    }
    # ... your server blocks ...
}
You can add as many IPs as you want. Then, in the location blocks of the servers where you want to restrict these IPs, add a check:
location ~* /mysite/www/ {
    if ($spamers = spam) {
        # -- return a forbidden response --
        return 403;
    }
}
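If you need the same check in every virtual host, one approach (a sketch; the snippet path is just an example) is to put the if block into a shared file and include it from each server block, so the IP list defined at the http level stays the single source of truth:

```nginx
# /etc/nginx/snippets/block_spamers.conf (example path)
# $spamers is the geo variable defined at the http level
if ($spamers = spam) {
    return 403;
}
```

Then each virtual host only needs one line:

```nginx
server {
    server_name example.com;
    include /etc/nginx/snippets/block_spamers.conf;
    # ...
}
```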
I have a super basic question. I have a GoDaddy account set up with subdomain xxx.mydomain.com. I also have some services running on an AWS instance at xxx.xxx.xxx.xxx:7000. My question is: what do I need to configure so that when people visit xxx.mydomain.com it goes to xxx.xxx.xxx.xxx:7000?
I am not talking about domain forwarding. In fact, I also hope to do the same for yyy.mydomain.com, linking it to xxx.xxx.xxx.xxx:5000. I am running Nginx on xxx.xxx.xxx.xxx. Maybe I need to configure something there?
You want a reverse proxy.
Add two A records to your DNS configuration to map the subdomains to the IP address of the AWS instance. With GoDaddy, put xxx / yyy in the "Host" field and the IP address in the "Points to" field.
Since you already have Nginx running, you can use it as a reverse proxy for the two subdomains. To do so, add two more server blocks to Nginx's configuration. A very simple version could look like this:
http {
    # ...
    server {
        server_name xxx.mydomain.com;
        location / {
            proxy_pass http://localhost:7000;
        }
    }
    server {
        server_name yyy.mydomain.com;
        location / {
            proxy_pass http://localhost:5000;
        }
    }
}
You might want to rewrite some headers depending on your services/applications. Also, consider using Nginx for SSL termination.
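As a sketch of both points, here is one of the server blocks extended with the usual forwarding headers and SSL termination (the certificate paths are placeholders you would replace with your own):

```nginx
server {
    listen 443 ssl;
    server_name xxx.mydomain.com;

    # placeholder certificate paths
    ssl_certificate     /etc/ssl/certs/xxx.mydomain.com.pem;
    ssl_certificate_key /etc/ssl/private/xxx.mydomain.com.key;

    location / {
        proxy_pass http://localhost:7000;
        # pass the original host and client address through to the backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```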
I just migrated my Drupal 8 site from an Apache server to Nginx.
I applied the configuration below:
https://www.nginx.com/resources/wiki/start/topics/recipes/drupal/
I do not understand what this block is for. Should I enter the IP address of my server instead of this one?
# Very rarely should these ever be accessed outside of your lan
location ~* \.(txt|log)$ {
    allow 192.168.0.0/16;
    deny all;
}
The rule is only useful if you have .txt or .log files in a directory accessible through the web server.
If that is the case, for security reasons you should list all IP addresses that are allowed to access those files; all other addresses will be denied.
However, it is very unlikely that you want to serve log files over HTTP, so you could simply deny all.
More information in the nginx docs:
http://nginx.org/en/docs/http/ngx_http_access_module.html
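Following that advice, if you never need to serve these files at all, the block can simply become:

```nginx
# Never serve .txt or .log files to anyone
location ~* \.(txt|log)$ {
    deny all;
}
```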
I use Nginx to handle HTTP requests. During access log inspection, I found a lot of suspicious requests from the same IP address.
I'd like to configure Nginx to refuse connections from hosts like that one. I don't expect there to be many such hosts, since this is the first one in years.
This is basically how the Nginx geo module (ngx_http_geo_module) works; I've done a similar thing to whitelist Google crawlers on my sites.
In your http block, define a geo directive and add the CIDR IP ranges you wish to block:
geo $badips {
    default         0;
    64.233.160.0/19 1;
    66.102.0.0/20   1;
    # ...
}
This sets the variable $badips to 1 for requests originating from those IP addresses.
Then in your server block, before any location blocks, add:
if ($badips) {
    return 444;
}
Reload Nginx and that's it: requests that set $badips to 1 will be answered with nginx's non-standard 444 code, which closes the connection without sending a response (you can change it to another code if you prefer).
If you want to keep the banned addresses in a separate file, you can: inside the geo directive, just add include path/to/file;. The syntax within the included file must be the same as above.
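For example (the file path is illustrative):

```nginx
# in nginx.conf, inside the http block
geo $badips {
    default 0;
    include /etc/nginx/banned_ips.conf;
}

# contents of /etc/nginx/banned_ips.conf -- same "address value;" syntax:
#   64.233.160.0/19 1;
#   66.102.0.0/20   1;
```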
We have a setup where our server, running a Symfony2 application, is inside the client's network.
Is there a way to allow only the /api* path to be accessed from an external network (i.e. the internet)?
I'm assuming the best approach is to configure nginx, but I can only find examples that block all URLs or none.
Try this: leave /api open to everyone, and restrict every other location to the private IPv4 address spaces of the internal network:
location /api/ {
    # Open to everyone, internal and external
    allow all;
}
location / {
    # Everything else: internal clients only
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    allow 192.168.0.0/16;
    deny all;
}
See http://wiki.nginx.org/HttpAccessModule for more information.
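Alternatively, the same restriction can be expressed with the geo module, as in the earlier answers, which keeps the network definition in one place if several server blocks need it. A sketch (the private ranges are assumed to be the client's internal network):

```nginx
# http level: $internal is 1 for clients on the assumed internal ranges
geo $internal {
    default        0;
    10.0.0.0/8     1;
    172.16.0.0/12  1;
    192.168.0.0/16 1;
}

server {
    # /api is open to everyone
    location /api/ {
        # ... pass to the Symfony app ...
    }
    # everything else requires an internal client
    location / {
        if ($internal = 0) {
            return 403;
        }
        # ... pass to the Symfony app ...
    }
}
```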
I'm planning to build an environment that can programmatically set up child servers and sandbox them using nginx/ha. First, I would ensure *.example.com points to nginx/ha. Then, for example, I would set up app x to serve only from x.example.com; and to allow app x to talk to a specific method of app y, I would add the following config:
server {
    server_name x.example.com;
    location /y/allowed/method/ {
        proxy_pass http://y.example.com;
    }
}
(And the corresponding haproxy config if I were to use ha)
My question is: how many servers and locations like this could I include in a given instance of nginx or haproxy while still maintaining high performance? I know I can move access restrictions up a layer into the applications themselves, but I'd prefer to handle this at the network layer.
Edit:
Answer is in the comments below. Essentially, if the config fits in RAM, performance won't be affected.
You should generate an nginx config with many server blocks (one per domain), like this:
server {
    server_name x.example.com;
    location /y/allowed/method/ {
        # "y" must be defined in an upstream block
        proxy_pass http://y;
    }
}
Reference:
http://nginx.org/en/docs/http/server_names.html
http://nginx.org/en/docs/http/request_processing.html
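For completeness, a sketch of how the upstream "y" referenced by proxy_pass could be defined; the backend address and port are placeholders:

```nginx
http {
    upstream y {
        server 10.0.0.5:8080;  # backend serving app y (placeholder address)
    }
    server {
        server_name x.example.com;
        location /y/allowed/method/ {
            proxy_pass http://y;
        }
    }
}
```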