I have a server which hosts several Docker containers, including an Nginx reverse proxy to serve content. In order to get the status of this server I have added the following location block:
location /nginx_status {
    stub_status on;
    access_log off;
    allow 127.0.0.1;
    allow 172.0.0.0/8;
    deny all;
}
Under normal circumstances I would only have opened up 127.0.0.1, but that means the host machine would not have access (only the Nginx container itself would), so I opened up all of the 172 addresses. Is there a cleaner/more secure way of doing this, or is my approach reasonable for a production environment?
When Docker starts, it creates an interface, docker0, that is an Ethernet bridge, and assigns it an IP address. Docker tries to choose a smart default, and the 172.17.0.0/16 range is a good one. The host will route all traffic destined for that network to the docker0 bridge, and it's not accessible externally unless you've mapped a port.
In your question you've allowed 172.0.0.0/8, some of which is not RFC 1918 private address space (the private block is 172.16.0.0/12). You could restrict this further to either all of the addresses in the Docker network driver source I linked before, or simply 172.17.0.0/16, since that's the first in the list and is usually the one used.
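As a sketch, a tightened version of the location block from the question might look like this (narrowing the range to 172.17.0.0/16, which assumes the default docker0 bridge network):

```nginx
location /nginx_status {
    stub_status on;
    access_log off;
    allow 127.0.0.1;      # the Nginx container itself
    allow 172.17.0.0/16;  # default docker0 bridge network only
    deny all;             # everyone else
}
```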
I set up a wireguard instance in a docker container and use nginx proxy manager to set up all reverse proxy settings. Now I want the website to be only accessible when I am connected to the VPN.
I tried to add localhost as the forward address and set the "only allow" to the local server IP, but it doesn't work and just displays a "can't connect to server" message in my browser.
Add this to a server block (or a location or http block) in your nginx configuration:
    allow IP_ADDRESS_OR_NETWORK; # allow only connections from the WireGuard VPN network
    deny all;                    # block the rest of the world
The allowed network has to match your specific WireGuard VPN network. All peer IP addresses that should have access must be part of the network range. Depending on your NAT settings, you should verify the actual IP address or network by checking the access log: tail -f /var/log/nginx/access.log
Be sure to reload your nginx config to apply changes: service nginx reload
See also http://nginx.org/en/docs/http/ngx_http_access_module.html for usage hints on the HTTP access module.
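For example, if your WireGuard network were 10.8.0.0/24 (a placeholder; substitute the actual subnet from your wg0 configuration), the server block might look like:

```nginx
server {
    # ... your existing listen/server_name/proxy settings ...
    allow 10.8.0.0/24;  # WireGuard VPN network (placeholder subnet)
    deny all;           # block the rest of the world
}
```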
I have a VM running on GCP with Docker installed on it. I have an NGINX web server running on it with a static reserved external/public IP address, and I can easily access the site by that public IP address. Now, I have Artifactory running on this VM as a Docker container, and the whole idea is to access this Docker container (Artifactory, to be precise) using the same public IP address with a specific port, say 8081. I have configured a reverse proxy in the NGINX web server to pass the request to the internal IP address of my Artifactory Docker container, but the request is not reaching it and I cannot access Artifactory.
The Docker container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a4119d923hd8 docker.bintray.io/jfrog/artifactory-pro:latest "/entrypoint-artifac…" 57 minutes ago Up 57 minutes 0.0.0.0:8081->8081/tcp my-app-dev-artifactory-pro
Here are my reverse proxy settings:
server {
    listen 81;
    listen [::]:81;
    server_name [My External Public IP Address];

    location / {
        proxy_pass https://localhost:8081;
    }
}
Since you are using GCP to run this, I think your issue is very simple. First, you do not need Nginx at all in order to get to Artifactory inside a Docker container; you should be able to reach it very easily using the IP and port (for example, XX.XX.XX.XX:8081). I can also see that in the Nginx configuration you are listening on port 81, which is not in use by Artifactory. I think the issue here is that either you did not allow HTTP traffic to your GCP instance in the instance configuration, or you did not map the port in the docker run command.
You can check whether the port is mapped by running docker ps and looking for mapped ports in the PORTS section. If there are none, you will need to map the port (8081 to 8081) and make sure your GCP instance has HTTP traffic enabled; then you will be able to get to Artifactory with IP:PORT.
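A minimal sketch of both checks, assuming the container name and image from the question (the firewall rule name is arbitrary):

```shell
# Verify the port mapping (look for 0.0.0.0:8081->8081/tcp under PORTS)
docker ps --filter name=my-app-dev-artifactory-pro

# If the mapping is missing, re-run the container with the port published
docker run -d --name my-app-dev-artifactory-pro -p 8081:8081 \
    docker.bintray.io/jfrog/artifactory-pro:latest

# Open port 8081 in the GCP firewall so external traffic can reach it
gcloud compute firewall-rules create allow-artifactory \
    --allow tcp:8081 --direction INGRESS
```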
I am running a simple Rails application on Ubuntu and I am using Nginx as my web server. I would like to block all IP addresses except our office IP address (a static IP).
I can block IPs using Nginx:
location / {
    allow office_ip_address;
    deny all;
}
or I can block IPs using ufw, the Uncomplicated Firewall:
sudo ufw allow from office_ip_address
(Will this block all other IPs, or do I need some additional command to block them?)
I would like to know which approach is better. I think it's better to block IPs at the firewall level so requests don't reach our server at all, but I am new to setting up servers, so please advise me on which way is better.
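For reference, a sketch of the ufw side of this: on its own, `allow from` only adds an accept rule, so you also need a default deny policy for everything else to be blocked (OFFICE_IP is a placeholder for your static office address):

```shell
sudo ufw default deny incoming                           # block all inbound by default
sudo ufw allow from OFFICE_IP to any port 80 proto tcp   # office address only
sudo ufw enable                                          # activate the firewall
```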
I have an nginx server behind a reverse proxy (Cloudflare) and want to block IPs based on the forwarded IP sent in the header.
I have tried the following iptables string-matching rule:
iptables -A INPUT -m string --string "1.1.1.1" --algo bm --to 1024 -j DROP
However, this doesn't seem to do anything.
Why isn't the string matching working? I'm sure the real IP is sent in the packet, either as X-Forwarded-For or CF-Connecting-IP.
The kernel is 3.4.x and iptables is 1.4.7, so no issues there.
As you mention, CF-Connecting-IP is the best way to get the real IP behind CloudFlare. It is better than X-Forwarded-For, as that can be changed if your server is later placed behind a load balancer or another reverse proxy (X-Forwarded-For even supports a comma-separated list in its RFC).
CloudFlare should only pass secure web traffic, and only to CloudFlare-supported web server ports, so you can whitelist the CloudFlare IPs and apply IPTables rules to all other source IPs. You can then block individual IPs in the Firewall tab of the CloudFlare site in question, under IP Firewall; non-CloudFlare traffic gets IPTables applied to it.
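A sketch of what that whitelist might look like; CLOUDFLARE_CIDR is a placeholder, since CloudFlare's ranges change over time and should be pulled from the list they publish at https://www.cloudflare.com/ips/ rather than hard-coded:

```shell
# Accept web traffic only from a CloudFlare range (repeat per published range)
iptables -A INPUT -p tcp -m multiport --dports 80,443 \
    -s CLOUDFLARE_CIDR -j ACCEPT
# Drop web traffic from everyone else
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP
```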
We use the official Mod_CloudFlare on our Apache servers in order to get the correct IP address into our web server and, ultimately, into the web application itself. On Nginx you can try the ngx_http_realip_module.
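On Nginx, a minimal realip configuration might look like the following sketch; again, the set_real_ip_from entries must be the currently published CloudFlare ranges (CLOUDFLARE_CIDR is a placeholder):

```nginx
# Trust CloudFlare as the source of the real client IP
set_real_ip_from CLOUDFLARE_CIDR;  # repeat once per published range
real_ip_header CF-Connecting-IP;   # use CloudFlare's header rather than X-Forwarded-For
```

With this in place, $remote_addr holds the real client address, so plain allow/deny rules (and your access logs) operate on it directly instead of on CloudFlare's proxy IPs.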
I have an Nginx server set up and running locally for some development testing, and I want to be able to connect to it over the network. I have a device on the local network that I want to connect to the server with. How would I do this? The device and my computer are both connected to a VPN, which gives me an IP address. Shouldn't the device be able to connect to that IP address, since localhost and the IP are the same?
server {
    listen 8080;
    server_name localhost;
    #access_log logs/host.access.log main;

    location / {
        root  html;
        index index.html index.htm;
    }
}
If your server only listens on localhost (127.0.0.1), other machines have no way to access it.
You must listen on a specific IP (or on all interfaces), and other machines can then connect to your server through that IP.
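For reference, a server block that explicitly binds an address might look like this sketch; note that plain `listen 8080;` with no address already binds all interfaces (0.0.0.0), and 10.8.0.5 below is a placeholder for your actual VPN/LAN address:

```nginx
server {
    listen 10.8.0.5:8080;  # bind one address; or 0.0.0.0:8080 for all interfaces
    server_name localhost;

    location / {
        root  html;
        index index.html index.htm;
    }
}
```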
There is a big difference between localhost (127.0.0.1) and the computer's IP address, for example 192.168.80.10: localhost is only accessible from your own computer.
You'll have to use your computer's IP address when you want to connect from a different machine over your local network (or, in your case, a VPN). To get your computer's IP address on Windows:
Press Start.
Type cmd into the search bar.
When a black console shows up, type ipconfig.
Look for "IPv4 Address"; to the right of it is your computer's local IP.
You might not need to change the server's config files, because the server might already be set up to listen on your local IP. I would suggest trying to connect locally with your local IP address before changing any configuration files.
Hope this helped!
-kad