SSL certificate not working on Nginx Proxy Manager (Cloudflare DNS) - nginx

I set up a Cloudflare account and pointed my domain at its nameservers. During setup I left all the settings at their defaults.
I added two DNS records:
An "A" record targeting my IP address and a "CNAME" record creating an alias for it.
In my Nginx Proxy Manager (running in Docker on a bridged network, connected to a database container), there is only one proxy host, directing the "CNAME" alias to a LAN IP (https://192.168.0.50:9443; Portainer operates on HTTPS).
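For reference, that proxy host boils down to roughly the following nginx configuration (a hedged sketch; NPM generates its own config files, and portainer.example.com stands in for the actual CNAME alias):

server {
    listen 80;
    server_name portainer.example.com;   # stand-in for the CNAME alias

    location / {
        # Portainer serves HTTPS on the LAN, so the upstream scheme is https
        proxy_pass https://192.168.0.50:9443;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}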
Everything works flawlessly until I decide to add an SSL certificate. The only option I tick is "Force SSL". When I try to access the site at this point, it loads for a bit and then times out with a "522" error. The only way I can get the site working again is to clear the Nginx volumes and restart the stack.
Turning the Cloudflare proxy off doesn't seem to make any difference. Neither does trying to access different Docker containers operating on HTTP.

I managed to solve the problem. I didn't forward port "443" on my router to the target device... I hope that this helps anyone else who made this mistake.

Related

nginx configuration: Domain working but IP address causes security warning from Chrome

I have configured HTTPS on my nginx server where I host example.com and data.example.com, and I also set up HTTP forwarding to HTTPS. If I use the domain names as written, everything works great: certificates are valid and there are no issues. However, when I type in the IP addresses of the servers, I get a NET::ERR_CERT_COMMON_NAME_INVALID security warning from Chrome.
I do not know if this is expected behaviour or if my configuration is not set up properly. Is this normal/expected?
Second, I am using AWS Elastic IP addresses, so if I wanted the IPs to work as well, how can I do that? When I followed a tutorial for another website (which uses Apache + certbot), both the domain and the Elastic IPs worked, which is why I am confused now.
Thank you
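For context, the setup described above typically looks something like the sketch below (paths are illustrative, not taken from the question). The key point is that the certificate names the domains, not the Elastic IP, so requests made to the raw IP can't match it:

# HTTP -> HTTPS forwarding
server {
    listen 80;
    server_name example.com data.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS vhost; the certificate covers example.com, not the server's IP,
# which is why browsing by IP triggers NET::ERR_CERT_COMMON_NAME_INVALID
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com/fullchain.pem;   # illustrative path
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;     # illustrative path
    root /var/www/example.com;
}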

Best software for dynamic dns proxying to docker containers

Currently I am using HAProxy with manually updated backends, which point to separate Docker nginx containers for different apps.
What is the best software for proxying requests to different local nginx containers based on hostname?
I would have a simple map file or even /etc/hosts which my script would update when docker containers change, for example:
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
So the ideal would be haproxy -> some software proxy or DNS -> docker nginx,
and the software would use the map file on the fly, without reloading, and point requests to the local IP address.
Maybe I would put a varnish cache in front, so it would need to be compatible with that too (and why wouldn't it be), so the flow would be:
request -> haproxy (for load balancing across multiple servers)
-> varnish on the public server IP (for in-memory caching based on host and route, so if there is a cache hit, the response is returned immediately)
-> SOME PROXY OR DNS BASED ON A SIMPLE MAP FILE which will further proxy to the local IP of one of multiple docker nginx containers
-> docker nginx inside custom network
-> some app in container
What is the best practice for this flow, should I put varnish somewhere else, and what is the software I am looking for?
I am currently using one extra nginx, mapping $host to a custom IP address in a custom maps.conf file and gracefully reloading nginx on change, but I get the feeling that there is a better solution for this.
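For illustration, that maps.conf approach amounts to something like this (a sketch only; the IPs are the placeholder ones above and the fallback backend is made up):

# maps.conf -- regenerated by a script whenever containers change,
# then nginx is gracefully reloaded to pick it up
map $host $backend {
    default   127.0.0.1:8080;   # made-up fallback
    domain1   1.1.1.1;
    domain2   1.1.1.2;
    domain3   1.1.1.3;
}

server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://$backend;
        proxy_set_header Host $host;
    }
}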
Also, I forgot to mention that I don't need only HTTP proxying based on the map file, but TCP (SSH, SMTP, FTP...) too; just in those cases I will not have haproxy and varnish in front, and this app would be public-facing on those ports.
for example:
port:22
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
port:25
domain1 1.1.1.4
domain2 1.1.1.5
domain3 1.1.1.6
I think something like Muguet might solve your issue.
From their github repo:
When using Docker, it's sometimes a pain to access your containers
using specific IPs/ports.
Muguet provides you with a DNS Server that resolves auto-generated
hostnames to your containers IPs, plus a Reverse Proxy to access all
your web apps on port 80.
I think what you want is dnsmasq. This is basically a lightweight DNS service you run on the host running your docker containers, and it allows you to use hostnames instead of IP addresses. It's a pretty common way to solve this issue.
A nice guide to setting up dnsmasq can be found at:
http://docs.blowb.org/setup-host/dnsmasq.html
and searching dnsmasq and docker will point you to many more resources.
One thing to remember: on your haproxy host, make sure you modify /etc/resolv.conf to include your dnsmasq server.

Multiple Web Applications - Same VM vs Multiple VMs

Firstly, I am more of a dev than admin. And I have always asked questions here. But please let me know if there is a better place to ask this question.
Here's my situation. I have an application that is built to run on Linux. It serves both HTTPS (on port 443 using nginx) and SSH (on port 22). But due to organizational restrictions, I am forced to run it on a Windows host with a Linux guest using VirtualBox. Also, there is another web application on the host box; both of these web applications should be served based on the URL (example: app1.com, app2.com). URLs need to be preserved. All SSH traffic can default to the guest.
One idea I have to make this work is below, and I would like to know if I am making this more complicated than it should be. Any help is appreciated.
Steps:
Use an unused port for HTTPS (say 8443) on my host and redirect all traffic to the guest, using NAT-based port forwarding (8443 -> 443, 22 -> 22) in VirtualBox.
The only thing left would be to set up another nginx on the host as a reverse proxy: set up virtual hosts on Windows (the hosts file) with the two IP and URL entries (app1.com and app2.com), and use that nginx to redirect app1 traffic to the web app on the host and app2 traffic to 8443.
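A rough sketch of what that host-side nginx could look like (the port 8080 for the host app is a placeholder; app2 goes through the 8443 -> 443 NAT rule into the guest):

# app1: the web application that stays on the Windows host
server {
    listen 80;
    server_name app1.com;
    location / {
        proxy_pass http://127.0.0.1:8080;    # placeholder port for the host app
        proxy_set_header Host $host;
    }
}

# app2: forwarded to the VirtualBox guest via the 8443 -> 443 NAT rule
server {
    listen 80;
    server_name app2.com;
    location / {
        proxy_pass https://127.0.0.1:8443;
        proxy_set_header Host $host;
    }
}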
Questions:
Can I avoid the extra nginx reverse proxy on the host while preserving the URL?
Also, what about SSL? Can I just set up HTTPS on the host and route it to port 80 on the guest, to avoid having two certs? Note: I am using NAT in VirtualBox, so there should not be any security issues, I guess.
This is an administration question, and the user posted the identical question to serverfault, where it was answered: https://serverfault.com/questions/835220/multiple-web-applications-same-vm-vs-multiple-vms

Docker DNS setup on VPS

I have a VPS with a static IP address (108.1.2.3, for example). On this server I have two docker containers with separate IPs (10.1.2.3 and 10.1.2.4, for example). And I have two domains: domain1.com and domain2.com.
My question is: how can I set up a DNS server for these two domains?
I need to point domain1.com to 10.1.2.3 and domain2.com to 10.1.2.4, and have access through a browser to each domain.
I found a solution, but it doesn't work for me.
Unless you add network interfaces to the VPS, give it multiple static IPs, and bind the container ports to those IPs (using docker run -p with an ip:port:c_port value), you will need some kind of reverse proxy.
When using a reverse proxy such as nginx, your issue seems to be the need to reload. Please note that you won't only need to reload every time a new container is launched, but also every time a container is restarted (if you use an nginx container internally linked to the other containers...).
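As an illustration of the static reverse-proxy variant (using the example addresses from the question; both domains' A records would point at the VPS IP 108.1.2.3):

server {
    listen 80;
    server_name domain1.com;
    location / {
        proxy_pass http://10.1.2.3;    # first container's IP
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name domain2.com;
    location / {
        proxy_pass http://10.1.2.4;    # second container's IP
        proxy_set_header Host $host;
    }
}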
What you need is service discovery and configuration listeners that reload your reverse proxy automatically, such as etcd+confd or https://consul.io/

Nginx: two virtual hosts, one with a domain name and one on localhost

On my Nginx I've got two hosts.
One with the values
server_name www.mydomain.com;
root /var/www/production/myFirstWebSite;
and the other with
server_name localhost;
root /var/www/development/mySecondWebSite;
In my domain registrar account I configured the DNS with two A records:
www IN A myIP
@ IN A myIP
This is cool: I can reach my first website with www.mydomain.com or mydomain.com.
Now the problem is how to reach my second website, which is in development and for which I haven't bought a domain name. And myIP/development/mySecondWebSite no longer works...
I think that the problem comes from the DNS entries, but I'm not sure.
Do you have some ideas?
Thanks in advance.
There are a couple of ways I can think of to access the localhost one.
Creating a subdomain instead of localhost
This is the one I'd recommend most: try doing something like server_name localhost.mydomain.com.
If you need further security, you could make it allow only a certain IP (or a range of IPs).
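A minimal sketch of that server block, assuming you add a DNS record for the subdomain and that the allowed address is your own client IP:

server {
    listen 80;
    server_name localhost.mydomain.com;   # needs an A record pointing at myIP
    root /var/www/development/mySecondWebSite;

    allow 203.0.113.7;   # example client IP allowed to see the dev site
    deny  all;
}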
Play with your hosts file
In this specific case I would not recommend this, because you're messing with localhost itself, which might break other stuff on your machine; if it were any other name I would have said it's fine.
Use an ssh tunnel to the server
In this method you create a dynamic port on your ssh connection and set your browser to pass all traffic through the tunnel, which goes to the server and is handled from there; so if you open localhost, for example, it would be like opening localhost from over there. But since this involves a browser setting, you need to remember to disable it after you disconnect the ssh connection, otherwise the browser will return an error saying that the proxy server is refusing the connection.
Using a local Nginx as a proxy
This one I just came up with right now, and I can't say if it would work or not; the three before it I've worked with before and I know they work.
You'd set a certain domain name that your local nginx would capture and then proxy to the remote server, but edit the Host header, setting it to localhost instead; that way it would match the localhost host on the remote machine. If this one works, it would not need any setting to be turned on and off every time.
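If you want to try that, a sketch of the local proxy might be (dev.local is a placeholder name, myIP is the remote server's address, and as said above this one is untested):

# runs on your own machine; map dev.local to 127.0.0.1 in your hosts file
server {
    listen 80;
    server_name dev.local;                # placeholder local name
    location / {
        proxy_pass http://myIP;           # placeholder for the remote server's IP
        proxy_set_header Host localhost;  # match the remote localhost vhost
    }
}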
Out of all these, I'd recommend the first one (if it's an option), then try the last one if you don't want to keep turning things on and off before and after each session.
