Meteor MUP using Let's Encrypt with multiple domains and only 1 IP

I have several sites running on different (virtual) ubuntu servers, something like this:
mysite.mydomain.com (10.3.0.5)
another.mydomain.com (10.3.0.7)
siteabc.differentdomain.eu (10.3.0.16)
I had paid certificates for all of them, and was using MUP (Meteor Up) to deploy them:
proxy: {
  domains: 'mysite.mydomain.com',
  ssl: {
    crt: './mysite_mydomain_com.crt',
    key: './mysite_mydomain_com.key',
    forceSSL: true
  }
}
Now I want to use Let's Encrypt for all of them. I forwarded port 80 to 10.3.0.5 (the first site), and this works (MUP creates the nginx docker containers automatically, etc.), but the others don't work because they need port 80, which is already used by the first one.
proxy: {
  domains: 'mysite.mydomain.com',
  ssl: {
    letsEncryptEmail: 'mysite@mydomain.com',
    forceSSL: true
  }
}
Is it possible to have multiple domains behind the same IP and still use Let's Encrypt? And how would I do that for Meteor applications and Meteor Up deployments?

Yes is the short answer. MUP installs a docker image running nginx-proxy: https://github.com/nginx-proxy/nginx-proxy
nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
See Automated Nginx Reverse Proxy for Docker for why you might want to use this.
You don't need to worry about the details; it automatically directs traffic to the correct docker instance based on the target URL. I run several staging/demo servers on the same EC2 instance using it. Easy :)
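As a rough sketch only (assuming all the apps are deployed to the same server that ports 80/443 are forwarded to; domain and email values below are placeholders), each app's mup.js simply requests its own certificate, and the shared nginx-proxy container routes by hostname:

// mup.js of the second app -- placeholder names, same target server as the first app
module.exports = {
  // ... servers, app, etc. unchanged ...
  proxy: {
    domains: 'another.mydomain.com',          // hostname nginx-proxy will route to this app
    ssl: {
      letsEncryptEmail: 'admin@mydomain.com', // placeholder address for the Let's Encrypt account
      forceSSL: true
    }
  }
};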

Related

Is it possible to reverse proxy to natively running applications from a containerized Nginx in K3S?

On my server I run some applications directly on the host. In parallel I have a single-node K3S cluster that also contains a few applications. To manage the traffic routing and HTTPS certificates for the individual services in a central place, I want to use Nginx. A traefik ingress controller runs in the cluster, which I use for the routing in this context.
To be able to reverse proxy to each application, no matter whether it runs directly on the host or in a container in K3S, Nginx must be able to reach the applications locally, wherever they run (without the traffic leaving the server). E.g. proxying myservice.mydomain.com to localhost:8080 from Nginx should end up on the web server of a natively running application, and myservice2.mydomain.com on the web server of a container in K3S.
Now, is this possible if the Nginx runs in the K3S cluster or do I have to install it directly on the host machine?
If you want to use Nginx that way, yes, you can do it, keeping Nginx in front of both the host and K3S.
You can expose your service from K3s as a NodePort, while the local service that you run directly on the host machine also listens on a port of its own.
Nginx will then forward the traffic like this:
Nginx -> MachineIp:8080 -> Application on K3s (NodePort)
      -> MachineIp:3000 -> Application running on Host
Example: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
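As an illustrative sketch only (hostnames are placeholders, ports follow the diagram above, and 10.0.0.5 stands in for the machine's IP; substitute the NodePort your K3s service actually exposes):

# Route by hostname: one vhost to the K3s NodePort, one to the host-local app
server {
    listen 80;
    server_name k3s-app.mydomain.com;      # app running inside K3s
    location / {
        proxy_pass http://10.0.0.5:8080;   # NodePort exposed by the K3s service
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name host-app.mydomain.com;     # app running directly on the host
    location / {
        proxy_pass http://10.0.0.5:3000;   # port of the host application
        proxy_set_header Host $host;
    }
}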

Best software for dynamic DNS proxying to docker containers

Currently I am using haproxy with manually updated backends which point to separate docker nginx containers for different apps.
What is the best software for proxying requests to different local nginx containers based on hostname?
I would have a simple map file or even /etc/hosts which my script would update when docker containers change, for example:
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
So the ideal would be haproxy -> some software proxy or DNS -> docker nginx,
and the software would use the map file on the fly, without reloading, and point the request to the local IP address.
Maybe I would put a varnish cache in front, so it would need to be compatible with that too (and why wouldn't it be), so the flow would be:
request -> haproxy (for load balancing across multiple servers)
-> varnish on the public server IP (for in-memory caching based on host and route, so if there is a cache hit the response is returned immediately)
-> SOME PROXY OR DNS BASED ON A SIMPLE MAP FILE which will further proxy to the local IP of one of multiple docker nginx containers
-> docker nginx inside custom network
-> some app in container
What is the best practice for this flow, should I put varnish somewhere else, and what is the software I am looking for?
I am currently using one extra nginx instance, mapping $host to a custom IP address in a custom maps.conf file and gracefully reloading nginx on change, but I have a feeling that there is a better solution for this.
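Roughly, that maps.conf setup looks something like this (IPs are placeholders); this is what I would like to replace:

# maps.conf (illustrative): map the Host header to a backend address
map $host $backend {
    default   127.0.0.1;
    domain1   1.1.1.1;
    domain2   1.1.1.2;
    domain3   1.1.1.3;
}

server {
    listen 80;
    location / {
        proxy_pass http://$backend;
    }
}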
Also, I forgot to mention that I don't need only HTTP proxying based on the map file, but TCP (SSH, SMTP, FTP...) too; in those cases I will not have haproxy and varnish in front, and this app would be public-facing on those ports.
for example:
port:22
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
port:25
domain1 1.1.1.4
domain2 1.1.1.5
domain3 1.1.1.6
I think something like Muguet might solve your issue.
From their github repo:
When using Docker, it's sometimes a pain to access your containers
using specific IPs/ports.
Muguet provides you with a DNS Server that resolves auto-generated
hostnames to your containers IPs, plus a Reverse Proxy to access all
your web apps on port 80.
I think what you want is dnsmasq. This basically is a lightweight DNS service you run on your host running docker containers and it allows you to use hostnames instead of IP addresses. It's a pretty common way to solve this issue.
A nice guide to setting up dnsmasq can be found at:
http://docs.blowb.org/setup-host/dnsmasq.html
and searching dnsmasq and docker will point you to many more resources.
One thing to remember: on your haproxy host, make sure you modify /etc/resolv.conf to include your dnsmasq server.
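A minimal sketch of what that could look like (file paths and addresses are placeholders; the hosts-style map file would be regenerated by your script whenever containers change):

# /etc/dnsmasq.conf (illustrative) -- serve names from a separate hosts-style map file
addn-hosts=/etc/dnsmasq.docker-hosts

# /etc/dnsmasq.docker-hosts -- regenerated by your script when containers change
1.1.1.1   domain1
1.1.1.2   domain2
1.1.1.3   domain3

# /etc/resolv.conf on the haproxy host -- query the dnsmasq server first
nameserver 10.0.0.2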

Do docker container IPs change on restart?

I am new to docker and I have been dockerizing all my applications on a single server. So far, everything is fine and working. However, I don't understand one thing. I am using docker-compose for everything (I haven't created a Dockerfile for my projects yet) and there is this ports attribute in docker-compose. If I write something like this:
ports:
  - "8085:80"
It will listen on 0.0.0.0:8085, which means the outside world has access to my server. After some discussions and googling, I found that I can take an IP address in my docker bridge network and do the port mapping easily:
ports:
  - "172.17.0.1:8085:80"
This will listen only on 172.17.0.1:8085, which is great as it only listens internally, and my nginx proxies the traffic to the necessary ports (e.g. proxy_pass http://172.17.0.1:8085). After learning more about docker and understanding how containers work, I realized that all these containers have their own IP addresses, with ports exposed only on those addresses. For example, one of my "web" containers has the IPv4 address 172.17.0.10 and port 80 exposed. If I run docker inspect on one of these containers, I can see its IP address.
Now, I want to use these IP addresses in my nginx. Instead of proxy_pass http://172.17.0.1:8085, I want to do http://172.17.0.10. I personally think that this is a much more elegant interface, but there is one thing that concerns me. What happens if I restart my machine? All the containers will start in some kind of order. If I have 5 web containers and they start in random order, can I be sure that the IPs for these containers will be the same? Or will they change? Should I always use ports in docker-compose for use by nginx? If yes, how can I have different IPs per container instead of different ports with the same IP? Would it be okay if I create another docker network interface (let's say in subnet 172.17.1.0), and assign different IPs from that interface to the exposed "public" ports? By this I mean basically using 172.17.1.1:80:80 in one container, 172.17.1.2:80:80 in another, etc.
Not an expert in the docker-networking domain but I'll try my best to answer the questions you got there.
Q: What happens if I restart my machine? All the containers will start in some kind of order.
A: Unless you use the links or depends_on keywords, you cannot guarantee the start sequence.
Q: If I have 5 web containers and they start in random order, can I be sure that the IPs for these containers will be the same?
A: I did a little experiment on my machine by taking note of the IP addresses of my existing 2 containers (PostgresDB and InfluxDB).
They are running on
Postgres: 172.17.0.2
InfluxDB: 172.17.0.3
I shut it down and booted again. Probably because they booted up in the same order this time, the IP addresses seem to have been maintained. I then added the depends_on keyword to force the InfluxDB container to start before Postgres; now the IP addresses of both containers are:
Postgres: 172.17.0.3
InfluxDB: 172.17.0.2
I think the IPs are handed out on a first-come, first-served basis. If you didn't specify an ordering for booting up the containers, there is a chance that the IPs will be different for the containers. It really depends on which one runs first.
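(For reference, a rough sketch of the compose change used in that experiment, with placeholder service definitions:)

services:
  influxdb:
    image: influxdb

  postgres:
    image: postgres
    depends_on:
      - influxdb    # forces the InfluxDB container to be started before Postgres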
Q: Should I always use ports in docker-compose for use by nginx?
A: Yes, if you would like to forward an nginx instance's port to the outside world. Otherwise no one will be able to hit that web server. E.g. exposing port 443 to let HTTPS traffic come through.
Q: how can I have different IPs per container instead of different ports with the same IP?
A: I don't know whether that is possible or not, but after doing some research in the docker-compose documentation, it seems possible using the ipam keyword.
See:
https://docs.docker.com/compose/compose-file/#/ipam
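A hedged sketch of what that might look like (names, subnet, and addresses are placeholders; check the compose file reference for the exact syntax of your compose version):

version: "2"
services:
  web:
    image: nginx
    networks:
      appnet:
        ipv4_address: 172.25.0.10   # fixed address for this container
networks:
  appnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24     # container addresses come from this subnet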
That looks scary to me so what I did for my own project was to use the service_name instead.
Example:
container_bbb:
  image: banana

my_nginx:
  image: apple
  environment:
    - MOUNT_SRC0=http://container_bbb:80
    - MOUNT_DEST0=/
  links:
    - container_bbb
In this instance, inside the my_nginx container the service name container_bbb resolves to that container's address.
Then I have a Python script that dynamically generates the nginx config from this information in the container's entrypoint script.
Sounds a bit overkill but that gives me more control over what I want to do with my nginx.
So in my /etc/nginx/conf.d/default-locations/ the configs would be something like:
location /container_bbb/ {
    proxy_pass http://container_bbb:3000/;
}
Note:
I am using this nginx server instance as a reverse proxy server.
I guess what I am trying to say here is that you can essentially use the hostname instead of the ip address. And to get to the container next-door you would pass in http://CONTAINER_SERVICE_NAME:PORT
You can't rely on the containers' IP addresses. If all your services are on the same docker-compose config, they will automatically be part of the same internal network and you can simply use the service names as the hostnames.
E.g., If your web app was named php, your nginx proxy config would look something like:
location / {
    proxy_pass http://php;
    proxy_set_header Host $host;
    proxy_redirect off;
}
(Also note you might want to enable your firewall, just in case any of the port mappings leaks out to the public IP address of the host.)
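To make that concrete, a minimal hedged sketch of such a compose file (image names are placeholders): both services share the compose project's default network, so nginx reaches the app by its service name, and only nginx publishes a port on the host:

services:
  php:
    image: my-php-app            # placeholder image; no ports published to the host

  nginx:
    image: nginx
    ports:
      - "127.0.0.1:8085:80"      # bind the published port to a single host address, as in the question
    depends_on:
      - php                      # nginx proxies to http://php (the service name)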

Docker DNS setup on VPS

I have a VPS with a static IP address (108.1.2.3 for example). On this server I have two docker containers with separate IPs (10.1.2.3 and 10.1.2.4 for example). And I have two domains: domain1.com and domain2.com.
My question is: how can I set up a DNS server for these two domains?
I need to point domain1.com to 10.1.2.3, domain2.com to 10.1.2.4 and have access through the browser for each domain.
I found a solution, but it doesn't work for me.
Unless you add network interfaces to the VPS and give it multiple static IPs and bind the container ports to these IPs (using docker run -p with ip:port:c_port value), you will need some kind of reverse proxy.
When using a reverse proxy such as nginx, your issue seems to be the need to reload. Note that you won't only need to reload every time a new container is launched, but also every time a container is restarted (if you use an nginx container internally linked to the other containers).
What you need is service discovery and configuration listeners to reload your reverse proxy automatically, such as etcd+confd or https://consul.io/
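For the static part of the picture, a hedged sketch of the reverse-proxy config (using the example addresses from the question; a discovery tool like the ones above would regenerate this whenever container IPs change):

server {
    listen 80;
    server_name domain1.com;
    location / {
        proxy_pass http://10.1.2.3;    # container behind domain1.com
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name domain2.com;
    location / {
        proxy_pass http://10.1.2.4;    # container behind domain2.com
        proxy_set_header Host $host;
    }
}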

docker registry on localhost with nginx proxy_pass

I'm trying to set up a private docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000 and I've set up nginx in front of it with a proxy_pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I replace localhost with my server's IP address in the nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when it is nginx that passes the request to localhost?
Server is on EC2 if it helps.
I'm not sure the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect the dataflows for Docker. The Docker registry is actually split into two parts, the index and the registry. The client contacts the index to handle metadata, and then is forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered down index server. As a consequence, you might want to figure out what registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set up the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override X-Docker-Endpoints header set by registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
I think the problem you face is that the docker-registry is advertising so-called endpoints through a X-Docker-Endpoints header early during the dialog between itself and the Docker client, and that the Docker client will then use those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with Nginx on the (public) port 80, then switches to the advertised endpoints, which are probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises endpoints as your remote host, even if it listens on localhost:5000.
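Putting the two directives from the answer above into context, a hedged sketch of the nginx side (the server_name is a placeholder; port 5000 follows the question):

server {
    listen 80;
    server_name registry.example.com;              # placeholder public hostname

    location / {
        proxy_pass http://localhost:5000;          # docker-registry on this host
        proxy_set_header Host $http_host;

        proxy_hide_header X-Docker-Endpoints;      # hide the endpoint the registry advertises
        add_header X-Docker-Endpoints $http_host;  # advertise the public host instead
    }
}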
