Handling ports in Shipyard Load Balancer with Docker - nginx

I'd like to have Shipyard running on my server, so I'm trying to run it with the shipyard/deploy container. It starts multiple containers, one of which is a load balancer that runs on port 80.
The problem is that I'm already handling my containers with Nginx installed on the host, outside of containers, on port 80 as well. Of course both cannot share the port, so I need to customize the Shipyard load balancer so it maps to a different port on my host. I don't see an easy way to do this; how can this situation be handled? Does the way containers are linked depend on the mapping to host ports?
I'm also wondering whether the way I'm handling my containers is the proper one. For example, I'm planning to add a Redmine instance. By default the trusted build runs on port 80, but I guess I can just map it to another port and configure Nginx to point there when accessing redmine.domain.com. Is there a better way to handle this?
Thanks!!

Right; you need to change the port (or the IP address, if your machine has multiple IP addresses), either for your Nginx load balancer, or for Shipyard's.
You can customize Shipyard by altering the run.sh file and rebuilding its container.
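A minimal sketch of the remapping idea (the image name, hostname and ports below are placeholders, not the exact Shipyard setup): publish the load balancer's container port 80 on a spare host port, keep your host Nginx on port 80, and proxy a hostname to the remapped port. The same pattern works for the Redmine container.
    # publish the container's port 80 on host port 8080 instead (placeholder image/port)
    docker run -d --name shipyard_lb -p 8080:80 shipyard/lb

    # host Nginx: route a hostname to the remapped port
    server {
        listen 80;
        server_name shipyard.domain.com;
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }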

Related

How to access the application from other device in local network

In the project I am working on, there is an application that runs across many Docker containers. To access one of the containers I need to add the following entry to the /etc/hosts file:
127.0.0.1 my.domain.com
The app is then available at http://my.domain.com on my computer.
Unfortunately, this is a large, complicated application and I cannot change its configuration to add a port (otherwise I would just use 192.168.X.X:PORT from the other device). How can I access the application from another device on the local network (over WiFi or otherwise)? I tried using localtunnel and ngrok, but they are too slow and don't work well in this case.
Maybe someone knows another way?
If your server is running on IP 192.168.X.X on your local network, adding the line:
192.168.X.X my.domain.com
to the /etc/hosts file of the second device on your network should do the job.
Another solution is to run a proxy server on the same machine as your server and send all requests to it. The proxy server listens on another port but forwards every request to my.domain.com on the original port; this works because it uses the same /etc/hosts.
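A minimal Nginx sketch of that proxy idea (port 8080 is just an example; my.domain.com resolves to 127.0.0.1 via the server's own /etc/hosts):
    server {
        listen 8080;                             # other devices hit 192.168.X.X:8080
        location / {
            proxy_pass http://my.domain.com;         # resolved through this machine's /etc/hosts
            proxy_set_header Host my.domain.com;     # keep the Host header the app expects
        }
    }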
Try using Nginx as a web-server proxy; the free version offers the feature you want.
Add a reverse proxy and host your app at my.domain.com,
or
host your app on port 80, i.e. the default port.

NGINX: how to manage different visibility (LAN vs Internet)

I've set up a Raspberry Pi server with OpenMediaVault on board, and I'm using Docker to set up multiple services such as:
pihole
plex
nextcloud
and much more
I would like to expose some of them on the internet, while others only on the LAN.
For the internet I will use SSL from Let's Encrypt, while for the LAN I can use a self-signed certificate.
Right now I'm thinking of creating multiple domains on two ports, one dedicated to the internet and the other to the LAN, but... are there better alternatives (also from a security standpoint)?
P.S.: right now I'm not considering a VPN as an alternative.
For the people who will read this, I solved it this way using only one Nginx instance (on the linuxserver SWAG image):
Created another server block (port 8443)
Created a self-signed certificate for the new server block
Opened and forwarded port 443 on the router, so that only what I want to expose publicly is reachable, using the proxy-conf files
Did the same for the 8443 server block, which is not exposed (no port forwarding), so it stays reachable only from the LAN
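A rough sketch of what that split can look like (server names, certificate paths and upstream ports are placeholders, not the exact SWAG layout):
    # public services: port 443 is forwarded on the router, Let's Encrypt certificate
    server {
        listen 443 ssl;
        server_name nextcloud.example.com;
        ssl_certificate     /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
        location / {
            proxy_pass http://nextcloud:80;
        }
    }

    # LAN-only services: port 8443 is NOT forwarded on the router, self-signed certificate
    server {
        listen 8443 ssl;
        server_name pihole.lan;
        ssl_certificate     /config/keys/selfsigned.crt;
        ssl_certificate_key /config/keys/selfsigned.key;
        location / {
            proxy_pass http://pihole:80;
        }
    }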

localhost within docker user defined network?

I've started two Docker containers on a user-defined Docker network. It appears that in order to have one connect to the exposed port of the other, I need to address the other container by its container name, as if it were a host name, relying on Docker's embedded DNS feature as per one of the options here. Addressing localhost does not seem to work.
Using a user-defined network (which I gather is now the recommended way to communicate between containers), is it really the case that localhost will not work?
Or is there an alternative, standard way to make Docker treat the containers on the network as if they were on a single host, so that localhost resolves in the user-defined network as it would on a regular non-virtualized host?
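For reference, the name-based addressing described above looks like this (network, container and image names are just examples):
    docker network create appnet
    docker run -d --name web --network appnet nginx
    docker run --rm --network appnet busybox wget -qO- http://web        # works: "web" resolves via Docker's embedded DNS
    docker run --rm --network appnet busybox wget -qO- http://localhost  # fails: localhost is the busybox container itself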

Create a gateway or NAT instance

I have an ELB (in EC2 Classic) running, and one of my clients wants to hardcode an IP in his firewall rule to access our site.
I know that an ELB doesn't provide a static IP, but is there a way to set up an instance (just for them) that they could hit, to be used as a gateway to our API?
(I was thinking of using HAProxy on OpsWorks, but it points directly to my instances, and I need something that points to my ELB because SSL termination happens at that level.)
Any recommendation would be very helpful.
I assume you are running one or more instances behind your ELB.
You should be able to assign an Elastic IP to one of those instances. Note that in EC2 Classic, an EIP has to be reattached after you stop and start the instance.
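A sketch of that with the AWS CLI (the instance ID and address are placeholders):
    # allocate an Elastic IP in the EC2-Classic domain and attach it to an instance
    aws ec2 allocate-address --domain standard
    aws ec2 associate-address --instance-id i-0123456789abcdef0 --public-ip 203.0.113.10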

How to hide Docker containers behind a single hostname

I'm pretty new to Docker. I started out approaching it with a VM mindset, but I'm realizing that it uses a whole different paradigm from VMs, or even traditional LXC containers.
The biggest challenge has been with understanding how networking works. I'm trying to use Docker to run multiple services on a machine that require some of the same ports, to avoid port conflicts.
I want to access all of them using the FQDN of the host machine, without having to worry about adding the container FQDNs to DNS. I'm forwarding the relevant container ports to unused host ports.
The problem is that, when I try to access the services from my browser, it's redirected to the FQDN of the container, which it can't resolve. The result is a "Server not found" error.
Is there a way to hide all the containers behind the host's FQDN, without ever having to resolve the containers' FQDNs?
You can make each Docker container use a different outside port and then have a server container running something like Nginx or Apache that reverse-proxies the requests. I had to build something like this that takes everything in on one hostname and then passes the traffic through to the appropriate container and port.
The difficulty is that Docker containers get new addresses each time they're created. You can dynamically figure out their addresses when they start up and have the proxy container start last with those addresses. You can grab those addresses with 'docker inspect' and awk out the data you want, or use a library like docker-py to grab the relevant data.
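For example, a one-liner like this (the container name is a placeholder) pulls the current IP address using docker inspect's format template, without needing awk:
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_app_container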
