Managing and Utilizing Multiple Docker Containers (Microservices) on a Single Server - nginx

I have a GCE (Google Compute Engine) server running the Nginx/Apache web server listening on port 80, which serves the website. At the same time I have multiple microservices running on the same server as Docker containers. Each container serves a website at its internal IP address, and I have also bound each one to localhost:PORT. I don't want to bind the ports to the public IP address, since that would publicly expose the microservices to the outside world.
Now the problem is, I have to embed the pages served by the containers into the website running at port 80 of the web server. Since the embed code will be executed by the browser, I cannot use either the local IP (172.17.0.x) or localhost:PORT in the Python/HTML code.
Now how do I embed the pages of the microservices running locally inside the containers into the website served to the users?
For Example:
My Server's Public IP: 104.145.178.114
The website is served from: 104.145.178.114:80
Inside the same server we have multiple microservices running on local IPs like 172.17.0.1, 172.17.0.2, and so on. Each container has a server running inside it which serves pages at 172.17.0.1:8080/test.html, and similarly for the other containers. Now I need to embed this page test.html into another web page served by the Nginx/Apache web server at 104.145.178.114, without exposing the internal/local IP and port to the public.
I would like to hear suggestions and alternative solutions for this problem.

I'm assuming Nginx has access to all internal Docker IPs (microservices). Unless I'm missing something, proxy_pass (http://nginx.org/en/docs/http/ngx_http_proxy_module.html) should work for you. You could dedicate a certain (externally available) URL pattern to proxy to your microservice container without exposing the microservice port to the world.
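For example, a minimal sketch of such an nginx server block; the /service1/ path, the upstream address 127.0.0.1:8080, and the document root are assumptions for illustration:

server {
    listen 80;
    server_name 104.145.178.114;

    # Main website served directly by nginx
    root /var/www/html;

    # Everything under /service1/ is proxied to the microservice bound to
    # localhost:8080; the trailing slash strips the /service1/ prefix
    location /service1/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The microservice then appears to the browser as http://104.145.178.114/service1/test.html, so it can be embedded in the main site without publishing the container's port on the public interface.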

Related

.NET 7 Blazor on Docker - Host Multiple Containers on Single Server

I am new to hosting, especially with Blazor and Docker. I have a Debian 11 server which currently hosts a Blazor Server project running in a Docker container. I do not have any web hosting service on the server itself; my project is currently served by the container's built-in Kestrel web host for ASP.NET. However, I am now trying to host a second website on the server.
From what I have read, DNS records only point to an IP address, not a port, meaning I can't simply use a different port to host my other site (and have it indexed by search engines with SSL configured). I am trying to figure out how to host both Blazor containers on my server, with my DNS pointing both domains to the same IP address, but I am not sure how to assign each domain to its container. For example, my DNS looks like this:
A record / website1.com -> 10.0.0.1
A record / website2.com -> 10.0.0.1
How should I handle this configuration? Do I need to run a web server directly on the host (such as Apache), or is there a way to map the DNS to my internal Docker IP addresses? Or is there another way to solve this through my DNS altogether? Thank you.

Do I need a service for exposing every app running in a pod?

I'm planning to build a website to host static files. Users will upload their files, and I deploy a bunch of deployments with nginx images for them onto a Kubernetes node. My main goal is that at some point users will serve their apps from a subdomain like my-blog-app.mysite.com. After some time users can use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress, which looks like what I need, but I don't think I understand the concept.
My question is, for example, if I have 500 nginx pods running (each one a different website), do I need a service for every pod on that node (in this case, 500 services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances, based on the Host header, which perfectly matches your use-case.
In any case, yes, with your current architecture you need a Service for each pod. Have you considered a different approach, like having a general listener (shared nginx instances) that serves the correct content based on authorization or something similar?
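A rough sketch of such a name-based Ingress; the hostnames and Service names are hypothetical placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites
spec:
  rules:
  - host: my-blog-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-blog-app-svc   # Service in front of that user's nginx pod
            port:
              number: 80
  - host: another-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: another-app-svc
            port:
              number: 80

A single Ingress controller (and one external load balancer) then handles all the hosts; the per-pod Services can stay as plain ClusterIP Services instead of each needing its own LoadBalancer.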

How to point iframe to a url within a docker network?

I have two docker containers:
web, contains nginx with some static html
shiny, contains an R Shiny web application
When run, the shiny web-application is accessible through localhost:3838 on the host machine, while the static html site is accessed through localhost:80.
My goal is to make a multi-container application through docker-compose, where
users access the static html, and the static html occasionally fetches data visualisations from shiny via <iframe src="url-to-shiny-application"></iframe>
I can't figure out how to point the iframe to a URL that originates within the docker-compose network. Most people tend to host their Shiny apps at URLs that are accessible through the Internet (i.e. shinyapps.io), but as a learning project I wanted to figure out a way to host a containerized Shiny server alongside nginx.
The desired result would be the ability to simply write <iframe src="shiny-container/app_x"></iframe> in the static html and have it find app_x on the Shiny server through the Docker network.
Is this something that can be sorted out through nginx configuration?
The answer is in your question already:
the shiny web-application is accessible through localhost:3838 on the host machine
So start the URL with http://localhost:3838. If you need this to be accessible from other hosts, or you expect that the published port number might ever change, you'll need to pass in a configuration option saying what the external URL actually is, or stand up a proxy in front of the two other containers that can do path-based routing.
Ultimately, any URL you put in an <iframe src="...">, <a href="...">, and so on gets interpreted by the browser, which does not run in Docker. That means that references in HTML content to things that happen to be running in Docker containers always need to use the host's hostname and the published port number; they can never use the Docker-internal hostnames.
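If you go the proxy route, a minimal sketch of the nginx config inside the web container could look like this; the /shiny/ prefix is an assumption, and the hostname shiny is the compose service name from the setup described above:

server {
    listen 80;

    # Static HTML served by this container
    root /usr/share/nginx/html;

    # Path-based routing: anything under /shiny/ is forwarded to the shiny
    # container over the compose network; the browser only ever sees /shiny/...
    location /shiny/ {
        proxy_pass http://shiny:3838/;
        proxy_set_header Host $host;
        # Shiny uses WebSockets, so upgrade the connection
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

The iframe can then use a relative URL such as <iframe src="/shiny/app_x"></iframe>, which keeps working no matter what hostname or published port the site is reached on.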
Did you try this: https://docs.docker.com/compose/networking/?
Sample docker-compose file given:
version: "3"
services:
web:
image: web
ports:
- "8000:8000"
shiny:
image: shiny
ports:
- "3838:3838"
Each container can now look up the hostname web or shiny and get back the appropriate container’s IP address.
and use <iframe src="http://web:8000"/>, or port 80, or 8080, or whatever you configure in the Dockerfile.
You might be interested in "Server Side Includes" to embed the web UIs of your microservices in your web page:
https://developer.okta.com/blog/2019/08/08/micro-frontends-for-microservices#micro-frontends-to-the-rescue%EF%B8%8F%EF%B8%8F
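As a rough illustration of that idea with nginx (the /fragments/service1/ path and the localhost:8080 upstream are hypothetical), you enable the SSI module and proxy a fragment location to the microservice:

location / {
    root /var/www/html;
    ssi on;    # process <!--#include --> directives in the served HTML
}

location /fragments/service1/ {
    proxy_pass http://127.0.0.1:8080/;   # microservice bound to localhost
}

The page itself then embeds the fragment with a directive like <!--#include virtual="/fragments/service1/test.html" -->, and nginx stitches the microservice's output into the response on the server side.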

How to send requests to a web server running inside a docker container inside an AWS EC2 machine from the outside world?

I have a Python Flask web server running inside a Docker container on an AWS EC2 Ubuntu machine. The container is running with the default network settings (docker0). Within the host EC2 instance, I can send requests (GET, POST) to this web server using the container's Docker IP (172.x.x.x) and the forwarded ports (3000:3000) of the host.
url: http://172.x.x.x:3000/<api address>
How can I send requests (GET, POST) to this web server from the outside world? For example from another web server running in another EC2 machine. Or even from the web using my web browser?
Do I need to get a public IP Address for my docker host?
Is there is another way to interact with such web server within another web server running in another EC2?
If you have a solution please explain with as many details as you can for me to understand it.
The only way that I can think of is to write a web server on the main EC2 instance that listens for the requests and forwards them to the appropriate Docker container web servers?! But that would be a lot of redundant code, and I would rather just send requests to the web server running in the container directly!
The IP address of the Docker container is not public. Your EC2 instance usually has a public IP address, though. You need something listening on a port of your EC2 instance that passes traffic through to your Docker/Flask server. Then you would be able to call it from outside using ec2-instance-ip:port.
It's still not a long-term solution, as EC2 public IPs change when the instances are stopped. You'd better use a load balancer or an Elastic IP if you want the IP/port to be reliable.
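In practice that usually just means publishing the container port on the host and opening it in the instance's security group. A minimal sketch, assuming the Flask app listens on 3000 inside the container (the image name is a placeholder):

# Publish container port 3000 on all host interfaces of the EC2 instance
docker run -d -p 3000:3000 my-flask-image

# Then allow inbound TCP 3000 in the instance's security group and call
#   http://<ec2-public-ip>:3000/<api address>
# from the other EC2 machine or a browser.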
That's right, it makes for a lot of redundant code and an extra failure point. That's why it's better to use Amazon's managed Docker service (https://aws.amazon.com/ecs/). This way you just launch an EC2 instance which runs Docker and has a public IP address. It still allows you to SSH into your EC2 instance and change stuff.

Is it possible to create a subdomain served by Tomcat based on a domain served by Apache on AWS EC2?

I'm new to web development. I'm planning to move my WordPress site to AWS; say it's "example.com". I'm also planning to create a subdomain "xxx.example.com" using Spring Boot. I'm wondering, is that possible?
Yes, it's possible, but remember that only one process can listen on a given port (80 for HTTP, 443 for HTTPS) on a machine.
Two options:
Have the subdomain on a different machine with a different IP address. So you can have WordPress on one machine and your Spring application on another.
Host both on the same machine and have one process (Apache, or a load balancer) listen to the traffic for both and route it appropriately. This is achieved with the ProxyPass directive in Apache (see the sketch after this list). Having a web server in front of an application server is often recommended anyway, as it can be better for security and performance reasons.
There is a third option, which is to use a non-standard port (e.g. 8443), but that just makes your URL look messy (https://xxx.subdomain.com:8443). That might be fine if you just want to test for your own sake, but it's not great for production applications.
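For option 2, a minimal sketch of the Apache virtual host for the subdomain; it assumes the Spring Boot app listens on localhost:8080, and mod_proxy plus mod_proxy_http must be enabled (e.g. a2enmod proxy proxy_http on Debian/Ubuntu):

<VirtualHost *:80>
    ServerName xxx.example.com

    # Forward all requests for the subdomain to the Spring Boot app
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>

The existing WordPress site keeps its own <VirtualHost> for example.com, and Apache picks the right one based on the Host header.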
