I have two docker containers:
web, contains nginx with some static html
shiny, contains an R Shiny web application
When run, the shiny web-application is accessible through localhost:3838 on the host machine, while the static html site is accessed through localhost:80.
My goal is to make a multi-container application through docker-compose, where
users access the static html, and the static html occasionally fetches data visualisations from shiny via <iframe src="url-to-shiny-application"></iframe>
I can't figure out how to point the iframe to a url that originates within the docker-compose network. Most people tend to host their Shiny apps on urls that are accessible through the Internet (i.e. shinyapps.io); as a learning project, I wanted to figure out a way to host a containerized Shiny server alongside nginx.
The desired result would be the ability to simply write <iframe src="shiny-container/app_x"></iframe> in the static html, and it would find app_x on the shiny server through the docker network.
Is this something that can be sorted out through nginx configuration?
The answer is in your question already:
the shiny web-application is accessible through localhost:3838 on the host machine
So start the URL with http://localhost:3838. If you need this to be accessible from other hosts, or you expect that the published port number might ever change, you'll need to pass in a configuration option saying what the external URL actually is, or stand up a proxy in front of the two other containers that can do path-based routing (see the sketch below).
Ultimately, any URL you put in an <iframe src="...">, <a href="...">, and so on is interpreted by the browser, which does not run inside Docker. That means HTML references to things that happen to be running in Docker containers always need to use the host's hostname and the published port number; they can never use the Docker-internal hostnames.
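For the proxy approach, here is a minimal sketch of an nginx configuration for a third, front-facing container. The service names web and shiny come from the question; the /shiny/ path prefix and the dedicated proxy service are my own assumptions:

    # default.conf for a hypothetical "proxy" service in the Compose file.
    # Only this container publishes a port; "web" and "shiny" are resolved
    # through Docker's internal DNS using their Compose service names.
    server {
        listen 80;

        # Everything else goes to the static site.
        location / {
            proxy_pass http://web:80;
        }

        # /shiny/app_x is forwarded as /app_x to the Shiny server.
        location /shiny/ {
            proxy_pass http://shiny:3838/;
        }
    }

With this in place, the static HTML can use a relative URL such as <iframe src="/shiny/app_x"></iframe>, which the browser resolves against whatever hostname and port it used to load the page, so no Docker-internal names ever appear in the HTML.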
Did you try this: https://docs.docker.com/compose/networking/?
Sample docker-compose file given:
version: "3"
services:
  web:
    image: web
    ports:
      - "8000:8000"
  shiny:
    image: shiny
    ports:
      - "3838:3838"
Each container can now look up the hostname web or shiny and get back the appropriate container’s IP address.
and use <iframe src="http://web:8000"/>, or port 80, or 8080, or whatever port you configure in the Docker file.
You might be interested in "Server Side Includes", to embed web UIs of your microservices in your webpage:
https://developer.okta.com/blog/2019/08/08/micro-frontends-for-microservices#micro-frontends-to-the-rescue%EF%B8%8F%EF%B8%8F
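As a rough sketch of that idea (the /fragments/ path and the shiny service name are assumptions, not anything from the linked post), nginx can assemble the page server-side like this:

    # Enable Server Side Includes for the static site.
    location / {
        ssi on;
        root /usr/share/nginx/html;
    }

    # Proxy the included fragment path to the microservice.
    location /fragments/ {
        proxy_pass http://shiny:3838/;
    }

The static HTML then pulls a fragment in with <!--# include virtual="/fragments/app_x" -->, so the include is resolved by nginx on the server and the browser never sees any internal hostname.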
I'm planning to build a website to host static files. Users will upload their files, and I will deploy a bunch of Deployments with nginx images for them on a Kubernetes node. My main goal is that at some point, users will deploy their apps to a subdomain like my-blog-app.mysite.com. After some time, users will be able to use custom domains.
I understand that when I deploy an nginx image on a pod, I have to create a Service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress, looks like what I need but I don't think I understand that concept.
My question is, for example if I have 500 nginx pods running (each is a different website), do I need a service for every pod in that node (in this case 500 services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances based on the Host header, which perfectly matches your use-case (see the sketch below).
In any case, yes: with your current architecture you need a Service for each pod. Have you considered a different approach, like having a general listener (a pool of nginx instances) and serving the correct content based on authorization or something similar?
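As a sketch, a name-based Ingress for two of those sites might look like this (the Service names blog-app-svc and shop-app-svc are hypothetical placeholders for two of the per-pod Services):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites
spec:
  rules:
  - host: my-blog-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog-app-svc
            port:
              number: 80
  - host: my-shop-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-app-svc
            port:
              number: 80

A single Ingress (and a single load balancer) can then fan out to all 500 Services based on the Host header, so you need 500 cheap ClusterIP Services but not 500 LoadBalancers.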
I'm very new to Docker (in fact, I've only been using it for one day), so maybe I'm misunderstanding some basic concept, but I couldn't find a solution myself.
Here's the problem. I have an ASP.NET Core server application on a Windows machine. It uses MongoDB as a datastore. Everything works fine. I decided to pack all this stuff into Docker containers and put it on a Linux (Ubuntu Server 18.04) server. I've packed mongo into a container, so now its published IP:PORT value is 192.168.99.100:32772.
I've hardcoded this address into my ASP.NET server and also packed it into a container (IP 192.168.99.100:5000).
Now if I run my server and mongo containers together on my Windows machine, they work just fine. The server connects to a container with the database and can do whatever it needs.
But when I transfer both containers to Ubuntu and run them, the server cannot connect to the database because this IP address is not available there. I've been googling for a few hours to find a solution and I'm still struggling with it.
What is the correct way to handle these IP addresses? Is it possible to set an IP that will be the same for a container regardless of environment?
I recommend using docker-compose for the purpose you described above.
With docker-compose, you can access the database via a service name instead of an IP (which potentially is not available on another system). Here are two links to get started:
https://docs.docker.com/compose/gettingstarted/
https://docs.docker.com/compose/compose-file/
Updated answer (10.11.2019)
Here is a concrete example for your ASP.NET app:
docker-compose.yaml
version: "3"
services:
  frontend:
    image: fqdn/aspnet:tag
    ports:
      - 8080:80
    links:
      - database
  database:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: "mydatabase"
      MONGO_INITDB_ROOT_USERNAME: "root"
      MONGO_INITDB_ROOT_PASSWORD: "example"
    volumes:
      - myMongoVolume:/data/db
volumes:
  myMongoVolume: {}
From the frontend container, you can reach the mongo db container via the service name "database" (instead of an IP). Due to the link definition in the frontend service, the frontend service will start after the linked service (database).
Through volume definition, the mongo database will be stored in a volume that persists independently from the container lifecycle.
Additionally, I assume you want to reach the ASP.NET application via the host IP. I do not know which port your application exposes, so I assume the default port 80. Via the ports section of the frontend service, we map container port 80 to port 8080 on the host. You can then open your browser and enter your host IP and port 8080 (e.g. 127.0.0.1:8080 for localhost) to reach your application.
With docker-compose installed, you can start your app, which consists of your frontend and database service via
docker-compose up
Available command options for docker-compose can be found here
https://docs.docker.com/compose/reference/overview/
Install instructions for docker-compose
https://docs.docker.com/compose/install/
Updated answer (10.11.2019, v2)
From the comment section
Keep in mind that you need to connect via the service name (e.g. database) and the correct port. For MongoDB that port is 27017, which would translate to database:27017 in your frontend config.
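For instance, the connection string in the ASP.NET app's configuration would look roughly like this (a sketch based on the credentials in the compose file above; authSource=admin is needed because the root user is created in the admin database):

mongodb://root:example@database:27017/mydatabase?authSource=admin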
Q: will mongo also be available from the outside in this case?
A: No. Since the service does not contain any port definition, the database itself will not be directly reachable from outside the Compose network. From a security standpoint, this is preferable.
Q: could you explain this?
volumes:
  myMongoVolume: {}
A: In the service definition for the database service, we specified a volume to store the database itself, to make the data independent of the container lifecycle. However, merely referencing a volume in the service section does not create it. Through the definition in the top-level volumes section, we create the volume myMongoVolume with the default settings (indicated by {}). If you would like to customize your volume, you can do so in this volumes section of your docker-compose.yaml. More information regarding volumes can be found here:
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
for example, if you would like to use a specific storage driver for your volume or use external storage.
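As a sketch, an NFS-backed volume could be declared like this (the server address and export path are hypothetical):

volumes:
  myMongoVolume:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.5,rw"
      device: ":/exports/mongo"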
I'm running Docker Compose (v2) and have a node service (website) and a Python-based API deployed with nginx sitting in front of them.
One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers, which Docker makes available. However, the problem is that I want the nginx upstream config to be dynamic: if I add another Docker container, it should simply append the address of that container to the list of IPs in the upstream block.
My idea was to create a script which will automatically append the upstream servers using env variables when the containers change but I'm unsure where to start and can't find a good example.
There are a couple of ways to achieve this. What you are referring to is usually called service discovery, and it comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
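A minimal Compose sketch of that setup (the website image name and virtual host are assumptions; nginx-proxy watches the Docker socket and regenerates its upstream configuration as containers come and go):

version: "2"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  website:
    image: my-node-site   # hypothetical image name
    environment:
      - VIRTUAL_HOST=site.example.test

Scaling with docker-compose scale website=3 then adds the new containers to the generated upstream block automatically, since they all share the same VIRTUAL_HOST.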
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
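For the registry approach, the proxy configuration is typically generated from a template. A confd template for the upstream block might look something like this (the /services/website key prefix is an assumption about how registrator writes its entries):

# nginx.conf.tmpl (confd Go template syntax)
upstream website {
{{range getvs "/services/website/*"}}
    server {{.}};
{{end}}
}

confd re-renders the file and reloads nginx whenever the keys under that prefix change, so new containers join the upstream pool as soon as they are registered.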
I am developing an isomorphic app. The key point here is that the JS code on the frontend server and on the client is the same.
Suppose we have the following services:
frontend
backend
comments
database
Of course each of these services lives in its own Docker container.
And there is a need to access backend and comments services from client side (as api.app.com and comments.app.com respectively).
It seems pretty reasonable to use nginx as reverse proxy here. So these are new containers to be added:
nginx
consul
consul-template
registrator
And the last problem is resolving *.app.com to nginx. How can I achieve this without buying the app.com domain? Of course, a solution is to add DNS to each container and to the dev host. But what Docker container should I use as a DNS server?
Or maybe there is better architecture?
I have a GCE (Google Compute Engine) server running with an Nginx/Apache web server listening on port 80, which serves the website. At the same time, I have multiple microservices running on the same server as Docker containers. Each container serves a website at its own local IP address, and I have also bound each one to localhost:PORT. I don't want to bind the ports to the public IP address, since that would publicly expose the microservices to the outside world.
Now the problem is, I have to embed the website pages served by the containers into the website which is running at port 80 of the web server. Since the embed code will be executed by the browser, I cannot use either the local IP (172.17.0.x) or localhost:PORT in the Python/HTML code.
Now how do I embed the pages of microservices running locally inside the containers to the website serving the users?
For Example:
My Server's Public IP: 104.145.178.114
The website is served from: 104.145.178.114:80
Inside the same server we have multiple microservices running on local IPs like 172.17.0.1, 172.17.0.2 and so on. Each container has a server running inside it which serves pages at 172.17.0.1:8080/test.html, and similarly for the other containers. Now I need to embed this page test.html into another web page which is served by the Nginx/Apache webserver at 104.145.178.114, without exposing the internal/local IP and port to the public.
I would like to hear suggestions and alternative solutions for this problem
I'm assuming nginx has access to all the internal Docker IPs (microservices). Unless I'm missing something, proxy_pass (http://nginx.org/en/docs/http/ngx_http_proxy_module.html) should work for you. You could pick an externally visible URL pattern and proxy it to your microservice container without exposing the microservice port to the world.
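Something like this sketch, using the addresses from the question (the /service1/ path prefix is an assumption):

    server {
        listen 80;

        # The main website keeps being served here as before ...

        # Embeddable microservice, reachable only through nginx.
        location /service1/ {
            proxy_pass http://172.17.0.1:8080/;
            proxy_set_header Host $host;
        }
    }

The page can then be embedded as <iframe src="/service1/test.html"></iframe>; the browser only ever talks to 104.145.178.114:80, and nginx forwards the request to the container over the local bridge network.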