Plumber in a container on Azure: how to map ports - r

I built a container with a plumber API inside. It runs on port 80 in the container and I also exposed port 80. I also mapped the ports:
az container create ... --ports 80:80
since Azure only supports symmetrical port mapping.
But I still cannot reach my API from the container FQDN and I do not know how to troubleshoot. I have already confirmed with curl that the API is running fine within the container.
Any suggestions?

Did you run your plumber server with host 0.0.0.0?
Take a look at the official plumber Docker image:
https://github.com/rstudio/plumber/blob/master/Dockerfile
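A minimal sketch, assuming the API definition lives in a file called api.R (a hypothetical name). The key detail is binding plumber to 0.0.0.0 instead of the default 127.0.0.1, so connections from outside the container are accepted:
# run the API as the container's CMD/ENTRYPOINT, listening on all interfaces on port 80
Rscript -e 'pr <- plumber::plumb("api.R"); pr$run(host = "0.0.0.0", port = 80)'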

Related

Can't access netcore API in docker image

I'm trying to remotely access a netcore app in a Docker container hosted on my VPS, but I'm not able to access it even locally.
As you can see, the app is running in a container and listening on port 5000 (using the default Kestrel config). Why can't I access it?
What your output above shows is that port 5000 is open in the container, but you have not mapped anything on your local system to that port. This means that when you try to reach localhost on port 5000, the request will not be forwarded to the container.
Try running the container again with docker run -p 5000:5000. The output of docker ps should then show something like 0.0.0.0:5000->5000/tcp.
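A quick sketch of that fix, where mynetcoreapp is a hypothetical image name and Kestrel inside the container listens on 0.0.0.0:5000:
docker run -d --name mynetcoreapp -p 5000:5000 mynetcoreapp
docker ps --format '{{.Names}}\t{{.Ports}}'   # should now show 0.0.0.0:5000->5000/tcp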

How to build a multi-tenant application using docker

I am pretty new to Docker and only know the basics of it.
I just wanted to know how we can build a multi-tenant application using Docker, where the containers will use a locally hosted database with different schemas. With nginx we can do a reverse proxy, but how can we achieve this when every container is accessed via localhost:8080, and how do we add the upstream and server parts?
It would be very helpful if someone could explain this to me.
If I understand correctly, you want processes in containers to connect to resources on the host.
From your containers' perspective in bridge mode (the default), the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host, using docker inspect: docker inspect <container name or ID>. The gateway will be available under NetworkSettings.Networks.Gateway.
From the container, you can execute route | awk '/^default/ { print $2 }'
One other possibility is to use --net=host when running your container.
This will run your processes on the same network stack as the processes on your host. Doing so will make your database accessible from the container on localhost.
Note that using --net=host will not work on Docker for Mac/Windows.
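A combined sketch of these options, where app and myapp are hypothetical container and image names:
# from the host: read the gateway IP out of docker inspect
docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' app
# from inside the container: parse the default route (requires the route tool, e.g. net-tools)
docker exec app sh -c "route | awk '/^default/ { print \$2 }'"
# or skip the gateway entirely and share the host's network stack (Linux only)
docker run --net=host myapp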

Docker user defined network gateway timeout

We have 3 Docker containers for our product, based on an Ubuntu image.
In the first container, there is a Tomcat application that provides a web service using the SOAP protocol.
For this container, we created a network so it can communicate with the other containers via a static IP, using the command below:
docker network create --driver=bridge --subnet=172.18.0.0/16 apinet
After this, we created our API container with the command below:
docker run --net apinet --ip 172.18.0.10 -d -p 8080:8080 api
In the API, a service returns a huge amount of data and we get a 504 Gateway Timeout error after 45 seconds.
We tried using the API without specifying the user-defined network and received the response data successfully.
However, we need this user-defined network in order to give the API a static IP.
Is there any way to extend the gateway timeout value for a user-defined network?
Thanks in advance.

How can I use nginx as a dynamic load balancing proxy server on Bluemix?

I am using docker-compose to run an application on the Bluemix container service. I am using nginx as a proxy web server and load balancer.
I have found an image that uses Docker events to automatically detect new web servers and add them to the nginx configuration dynamically:
https://github.com/jwilder/nginx-proxy
But for this to work, I think the container needs to connect to the Docker socket. I am not very familiar with Docker and I don't know exactly what this does, but essentially it is necessary so that the image can listen to Docker events.
The run command from the image documentation is the following:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I have not been able to run this in the container service, as it does not find the /var/run/docker.sock file on the host.
The Bluemix documentation has a tutorial explaining how to do load balancing with nginx, but it requires a "hard-coded" list of web servers in the nginx configuration.
I was wondering how I could run the nginx-proxy image so that web instances are detected automatically?
The containers service on Bluemix doesn't expose that Docker socket (not surprising, since it would be a security risk to the compute host). A couple of alternative ways to accomplish what you want:
Use something like amalgam8 or consul, which is basically doing just that.
Do something similar but self-written: have a shared volume, and have each container on startup add a file to that shared volume saying what it is, plus its private IP. The nginx container has a watch on the shared volume and reloads when those files change. (More work than amalgam8 or consul, but perhaps more control; a rough sketch follows below.)
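A rough sketch of that self-written approach; the paths, registration format, and the use of inotifywait are assumptions for illustration, not anything Bluemix-specific:
# in each web container's entrypoint: register this instance in the shared volume
echo "$(hostname) $(hostname -i):8080" > /shared/backends/$(hostname)
# in the nginx container: rebuild the upstream list and reload whenever the volume changes
while inotifywait -e create,delete,modify /shared/backends; do
  { echo "upstream web {"
    awk '{print "  server " $2 ";"}' /shared/backends/*
    echo "}"; } > /etc/nginx/conf.d/upstream.conf
  nginx -s reload
done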

Can I expose a Docker port to another Docker only (and not the host)?

Is it possible to expose a port from one Docker container to another one (or several other ones), without exposing it to the host?
Yes, you can link containers together, and ports are then only exposed to those linked containers, without having to publish them to the host.
For example, if you have a Docker container running a PostgreSQL db:
$ docker run -d --name db training/postgres
You can link it to another container running your web application:
$ docker run -d --name web --link db training/webapp python app.py
The container running your web application will have a set of environment variables describing the ports exposed by the db container, for example:
DB_PORT_5432_TCP_PORT=5432
The environment variables are named based on the linked container's name; in this case the container name is db, so the environment variables start with DB.
You can find more details in the Docker documentation here:
https://docs.docker.com/v1.8/userguide/dockerlinks/
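To see what gets injected, you can list the variables from inside the linked container; the address values below are illustrative and will differ on your host:
docker exec web env | grep ^DB_
# typical legacy-link variables include:
# DB_PORT=tcp://172.17.0.5:5432
# DB_PORT_5432_TCP_ADDR=172.17.0.5
# DB_PORT_5432_TCP_PORT=5432
# DB_PORT_5432_TCP_PROTO=tcp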
I found an alternative to container linking: You can define custom "networks" and tell the container to use them using the --net option.
For example, if your containers are intended to be deployed together as a unit anyway, you can have them all share the same network stack (using --net container:oneOfThem). That way you don't even need to configure host names to have them find each other; they can just share the same 127.0.0.1, and nothing gets exposed to the outside.
Of course, that way they expose all their ports to each other, and you must be careful not to have conflicts (they cannot both listen on 8080, for example). If that is a concern, you can still use --net, just not to share the same network stack, but to set up a more complex overlay network.
Finally, the --net option can also be used to have a container run directly on the host's network.
Very flexible tool.
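A minimal sketch of both variants, with assumed names (appnet, db, web, sidecar) and placeholder images:
# user-defined network: containers reach each other by name; nothing is published to the host unless you ask
docker network create appnet
docker run -d --name db --net appnet postgres
docker run -d --name web --net appnet -p 80:80 mywebapp   # web reaches db at db:5432; only port 80 is published
# or share one network stack, so both containers see the same 127.0.0.1
docker run -d --name sidecar --net container:db someimage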
