We have three Docker containers for our product, all based on an Ubuntu image.
In the first container, there is a Tomcat application that provides a web service using the SOAP protocol.
For this container, we created a user-defined network so that the other containers can reach it at a static IP, using the command below:
docker network create --driver=bridge --subnet=172.18.0.0/16 apinet
After that, we started our API container with the command below:
docker run --net apinet --ip 172.18.0.10 -d -p 8080:8080 api
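(The assigned address can be verified afterwards with docker network inspect apinet.)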
In the API, one of the services returns a huge amount of data, and we get a 504 Gateway Timeout error after 45 seconds.
We tried using the API without specifying the user-defined network and received the response data successfully.
However, we need this user-defined network in order to give the API a static IP.
Is there any way to extend the gateway timeout value for a user-defined network?
Thanks in advance.
I built a container with a plumber API inside. It runs on port 80 in the container, and I exposed port 80. I also mapped the ports,
az container create ... --ports 80:80
since Azure only supports symmetrical port mapping.
But I still cannot reach my API via the container's FQDN, and I do not know how to troubleshoot this. I have already confirmed with curl that the API is running fine inside the container.
Any suggestions?
Did you run your plumber server with host 0.0.0.0?
Take a look at the official plumber Docker image:
https://github.com/rstudio/plumber/blob/master/Dockerfile
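For reference, a minimal sketch of starting plumber bound to all interfaces, assuming your API definition lives in a file called plumber.R:
R -e 'plumber::plumb("plumber.R")$run(host="0.0.0.0", port=80)'
Binding to 0.0.0.0 rather than the default 127.0.0.1 is what makes the API reachable through the container's port mapping.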
I'm trying to gain access to a .NET Core app in a Docker container hosted on my VPS remotely, but I'm not able to access it even locally.
As you can see, the app is running in a container and listening on port 5000 (using the default Kestrel config). Why can't I access it?
Your output above shows that port 5000 is exposed, but you have not mapped anything on your local system to that port. This means that when you connect to localhost on port 5000, nothing forwards the request to the container.
Try running the container again with docker run -p 5000:5000 <image>. The output of docker ps should then show something like 0.0.0.0:5000->5000/tcp.
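For example, assuming the image is named mynetcoreapp (a placeholder):
docker run -d -p 5000:5000 mynetcoreapp
docker ps
curl http://localhost:5000
If it is still unreachable after the mapping is in place, note that Kestrel binds to localhost by default, which is not reachable through the port mapping from outside the container; binding to 0.0.0.0 (for example via the ASPNETCORE_URLS environment variable) is a common fix.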
I am pretty new to the Docker concept and know the basics of it.
I just wanted to know how we can build a multi-tenant application using Docker, where the containers use a locally hosted database with a different schema per tenant.
With nginx we can do reverse proxying, but how do we achieve it when every container is accessed via localhost:8080, and how do we add the upstream and server parts?
It would be very helpful if someone could explain this to me.
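For illustration, the upstream and server parts in the nginx configuration could look like this sketch (tenant names, hostnames, and ports are hypothetical):
upstream tenant_a {
    server 127.0.0.1:8080;
}
server {
    listen 80;
    server_name tenant-a.example.com;
    location / {
        proxy_pass http://tenant_a;
    }
}
Each tenant would get its own server block, for example one per subdomain, pointing at the port its container is published on.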
If I understand correctly, you want processes in containers to connect to resources on the host.
From your container's perspective in bridge mode (the default), the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host, using docker inspect <container name or ID>: the gateway is available under NetworkSettings.Networks.<network>.Gateway (see the one-liner after this list).
From inside the container, you can execute route | awk '/^default/ { print $2 }'
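A hedged sketch of both approaches (the container name is a placeholder):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' <container name or ID>
On newer images that ship ip instead of route, an equivalent from inside the container is:
ip route show default | awk '{ print $3 }'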
One other possibility is to use --net=host when running your container.
This will run your processes in the same network namespace as the processes on your host. Doing so makes your database accessible from the container on localhost.
Note that --net=host does not work on Docker for Mac/Windows.
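A minimal sketch, assuming a hypothetical image name and a PostgreSQL database listening on the host:
docker run --net=host -d mytenantapp
The app inside the container can then reach the database at localhost:5432, just as a process on the host would.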
I am using the Apache KafkaConsumer in my Scala app to talk to a Kafka server, where the Kafka and ZooKeeper services run in a Docker container on my VM (the Scala app also runs on this VM). I have set the KafkaConsumer's bootstrap.servers property to 127.0.0.1:9092.
The KafkaConsumer does log "Sending coordinator request for group queuemanager_testGroup to broker 127.0.0.1:9092". The problem appears to be that the Kafka client code sets the coordinator values based on the response it receives, which contains responseBody={error_code=0,coordinator={node_id=0,host=e7059f0f6580,port=9092}}; that is how it sets the host for future connections. Subsequently it complains that it is unable to resolve the address e7059f0f6580.
The address e7059f0f6580 is the container ID of that Docker container.
I have verified with telnet that my VM does not resolve this as a hostname.
What setting do I need to change so that the Kafka broker in my Docker container returns localhost/127.0.0.1 as the host in its response? Or is there something else that I am missing or doing incorrectly?
Update
advertised.host.name is deprecated, and --override should be avoided.
Add or edit advertised.listeners in the format
[PROTOCOL]://[EXTERNAL.HOST.NAME]:[PORT]
Also make sure that PORT is listed in the listeners property as well.
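For example, for a broker that clients on the VM reach at localhost, server.properties might contain the following (a sketch assuming a plaintext listener on port 9092):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092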
After investigating this problem for hours on end, we found that there is a way to set the hostname while starting up the Kafka server:
kafka-server-start.sh config/server.properties --override advertised.host.name=xxx (in my case, xxx was localhost)
I am using docker-compose to run an application on the Bluemix container service, with nginx as a proxy web server and load balancer.
I found an image that uses Docker events to automatically detect new web servers and add them to the nginx configuration dynamically:
https://github.com/jwilder/nginx-proxy
But for this to work, I think the container needs access to the Docker socket. I am not very familiar with Docker and I don't know exactly what the socket does, but essentially it is what lets the image listen for Docker events.
The run command from the image documentation is the following:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I have not been able to run this on the container service, as it does not find the /var/run/docker.sock file on the host.
The Bluemix documentation has a tutorial explaining how to do load balancing with nginx, but it requires a hard-coded list of web servers in the nginx configuration.
I was wondering how I could run the nginx-proxy image so that web instances are detected automatically?
The container service on Bluemix doesn't expose that Docker socket (not surprisingly, since it would be a security risk to the compute host). A couple of alternative ways to accomplish what you want:
Use something like amalgam8 or consul, which basically do just that.
Similar, but self-written: have a shared volume, and have each container add a file to it on startup saying what it is, plus its private IP. The nginx container has a watch on the shared volume and reloads when those files change. (More work than amalgam8 or consul, but perhaps more control; see the sketch below.)
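A rough sketch of that self-written approach (the paths and the config-generation script are hypothetical):
# in each web container's entrypoint: announce this instance
echo "$(hostname) $(hostname -i)" > /shared/servers/$(hostname)
# in the nginx container: regenerate the config and reload on changes
while inotifywait -e create,delete,modify /shared/servers; do
    generate-nginx-conf.sh /shared/servers > /etc/nginx/conf.d/upstreams.conf
    nginx -s reload
done
Here /shared is the volume mounted into every container, inotifywait comes from inotify-tools, and generate-nginx-conf.sh stands in for whatever turns the announced IPs into upstream entries.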